[jira] [Work logged] (HIVE-26221) Add histogram-based column statistics
[ https://issues.apache.org/jira/browse/HIVE-26221?focusedWorklogId=810254&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810254 ]

ASF GitHub Bot logged work on HIVE-26221:
Author: ASF GitHub Bot
Created on: 20/Sep/22 05:45
Worklog Time Spent: 10m
Work Description: asolimando commented on PR #3137:
URL: https://github.com/apache/hive/pull/3137#issuecomment-1251868600
Please keep the PR open

Issue Time Tracking
---
Worklog Id: (was: 810254)
Time Spent: 20m (was: 10m)

> Add histogram-based column statistics
>
>                 Key: HIVE-26221
>                 URL: https://issues.apache.org/jira/browse/HIVE-26221
>             Project: Hive
>          Issue Type: Improvement
>          Components: CBO, Metastore, Statistics
>    Affects Versions: 4.0.0-alpha-2
>            Reporter: Alessandro Solimando
>            Assignee: Alessandro Solimando
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Hive does not support histogram statistics, which are particularly useful for
> skewed data (which is very common in practice) and for range predicates.
> Hive's current selectivity estimation for range predicates is based on a
> hard-coded value of 1/3 (see
> [FilterSelectivityEstimator.java#L138-L144|https://github.com/apache/hive/blob/56c336268ea8c281d23c22d89271af37cb7e2572/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/FilterSelectivityEstimator.java#L138-L144]).
> The current proposal aims at integrating histograms as an additional column
> statistic, stored in the Hive metastore at the table (or partition) level.
> The main requirements for histogram integration are the following:
> * efficiency: the approach must scale and support billions of rows
> * merge-ability: partition-level histograms have to be merged to form
>   table-level histograms
> * an explicit and configurable trade-off between memory footprint and accuracy
> Hive already integrates a [KLL data
> sketches|https://datasketches.apache.org/docs/KLL/KLLSketch.html] UDAF.
> Data sketches are small, stateful programs that process massive data streams
> and can provide approximate answers, with mathematical guarantees, to
> computationally difficult queries orders of magnitude faster than
> traditional, exact methods.
> We propose to use KLL, and more specifically its cumulative distribution
> function (CDF), as the underlying data structure for the histogram statistics.
> The current proposal targets numeric data types (the float, integer and numeric
> families) and temporal data types (date and timestamp).

-- This message was sent by Atlassian Jira (v8.20.10#820010)
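The CDF-based estimation described in HIVE-26221 above replaces the hard-coded 1/3 with CDF(hi) - CDF(lo) for a range predicate `lo < x <= hi`. A minimal self-contained sketch of that idea, where an exact empirical CDF over a sorted sample stands in for the approximate KLL sketch (the class and method names here are illustrative, not Hive's):

```java
import java.util.Arrays;

// Illustrative sketch: estimate range-predicate selectivity from a CDF,
// as the HIVE-26221 proposal suggests doing with a KLL sketch. An exact
// sorted-array CDF stands in for the approximate KLL data structure.
public class CdfSelectivity {
    private final double[] sorted; // column sample, sorted ascending

    public CdfSelectivity(double[] values) {
        sorted = values.clone();
        Arrays.sort(sorted);
    }

    // Empirical CDF: fraction of values <= v.
    public double cdf(double v) {
        int idx = Arrays.binarySearch(sorted, v);
        if (idx < 0) {
            idx = -idx - 1; // insertion point == number of values < v (v absent)
        } else {
            while (idx < sorted.length && sorted[idx] <= v) idx++; // skip duplicates of v
        }
        return (double) idx / sorted.length;
    }

    // Selectivity of the predicate (lo < x <= hi).
    public double rangeSelectivity(double lo, double hi) {
        return cdf(hi) - cdf(lo);
    }

    public static void main(String[] args) {
        // Skewed column: values cluster near zero with a long tail.
        CdfSelectivity stats = new CdfSelectivity(new double[]{0, 0, 0, 0, 1, 1, 1, 2, 5, 100});
        // Histogram-aware estimates instead of a hard-coded 1/3:
        System.out.println(Math.round(stats.rangeSelectivity(0, 2) * 100) + "%");   // 0 < x <= 2
        System.out.println(Math.round(stats.rangeSelectivity(2, 100) * 100) + "%"); // 2 < x <= 100
    }
}
```

With a real KLL sketch the analogous quantities come from the sketch's CDF query over split points, and, per the merge-ability requirement above, partition-level sketches can additionally be merged into a table-level one.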
[jira] [Work logged] (HIVE-26247) Filter out results 'show connectors' on HMS server-side
[ https://issues.apache.org/jira/browse/HIVE-26247?focusedWorklogId=810252&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810252 ]

ASF GitHub Bot logged work on HIVE-26247:
Author: ASF GitHub Bot
Created on: 20/Sep/22 05:23
Worklog Time Spent: 10m
Work Description: sonarcloud[bot] commented on PR #3545:
URL: https://github.com/apache/hive/pull/3545#issuecomment-1251856244
Kudos, SonarCloud Quality Gate passed! 2 Bugs, 0 Vulnerabilities, 0 Security Hotspots, 17 Code Smells; no coverage or duplication information.

Issue Time Tracking
---
Worklog Id: (was: 810252)
Time Spent: 1h 40m (was: 1.5h)

> Filter out results 'show connectors' on HMS server-side
>
>                 Key: HIVE-26247
>                 URL: https://issues.apache.org/jira/browse/HIVE-26247
>             Project: Hive
>          Issue Type: Sub-task
>    Affects Versions: 4.0.0-alpha-1, 4.0.0-alpha-2
>            Reporter: zhangbutao
>            Assignee: zhangbutao
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 40m
>  Remaining Estimate: 0h

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-13353) SHOW COMPACTIONS should support filtering options
[ https://issues.apache.org/jira/browse/HIVE-13353?focusedWorklogId=810250&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810250 ]

ASF GitHub Bot logged work on HIVE-13353:
Author: ASF GitHub Bot
Created on: 20/Sep/22 05:11
Worklog Time Spent: 10m
Work Description: sonarcloud[bot] commented on PR #3608:
URL: https://github.com/apache/hive/pull/3608#issuecomment-1251848310
Kudos, SonarCloud Quality Gate passed! 2 Bugs, 0 Vulnerabilities, 0 Security Hotspots, 10 Code Smells; no coverage or duplication information.

Issue Time Tracking
---
Worklog Id: (was: 810250)
Time Spent: 20m (was: 10m)

> SHOW COMPACTIONS should support filtering options
>
>                 Key: HIVE-13353
>                 URL: https://issues.apache.org/jira/browse/HIVE-13353
>             Project: Hive
>          Issue Type: Improvement
>          Components: Transactions
>    Affects Versions: 1.3.0, 2.0.0
>            Reporter: Eugene Koifman
>            Assignee: KIRTI RUGE
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-13353.01.patch
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Since we now have historical information in SHOW COMPACTIONS, the output can
> easily become unwieldy (e.g. 1000 partitions with 3 lines of history each);
> this is a significant usability issue.
> We need to add the ability to filter by db/table/partition.
> It would perhaps also be useful to filter by status.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-26247) Filter out results 'show connectors' on HMS server-side
[ https://issues.apache.org/jira/browse/HIVE-26247?focusedWorklogId=810249&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810249 ]

ASF GitHub Bot logged work on HIVE-26247:
Author: ASF GitHub Bot
Created on: 20/Sep/22 04:44
Worklog Time Spent: 10m
Work Description: zhangbutao commented on code in PR #3545:
URL: https://github.com/apache/hive/pull/3545#discussion_r974878654

## ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/metastore/filtercontext/DataConnectorFilterContext.java:
## @@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hadoop.hive.ql.security.authorization.plugin.metastore.filtercontext;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.hadoop.hive.ql.security.authorization.plugin.HiveOperationType;
+import org.apache.hadoop.hive.ql.security.authorization.plugin.HivePrivilegeObject;
+import org.apache.hadoop.hive.ql.security.authorization.plugin.metastore.HiveMetaStoreAuthorizableEvent;
+import org.apache.hadoop.hive.ql.security.authorization.plugin.metastore.HiveMetaStoreAuthzInfo;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class DataConnectorFilterContext extends HiveMetaStoreAuthorizableEvent {
+
+  private static final Logger LOG = LoggerFactory.getLogger(DataConnectorFilterContext.class);
+
+  List<String> connectors = null;
+
+  public DataConnectorFilterContext(List<String> connectors) {
+    super(null);
+    this.connectors = connectors;
+    getAuthzContext();
+  }
+
+  @Override
+  public HiveMetaStoreAuthzInfo getAuthzContext() {
+    HiveMetaStoreAuthzInfo ret =
+        new HiveMetaStoreAuthzInfo(preEventContext, HiveOperationType.QUERY, getInputHObjs(), getOutputHObjs(), null);
+    return ret;
+  }
+
+  private List<HivePrivilegeObject> getInputHObjs() {
+    LOG.debug("==> DataConnectorFilterContext.getOutputHObjs()");
+
+    List<HivePrivilegeObject> ret = new ArrayList<>();
+    for (String connector : connectors) {
+      HivePrivilegeObject.HivePrivilegeObjectType type = HivePrivilegeObject.HivePrivilegeObjectType.DATACONNECTOR;
+      HivePrivilegeObject.HivePrivObjectActionType objectActionType =
+          HivePrivilegeObject.HivePrivObjectActionType.OTHER;
+      HivePrivilegeObject hivePrivilegeObject =
+          new HivePrivilegeObject(type, null, connector, null, null, objectActionType, null, null);

Review Comment: Good suggestion. But we lack an HMS API for getting full data connector objects; we can only get data connector names via `HMSHandler::get_dataconnectors()`, so we cannot set the owner name and owner type here:
https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java#L2033-L2034
If we want to do this, I think we can define a new HMS API, or a new API in RawStore, to get all data connector objects, like getting all table objects:
https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/RawStore.java#L599-L600
I think we can do this in a follow-up task. wdyt? @saihemanth-cloudera Thank you.

Issue Time Tracking
---
Worklog Id: (was: 810249)
Time Spent: 1.5h (was: 1h 20m)

> Filter out results 'show connectors' on HMS server-side
>
>                 Key: HIVE-26247
>                 URL: https://issues.apache.org/jira/browse/HIVE-26247
>             Project: Hive
>          Issue Type: Sub-task
>    Affects Versions: 4.0.0-alpha-1, 4.0.0-alpha-2
>            Reporter: zhangbutao
>            Assignee: zhangbutao
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1.5h
>  Remaining Estimate: 0h

-- This message was sent by Atlassian Jira (v8.20.10#820010)
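The follow-up API the review comment proposes (a metastore call returning full connector objects, mirroring how table objects are fetched, rather than names only) could look roughly like this. Every class and method name below is a hypothetical stand-in, not Hive's real `RawStore` or `DataConnector` type; the sketch only shows why owner-based filtering needs full objects:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the API gap discussed in the review above: the
// metastore currently exposes only connector *names*, so an authorization
// filter cannot see owner information. A full-object call would enable
// owner-based filtering. All names here are illustrative stand-ins.
public class ConnectorStoreSketch {

    public static final class DataConnector {
        public final String name;
        public final String ownerName; // the field the authorization filter needs
        public DataConnector(String name, String ownerName) {
            this.name = name;
            this.ownerName = ownerName;
        }
    }

    public interface ConnectorStore {
        List<String> getAllDataConnectorNames();       // analogue of today's names-only call
        List<DataConnector> getDataConnectorObjects(); // proposed full-object call
    }

    // Owner-based filtering, possible only once full objects are available.
    public static List<String> visibleTo(ConnectorStore store, String user) {
        List<String> visible = new ArrayList<>();
        for (DataConnector dc : store.getDataConnectorObjects()) {
            if (user.equals(dc.ownerName)) {
                visible.add(dc.name);
            }
        }
        return visible;
    }

    public static void main(String[] args) {
        // Tiny in-memory store standing in for the metastore backend.
        List<DataConnector> backing = Arrays.asList(
                new DataConnector("pg_sales", "alice"),
                new DataConnector("mysql_hr", "bob"));
        ConnectorStore store = new ConnectorStore() {
            public List<String> getAllDataConnectorNames() {
                List<String> names = new ArrayList<>();
                for (DataConnector dc : backing) names.add(dc.name);
                return names;
            }
            public List<DataConnector> getDataConnectorObjects() {
                return backing;
            }
        };
        System.out.println(visibleTo(store, "alice")); // prints "[pg_sales]"
    }
}
```

The design point matches the thread: with names alone, `visibleTo` cannot be written at all, which is why the comment defers owner-aware filtering to a follow-up task that adds the full-object call.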
[jira] [Work logged] (HIVE-26247) Filter out results 'show connectors' on HMS server-side
[ https://issues.apache.org/jira/browse/HIVE-26247?focusedWorklogId=810247&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810247 ]

ASF GitHub Bot logged work on HIVE-26247:
Author: ASF GitHub Bot
Created on: 20/Sep/22 04:43
Worklog Time Spent: 10m
Work Description: zhangbutao commented on code in PR #3545:
URL: https://github.com/apache/hive/pull/3545#discussion_r974879486
## ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/metastore/filtercontext/DataConnectorFilterContext.java (same diff as quoted in worklog 810249 above), on the line:
+LOG.debug("<== DataConnectorFilterContext.getOutputHObjs(): ret=" + ret);
Review Comment: done. Thx

Issue Time Tracking
---
Worklog Id: (was: 810247)
Time Spent: 1h 10m (was: 1h)

> Filter out results 'show connectors' on HMS server-side
>
>                 Key: HIVE-26247
>                 URL: https://issues.apache.org/jira/browse/HIVE-26247
>             Project: Hive
>          Issue Type: Sub-task
>    Affects Versions: 4.0.0-alpha-1, 4.0.0-alpha-2
>            Reporter: zhangbutao
>            Assignee: zhangbutao
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-26247) Filter out results 'show connectors' on HMS server-side
[ https://issues.apache.org/jira/browse/HIVE-26247?focusedWorklogId=810248&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810248 ]

ASF GitHub Bot logged work on HIVE-26247:
Author: ASF GitHub Bot
Created on: 20/Sep/22 04:43
Worklog Time Spent: 10m
Work Description: zhangbutao commented on code in PR #3545:
URL: https://github.com/apache/hive/pull/3545#discussion_r974879578
## ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/metastore/filtercontext/DataConnectorFilterContext.java (same diff as quoted in worklog 810249 above), on the line:
+LOG.debug("==> DataConnectorFilterContext.getOutputHObjs()");
Review Comment: done. thx

Issue Time Tracking
---
Worklog Id: (was: 810248)
Time Spent: 1h 20m (was: 1h 10m)

> Filter out results 'show connectors' on HMS server-side
>
>                 Key: HIVE-26247
>                 URL: https://issues.apache.org/jira/browse/HIVE-26247
>             Project: Hive
>          Issue Type: Sub-task
>    Affects Versions: 4.0.0-alpha-1, 4.0.0-alpha-2
>            Reporter: zhangbutao
>            Assignee: zhangbutao
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 20m
>  Remaining Estimate: 0h

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-26247) Filter out results 'show connectors' on HMS server-side
[ https://issues.apache.org/jira/browse/HIVE-26247?focusedWorklogId=810246&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810246 ]

ASF GitHub Bot logged work on HIVE-26247:
Author: ASF GitHub Bot
Created on: 20/Sep/22 04:41
Worklog Time Spent: 10m
Work Description: zhangbutao commented on code in PR #3545:
URL: https://github.com/apache/hive/pull/3545#discussion_r974878654
(Same review comment on the DataConnectorFilterContext.java diff as logged in worklog 810249 above, citing RawStore.java#L609-L610 for the table-objects analogue.)

Issue Time Tracking
---
Worklog Id: (was: 810246)
Time Spent: 1h (was: 50m)

> Filter out results 'show connectors' on HMS server-side
>
>                 Key: HIVE-26247
>                 URL: https://issues.apache.org/jira/browse/HIVE-26247
>             Project: Hive
>          Issue Type: Sub-task
>    Affects Versions: 4.0.0-alpha-1, 4.0.0-alpha-2
>            Reporter: zhangbutao
>            Assignee: zhangbutao
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h
>  Remaining Estimate: 0h

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-26537) Deprecate older APIs in the HMS
[ https://issues.apache.org/jira/browse/HIVE-26537?focusedWorklogId=810245&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810245 ]

ASF GitHub Bot logged work on HIVE-26537:
Author: ASF GitHub Bot
Created on: 20/Sep/22 04:16
Worklog Time Spent: 10m
Work Description: sonarcloud[bot] commented on PR #3599:
URL: https://github.com/apache/hive/pull/3599#issuecomment-1251814646
Kudos, SonarCloud Quality Gate passed! 3 Bugs, 0 Vulnerabilities, 0 Security Hotspots, 95 Code Smells; no coverage or duplication information.

Issue Time Tracking
---
Worklog Id: (was: 810245)
Time Spent: 1h 50m (was: 1h 40m)

> Deprecate older APIs in the HMS
>
>                 Key: HIVE-26537
>                 URL: https://issues.apache.org/jira/browse/HIVE-26537
>             Project: Hive
>          Issue Type: Improvement
>    Affects Versions: 4.0.0-alpha-1, 4.0.0-alpha-2
>            Reporter: Sai Hemanth Gantasala
>            Assignee: Sai Hemanth Gantasala
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> This Jira is to track the clean-up work (deprecating older APIs and pointing
> the HMS client to the newer APIs) in the Hive metastore server.
> More details will be added here soon.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-13353) SHOW COMPACTIONS should support filtering options
[ https://issues.apache.org/jira/browse/HIVE-13353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HIVE-13353:
Labels: pull-request-available (was: )

> SHOW COMPACTIONS should support filtering options
>
>                 Key: HIVE-13353
>                 URL: https://issues.apache.org/jira/browse/HIVE-13353
>             Project: Hive
>          Issue Type: Improvement
>          Components: Transactions
>    Affects Versions: 1.3.0, 2.0.0
>            Reporter: Eugene Koifman
>            Assignee: KIRTI RUGE
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-13353.01.patch
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> Since we now have historical information in SHOW COMPACTIONS, the output can
> easily become unwieldy (e.g. 1000 partitions with 3 lines of history each);
> this is a significant usability issue.
> We need to add the ability to filter by db/table/partition.
> It would perhaps also be useful to filter by status.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-13353) SHOW COMPACTIONS should support filtering options
[ https://issues.apache.org/jira/browse/HIVE-13353?focusedWorklogId=810241&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810241 ] ASF GitHub Bot logged work on HIVE-13353: - Author: ASF GitHub Bot Created on: 20/Sep/22 03:45 Start Date: 20/Sep/22 03:45 Worklog Time Spent: 10m Work Description: rkirtir opened a new pull request, #3608: URL: https://github.com/apache/hive/pull/3608 ### What changes were proposed in this pull request? https://issues.apache.org/jira/browse/HIVE-13353 Since we now have historical information in SHOW COMPACTIONS, the output can easily become unwieldy (e.g. 1000 partitions with 3 lines of history each); this is a significant usability issue. We need to add the ability to filter by db/table/partition, and perhaps also by status. ### Why are the changes needed? This gives users the flexibility to filter compactions on db, table, partition, type, state, etc. ### Does this PR introduce _any_ user-facing change? No. ### How was this patch tested? JUnit. Issue Time Tracking --- Worklog Id: (was: 810241) Remaining Estimate: 0h Time Spent: 10m > SHOW COMPACTIONS should support filtering options > - > > Key: HIVE-13353 > URL: https://issues.apache.org/jira/browse/HIVE-13353 > Project: Hive > Issue Type: Improvement > Components: Transactions >Affects Versions: 1.3.0, 2.0.0 >Reporter: Eugene Koifman >Assignee: KIRTI RUGE >Priority: Major > Attachments: HIVE-13353.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Since we now have historical information in SHOW COMPACTIONS, the output can > easily become unwieldy (e.g. 1000 partitions with 3 lines of history each); > this is a significant usability issue. > We need to add the ability to filter by db/table/partition. > It would perhaps also be useful to filter by status. -- This message was sent by Atlassian Jira (v8.20.10#820010)
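The requested filtering amounts to applying optional predicates over the compaction-history records. A minimal, self-contained sketch of the intended semantics (plain Python with illustrative field names, not Hive's actual compaction record schema):

```python
def filter_compactions(records, db=None, table=None, partition=None,
                       status=None, ctype=None):
    """Keep only compaction-history rows matching every supplied filter.

    None means "don't filter on this field", mirroring what a filtered
    SHOW COMPACTIONS would return. Field names here are illustrative.
    """
    wanted = {"db": db, "table": table, "partition": partition,
              "status": status, "type": ctype}
    return [r for r in records
            if all(v is None or r.get(k) == v for k, v in wanted.items())]
```

A real implementation would push the same predicates into the metastore query rather than filtering the full history client-side.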
[jira] [Commented] (HIVE-25790) Make managed table copies handle updates (FileUtils)
[ https://issues.apache.org/jira/browse/HIVE-25790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17606858#comment-17606858 ] Teddy Choi commented on HIVE-25790: --- The Jenkins tests passed. > Make managed table copies handle updates (FileUtils) > > > Key: HIVE-25790 > URL: https://issues.apache.org/jira/browse/HIVE-25790 > Project: Hive > Issue Type: Improvement >Reporter: Haymant Mangla >Assignee: Teddy Choi >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-26529) Fix VectorizedSupport support for DECIMAL_64 in HiveIcebergInputFormat
[ https://issues.apache.org/jira/browse/HIVE-26529?focusedWorklogId=810237&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810237 ] ASF GitHub Bot logged work on HIVE-26529: - Author: ASF GitHub Bot Created on: 20/Sep/22 03:25 Start Date: 20/Sep/22 03:25 Worklog Time Spent: 10m Work Description: sonarcloud[bot] commented on PR #3607: URL: https://github.com/apache/hive/pull/3607#issuecomment-1251789560 Kudos, SonarCloud Quality Gate passed! [2 Bugs](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3607&resolved=false&types=BUG), [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3607&resolved=false&types=VULNERABILITY), [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3607&resolved=false&types=SECURITY_HOTSPOT), [9 Code Smells](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3607&resolved=false&types=CODE_SMELL), no Coverage information, no Duplication information. Issue Time Tracking --- Worklog Id: (was: 810237) Time Spent: 0.5h (was: 20m) > Fix VectorizedSupport support for DECIMAL_64 in HiveIcebergInputFormat > --- > > Key: HIVE-26529 > URL: https://issues.apache.org/jira/browse/HIVE-26529 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Reporter: Rajesh Balamohan 
>Assignee: Ayush Saxena >Priority: Major > Labels: perfomance, pull-request-available > Attachments: iceberg_table_with_HiveDecimal.png, > regular_tables_with_decimal64.png > > Time Spent: 0.5h > Remaining Estimate: 0h > > To support vectorized reads in Parquet, DECIMAL_64 support in ORC has been > disabled in HiveIcebergInputFormat. This causes regressions in queries. > [https://github.com/apache/hive/blob/master/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergInputFormat.java#L182] > It would be good to restore DECIMAL_64 support in the Iceberg input format. > -- This message was sent by Atlassian Jira (v8.20.
[jira] [Work logged] (HIVE-26509) Introduce dynamic leader election in HMS
[ https://issues.apache.org/jira/browse/HIVE-26509?focusedWorklogId=810236&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810236 ] ASF GitHub Bot logged work on HIVE-26509: - Author: ASF GitHub Bot Created on: 20/Sep/22 02:32 Start Date: 20/Sep/22 02:32 Worklog Time Spent: 10m Work Description: sonarcloud[bot] commented on PR #3567: URL: https://github.com/apache/hive/pull/3567#issuecomment-1251762853 Kudos, SonarCloud Quality Gate passed! [12 Bugs](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3567&resolved=false&types=BUG), [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3567&resolved=false&types=VULNERABILITY), [1 Security Hotspot](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3567&resolved=false&types=SECURITY_HOTSPOT), [72 Code Smells](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3567&resolved=false&types=CODE_SMELL), no Coverage information, no Duplication information. Issue Time Tracking --- Worklog Id: (was: 810236) Time Spent: 1h 20m (was: 1h 10m) > Introduce dynamic leader election in HMS > > > Key: HIVE-26509 > URL: https://issues.apache.org/jira/browse/HIVE-26509 > Project: Hive > Issue Type: New Feature > Components: Standalone Metastore >Reporter: Zhihua Deng >Priority: Major > 
Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > Since HIVE-21841 we have a leader HMS selected by configuring > metastore.housekeeping.leader.hostname on startup. This approach saves us > from running duplicated HMS housekeeping tasks cluster-wide. > In this jira, we introduce a dynamic leader election: a hive lock is used > to implement the election. Once an HMS owns the lock, it becomes > the leader, carries out the housekeeping tasks, and sends heartbeats to renew > the lock before it times out. If the leader fails to reclaim the lock, it stops > any tasks it has already started, and the election event is audited. We can > achieve a more dyna
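The lease-based election described above can be sketched as a toy model (illustrative names such as `LeaseLock` and `housekeeping_step`, not the actual HMS code): a node becomes leader by acquiring a lock with a timeout, stays leader by renewing it via heartbeats, and any node may take over once the lease expires.

```python
class LeaseLock:
    """A toy mutual-exclusion lease, standing in for the Hive lock in the proposal."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.holder = None
        self.expires_at = 0.0

    def try_acquire(self, who, now):
        # The lock is free if it is unheld or its lease has expired.
        if self.holder is None or now >= self.expires_at:
            self.holder, self.expires_at = who, now + self.timeout
        return self.holder == who

    def renew(self, who, now):
        # Heartbeat: only the current holder may extend the lease before timeout.
        if self.holder == who and now < self.expires_at:
            self.expires_at = now + self.timeout
            return True
        return False


def housekeeping_step(lock, hms_id, now):
    """One heartbeat tick: become/stay leader, or step down on losing the lock."""
    if lock.renew(hms_id, now) or lock.try_acquire(hms_id, now):
        return True   # leader: run the housekeeping tasks
    return False      # follower: ensure any started tasks are stopped
```

If the leader crashes or its heartbeats stall, its lease lapses and another HMS acquires the lock on its next tick, which is what makes the election dynamic rather than fixed by hostname.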
[jira] [Work logged] (HIVE-26221) Add histogram-based column statistics
[ https://issues.apache.org/jira/browse/HIVE-26221?focusedWorklogId=810209&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810209 ] ASF GitHub Bot logged work on HIVE-26221: - Author: ASF GitHub Bot Created on: 20/Sep/22 00:27 Start Date: 20/Sep/22 00:27 Worklog Time Spent: 10m Work Description: github-actions[bot] commented on PR #3137: URL: https://github.com/apache/hive/pull/3137#issuecomment-1251703711 This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Feel free to reach out on the d...@hive.apache.org list if the patch is in need of reviews. Issue Time Tracking --- Worklog Id: (was: 810209) Remaining Estimate: 0h Time Spent: 10m > Add histogram-based column statistics > - > > Key: HIVE-26221 > URL: https://issues.apache.org/jira/browse/HIVE-26221 > Project: Hive > Issue Type: Improvement > Components: CBO, Metastore, Statistics >Affects Versions: 4.0.0-alpha-2 >Reporter: Alessandro Solimando >Assignee: Alessandro Solimando >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > Hive does not support histogram statistics, which are particularly useful for > skewed data (which is very common in practice) and range predicates. > Hive's current selectivity estimation for range predicates is based on a > hard-coded value of 1/3 (see > [FilterSelectivityEstimator.java#L138-L144|https://github.com/apache/hive/blob/56c336268ea8c281d23c22d89271af37cb7e2572/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/FilterSelectivityEstimator.java#L138-L144]). > The current proposal aims at integrating histograms as an additional column > statistic, stored in the Hive metastore at the table (or partition) level. 
> The main requirements for histogram integration are the following: > * efficiency: the approach must scale and support billions of rows > * merge-ability: partition-level histograms have to be merged to form > table-level histograms > * explicit and configurable trade-off between memory footprint and accuracy > Hive already integrates [KLL data > sketches|https://datasketches.apache.org/docs/KLL/KLLSketch.html] UDAF. > Datasketches are small, stateful programs that process massive data-streams > and can provide approximate answers, with mathematical guarantees, to > computationally difficult queries orders-of-magnitude faster than > traditional, exact methods. > We propose to use KLL, and more specifically the cumulative distribution > function (CDF), as the underlying data structure for our histogram statistics. > The current proposal targets numeric data types (float, integer and numeric > families) and temporal data types (date and timestamp). -- This message was sent by Atlassian Jira (v8.20.10#820010)
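The core idea — replacing the hard-coded 1/3 with a CDF lookup — can be sketched with a minimal, self-contained example (plain Python; the class below is an illustrative stand-in for a KLL sketch, which would store the distribution approximately and with bounded memory):

```python
import bisect


class RangeSelectivityEstimator:
    """Tiny stand-in for the proposed histogram statistic: an empirical CDF
    built from column values (a KLL sketch would hold this approximately)."""

    def __init__(self, values):
        self.sorted_vals = sorted(values)
        self.n = len(self.sorted_vals)

    def cdf(self, x):
        # Fraction of values <= x.
        return bisect.bisect_right(self.sorted_vals, x) / self.n

    def selectivity(self, lo, hi):
        # Estimated fraction of rows with lo < value <= hi, replacing the
        # hard-coded 1/3 used today for range predicates.
        return self.cdf(hi) - self.cdf(lo)

    def merge(self, other):
        # Partition-level statistics merge into a table-level one.
        return RangeSelectivityEstimator(self.sorted_vals + other.sorted_vals)
```

With real KLL sketches, `cdf` would be answered approximately with mathematical error guarantees, and `merge` would combine the compact sketches rather than the raw values — which is what makes partition-to-table roll-ups cheap at billions of rows.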
[jira] [Updated] (HIVE-26221) Add histogram-based column statistics
[ https://issues.apache.org/jira/browse/HIVE-26221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-26221: -- Labels: pull-request-available (was: ) > Add histogram-based column statistics > - > > Key: HIVE-26221 > URL: https://issues.apache.org/jira/browse/HIVE-26221 > Project: Hive > Issue Type: Improvement > Components: CBO, Metastore, Statistics >Affects Versions: 4.0.0-alpha-2 >Reporter: Alessandro Solimando >Assignee: Alessandro Solimando >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Hive does not support histogram statistics, which are particularly useful for > skewed data (which is very common in practice) and range predicates. > Hive's current selectivity estimation for range predicates is based on a > hard-coded value of 1/3 (see > [FilterSelectivityEstimator.java#L138-L144|https://github.com/apache/hive/blob/56c336268ea8c281d23c22d89271af37cb7e2572/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/FilterSelectivityEstimator.java#L138-L144]). > The current proposal aims at integrating histograms as an additional column > statistic, stored in the Hive metastore at the table (or partition) level. > The main requirements for histogram integration are the following: > * efficiency: the approach must scale and support billions of rows > * merge-ability: partition-level histograms have to be merged to form > table-level histograms > * explicit and configurable trade-off between memory footprint and accuracy > Hive already integrates [KLL data > sketches|https://datasketches.apache.org/docs/KLL/KLLSketch.html] UDAF. > Datasketches are small, stateful programs that process massive data-streams > and can provide approximate answers, with mathematical guarantees, to > computationally difficult queries orders-of-magnitude faster than > traditional, exact methods. 
> We propose to use KLL, and more specifically the cumulative distribution > function (CDF), as the underlying data structure for our histogram statistics. > The current proposal targets numeric data types (float, integer and numeric > families) and temporal data types (date and timestamp). -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-25621) Alter table partition compact/concatenate commands should send HivePrivilegeObjects for Authz
[ https://issues.apache.org/jira/browse/HIVE-25621?focusedWorklogId=810206&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810206 ] ASF GitHub Bot logged work on HIVE-25621: - Author: ASF GitHub Bot Created on: 19/Sep/22 23:48 Start Date: 19/Sep/22 23:48 Worklog Time Spent: 10m Work Description: sonarcloud[bot] commented on PR #2731: URL: https://github.com/apache/hive/pull/2731#issuecomment-1251684763 Kudos, SonarCloud Quality Gate passed! [2 Bugs](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=2731&resolved=false&types=BUG), [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=2731&resolved=false&types=VULNERABILITY), [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=2731&resolved=false&types=SECURITY_HOTSPOT), [14 Code Smells](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=2731&resolved=false&types=CODE_SMELL), no Coverage information, no Duplication information. Issue Time Tracking --- Worklog Id: (was: 810206) Time Spent: 4h 40m (was: 4.5h) > Alter table partition compact/concatenate commands should send > HivePrivilegeObjects for Authz > - > > Key: HIVE-25621 > URL: https://issues.apache.org/jira/browse/HIVE-25621 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: 
Sai Hemanth Gantasala >Assignee: Sai Hemanth Gantasala >Priority: Major > Labels: pull-request-available > Time Spent: 4h 40m > Remaining Estimate: 0h > > # Run the following queries: > Create table temp(c0 int) partitioned by (c1 int); > Insert into temp values(1,1); > ALTER TABLE temp PARTITION (c1=1) COMPACT 'minor'; > ALTER TABLE temp PARTITION (c1=1) CONCATENATE; > Insert into temp values(1,1); > # The above compact/concatenate commands currently do not send any hive > privilege objects for authorization. Hive needs to send these objects to > prevent malicious users from performing unauthorized operations. -- This message was sent by Atla
[jira] [Work logged] (HIVE-26529) Fix VectorizedSupport support for DECIMAL_64 in HiveIcebergInputFormat
[ https://issues.apache.org/jira/browse/HIVE-26529?focusedWorklogId=810197&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810197 ] ASF GitHub Bot logged work on HIVE-26529: - Author: ASF GitHub Bot Created on: 19/Sep/22 23:08 Start Date: 19/Sep/22 23:08 Worklog Time Spent: 10m Work Description: rbalamohan commented on code in PR #3607: URL: https://github.com/apache/hive/pull/3607#discussion_r974755315
## iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergInputFormat.java:
## @@ -176,10 +179,27 @@ public boolean shouldSkipCombine(Path path, Configuration conf) { }
   @Override
-  public VectorizedSupport.Support[] getSupportedFeatures() {
+  public VectorizedSupport.Support[] getSupportedFeatures(HiveConf hiveConf, Set tableNames) {
     // disabling VectorizedSupport.Support.DECIMAL_64 as Parquet doesn't support it, and we have no way of telling
     // beforehand what kind of file format we're going to hit later
-    return new VectorizedSupport.Support[]{ };
+    boolean onlyOrcFiles = true;
+    try {
+      Hive hiveDb = Hive.get(hiveConf, false);
Review Comment: Is it possible to make this decision in Vectorizer::getVectorizedInputFormatSupports? Basically trying to avoid mixing HMS-related information into InputFormats, as they will be used in AppMasters and tasks. I agree that getSupportedFeatures will be called mainly during compilation, but it would be good to keep HMS interactions outside of input formats. 
Issue Time Tracking --- Worklog Id: (was: 810197) Time Spent: 20m (was: 10m) > Fix VectorizedSupport support for DECIMAL_64 in HiveIcebergInputFormat > --- > > Key: HIVE-26529 > URL: https://issues.apache.org/jira/browse/HIVE-26529 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Reporter: Rajesh Balamohan >Assignee: Ayush Saxena >Priority: Major > Labels: perfomance, pull-request-available > Attachments: iceberg_table_with_HiveDecimal.png, > regular_tables_with_decimal64.png > > Time Spent: 20m > Remaining Estimate: 0h > > To support vectorized reads in Parquet, DECIMAL_64 support in ORC has been > disabled in HiveIcebergInputFormat. This causes regressions in queries. > [https://github.com/apache/hive/blob/master/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergInputFormat.java#L182] > It would be good to restore DECIMAL_64 support in the Iceberg input format. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-26529) Fix VectorizedSupport support for DECIMAL_64 in HiveIcebergInputFormat
[ https://issues.apache.org/jira/browse/HIVE-26529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-26529: -- Labels: perfomance pull-request-available (was: perfomance) > Fix VectorizedSupport support for DECIMAL_64 in HiveIcebergInputFormat > --- > > Key: HIVE-26529 > URL: https://issues.apache.org/jira/browse/HIVE-26529 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Reporter: Rajesh Balamohan >Assignee: Ayush Saxena >Priority: Major > Labels: perfomance, pull-request-available > Attachments: iceberg_table_with_HiveDecimal.png, > regular_tables_with_decimal64.png > > Time Spent: 10m > Remaining Estimate: 0h > > To support vectorized reads in Parquet, DECIMAL_64 support in ORC has been > disabled in HiveIcebergInputFormat. This causes regressions in queries. > [https://github.com/apache/hive/blob/master/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergInputFormat.java#L182] > It would be good to restore DECIMAL_64 support in the Iceberg input format. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-26529) Fix VectorizedSupport support for DECIMAL_64 in HiveIcebergInputFormat
[ https://issues.apache.org/jira/browse/HIVE-26529?focusedWorklogId=810187&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810187 ] ASF GitHub Bot logged work on HIVE-26529: - Author: ASF GitHub Bot Created on: 19/Sep/22 21:51 Start Date: 19/Sep/22 21:51 Worklog Time Spent: 10m Work Description: ayushtkn opened a new pull request, #3607: URL: https://github.com/apache/hive/pull/3607 ### What changes were proposed in this pull request? Temporary hack to allow non-mixed ORC tables to support vectorization. ### How was this patch tested? Added Explain vectorization at relevant places. Issue Time Tracking --- Worklog Id: (was: 810187) Remaining Estimate: 0h Time Spent: 10m > Fix VectorizedSupport support for DECIMAL_64 in HiveIcebergInputFormat > --- > > Key: HIVE-26529 > URL: https://issues.apache.org/jira/browse/HIVE-26529 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Reporter: Rajesh Balamohan >Assignee: Ayush Saxena >Priority: Major > Labels: perfomance > Attachments: iceberg_table_with_HiveDecimal.png, > regular_tables_with_decimal64.png > > Time Spent: 10m > Remaining Estimate: 0h > > To support vectorized reads in Parquet, DECIMAL_64 support in ORC has been > disabled in HiveIcebergInputFormat. This causes regressions in queries. > [https://github.com/apache/hive/blob/master/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergInputFormat.java#L182] > It would be good to restore DECIMAL_64 support in the Iceberg input format. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HIVE-26529) Fix VectorizedSupport support for DECIMAL_64 in HiveIcebergInputFormat
[ https://issues.apache.org/jira/browse/HIVE-26529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena reassigned HIVE-26529: --- Assignee: Ayush Saxena > Fix VectorizedSupport support for DECIMAL_64 in HiveIcebergInputFormat > --- > > Key: HIVE-26529 > URL: https://issues.apache.org/jira/browse/HIVE-26529 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Reporter: Rajesh Balamohan >Assignee: Ayush Saxena >Priority: Major > Labels: perfomance > Attachments: iceberg_table_with_HiveDecimal.png, > regular_tables_with_decimal64.png > > > To support vectorized reads in Parquet, DECIMAL_64 support in ORC has been > disabled in HiveIcebergInputFormat. This causes regressions in queries. > [https://github.com/apache/hive/blob/master/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergInputFormat.java#L182] > It would be good to restore DECIMAL_64 support in the Iceberg input format. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-26496) FetchOperator scans delete_delta folders multiple times causing slowness
[ https://issues.apache.org/jira/browse/HIVE-26496?focusedWorklogId=810167&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810167 ] ASF GitHub Bot logged work on HIVE-26496: - Author: ASF GitHub Bot Created on: 19/Sep/22 19:49 Start Date: 19/Sep/22 19:49 Worklog Time Spent: 10m Work Description: difin commented on code in PR #3559: URL: https://github.com/apache/hive/pull/3559#discussion_r974613822
## ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcSplit.java:
## @@ -104,7 +103,15 @@ public OrcSplit(Path path, Object fileId, long offset, long length, String[] hos
     this.isOriginal = isOriginal;
     this.hasBase = hasBase;
     this.rootDir = rootDir;
-    this.deltas.addAll(filterDeltasByBucketId(deltas, AcidUtils.parseBucketId(path)));
+    int bucketId = AcidUtils.parseBucketId(path);
Review Comment: Hi @deniskuzZ @zabetak Fixed in PR https://github.com/apache/hive/pull/3606. Could you please review the new PR and merge it if it looks good? Issue Time Tracking --- Worklog Id: (was: 810167) Time Spent: 9h 10m (was: 9h) > FetchOperator scans delete_delta folders multiple times causing slowness > > > Key: HIVE-26496 > URL: https://issues.apache.org/jira/browse/HIVE-26496 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Rajesh Balamohan >Assignee: Dmitriy Fingerman >Priority: Major > Labels: pull-request-available > Time Spent: 9h 10m > Remaining Estimate: 0h > > FetchOperator scans far more files/directories than needed. > For example, here is the layout of a table which had a set of updates and deletes. > There is a set of "delta" and "delete_delta" folders which are created. 
> {noformat} > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/base_001 > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_002_002_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_003_003_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_004_004_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_005_005_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_006_006_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_007_007_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_008_008_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_009_009_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_010_010_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_011_011_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_012_012_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_013_013_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_014_014_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_015_015_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_016_016_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_017_017_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_018_018_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_019_019_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_020_020_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_021_021_ > 
s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_022_022_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_002_002_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_003_003_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_004_004_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_005_005_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_006_006_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_007_007_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_00
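The hunk quoted in the review above extracts the bucket id from the split path before filtering deltas. A standalone sketch of that idea — with hypothetical names mirroring the hunk, not Hive's actual OrcSplit/AcidUtils API — looks like:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of filtering delta metadata by the bucket id parsed
// from a split path, so each split only tracks the delete deltas relevant
// to its own bucket. Names mirror the review hunk but are NOT Hive's API.
public class DeltaFilterSketch {
    private static final Pattern BUCKET = Pattern.compile("bucket_(\\d+)");

    // Parse the numeric bucket id out of a path like .../delta_x_y/bucket_00002.
    static int parseBucketId(String path) {
        Matcher m = BUCKET.matcher(path);
        return m.find() ? Integer.parseInt(m.group(1)) : -1;
    }

    static class Delta {
        final String dir;
        final List<Integer> buckets;
        Delta(String dir, List<Integer> buckets) { this.dir = dir; this.buckets = buckets; }
    }

    // Keep only the deltas that contain files for the given bucket.
    static List<Delta> filterDeltasByBucketId(List<Delta> deltas, int bucketId) {
        List<Delta> kept = new ArrayList<>();
        for (Delta d : deltas) {
            if (d.buckets.contains(bucketId)) {
                kept.add(d);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        int bucketId = parseBucketId("s3a://bucket/warehouse/t/delta_002_002/bucket_00002");
        List<Delta> deltas = new ArrayList<>();
        deltas.add(new Delta("delete_delta_002_002", List.of(0, 2)));
        deltas.add(new Delta("delete_delta_003_003", List.of(1)));
        System.out.println(bucketId);                                        // 2
        System.out.println(filterDeltasByBucketId(deltas, bucketId).size()); // 1
    }
}
```

Pulling the parsed id into a local variable, as the hunk does, also avoids re-parsing the path once per delta.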
[jira] [Work logged] (HIVE-26496) FetchOperator scans delete_delta folders multiple times causing slowness
[ https://issues.apache.org/jira/browse/HIVE-26496?focusedWorklogId=810165&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810165 ] ASF GitHub Bot logged work on HIVE-26496: - Author: ASF GitHub Bot Created on: 19/Sep/22 19:48 Start Date: 19/Sep/22 19:48 Worklog Time Spent: 10m Work Description: difin commented on code in PR #3559: URL: https://github.com/apache/hive/pull/3559#discussion_r974613822 ## ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcSplit.java: ## @@ -104,7 +103,15 @@ public OrcSplit(Path path, Object fileId, long offset, long length, String[] hos this.isOriginal = isOriginal; this.hasBase = hasBase; this.rootDir = rootDir; -this.deltas.addAll(filterDeltasByBucketId(deltas, AcidUtils.parseBucketId(path))); +int bucketId = AcidUtils.parseBucketId(path); Review Comment: Hi @deniskuzZ @zabetak Fixed in PR https://github.com/apache/hive/pull/3606 Issue Time Tracking --- Worklog Id: (was: 810165) Time Spent: 9h (was: 8h 50m) > FetchOperator scans delete_delta folders multiple times causing slowness > > > Key: HIVE-26496 > URL: https://issues.apache.org/jira/browse/HIVE-26496 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Rajesh Balamohan >Assignee: Dmitriy Fingerman >Priority: Major > Labels: pull-request-available > Time Spent: 9h > Remaining Estimate: 0h > > FetchOperator scans far more files/directories than needed. > For example, here is the layout of a table that had a set of updates and deletes. > Sets of "delta" and "delete_delta" folders have been created. 
> {noformat} > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/base_001 > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_002_002_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_003_003_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_004_004_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_005_005_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_006_006_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_007_007_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_008_008_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_009_009_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_010_010_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_011_011_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_012_012_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_013_013_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_014_014_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_015_015_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_016_016_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_017_017_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_018_018_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_019_019_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_020_020_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_021_021_ > 
s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_022_022_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_002_002_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_003_003_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_004_004_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_005_005_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_006_006_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_007_007_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_008_008_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test
[jira] [Work logged] (HIVE-26247) Filter out results 'show connectors' on HMS server-side
[ https://issues.apache.org/jira/browse/HIVE-26247?focusedWorklogId=810161&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810161 ] ASF GitHub Bot logged work on HIVE-26247: - Author: ASF GitHub Bot Created on: 19/Sep/22 19:14 Start Date: 19/Sep/22 19:14 Worklog Time Spent: 10m Work Description: saihemanth-cloudera commented on code in PR #3545: URL: https://github.com/apache/hive/pull/3545#discussion_r974586696 ## ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/metastore/filtercontext/DataConnectorFilterContext.java: ## @@ -0,0 +1,76 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.apache.hadoop.hive.ql.security.authorization.plugin.metastore.filtercontext; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; + +import org.apache.hadoop.hive.ql.security.authorization.plugin.HiveOperationType; +import org.apache.hadoop.hive.ql.security.authorization.plugin.HivePrivilegeObject; +import org.apache.hadoop.hive.ql.security.authorization.plugin.metastore.HiveMetaStoreAuthorizableEvent; +import org.apache.hadoop.hive.ql.security.authorization.plugin.metastore.HiveMetaStoreAuthzInfo; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +public class DataConnectorFilterContext extends HiveMetaStoreAuthorizableEvent { + +private static final Logger LOG = LoggerFactory.getLogger(DataConnectorFilterContext.class); + +List connectors = null; + +public DataConnectorFilterContext(List connectors) { +super(null); +this.connectors = connectors; +getAuthzContext(); +} + +@Override +public HiveMetaStoreAuthzInfo getAuthzContext() { +HiveMetaStoreAuthzInfo ret = +new HiveMetaStoreAuthzInfo(preEventContext, HiveOperationType.QUERY, getInputHObjs(), getOutputHObjs(), null); +return ret; +} + +private List getInputHObjs() { +LOG.debug("==> DataConnectorFilterContext.getOutputHObjs()"); + +List ret = new ArrayList<>(); +for (String connector : connectors) { +HivePrivilegeObject.HivePrivilegeObjectType type = HivePrivilegeObject.HivePrivilegeObjectType.DATACONNECTOR; +HivePrivilegeObject.HivePrivObjectActionType objectActionType = +HivePrivilegeObject.HivePrivObjectActionType.OTHER; +HivePrivilegeObject hivePrivilegeObject = +new HivePrivilegeObject(type, null, connector, null, null, objectActionType, null, null); Review Comment: I think it would be nice to pass the owner's name and owner-type information in the privilege object. That would be useful for creating owner-related policies in ranger/sentry etc services. 
[jira] [Assigned] (HIVE-23744) Reduce query startup latency
[ https://issues.apache.org/jira/browse/HIVE-23744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Fingerman reassigned HIVE-23744: Assignee: Dmitriy Fingerman (was: Mustafa İman) > Reduce query startup latency > > > Key: HIVE-23744 > URL: https://issues.apache.org/jira/browse/HIVE-23744 > Project: Hive > Issue Type: Task > Components: llap >Affects Versions: 4.0.0 >Reporter: Mustafa İman >Assignee: Dmitriy Fingerman >Priority: Major > Attachments: am_schedule_and_transmit.png, task_start.png > > > When I run queries with a large number of tasks for a single vertex, I see a > significant delay before all tasks start execution in LLAP daemons. > Although the LLAP daemons have the free capacity to run the tasks, it takes > significant time to schedule all the tasks in the AM and actually transmit them > to the executors. > "am_schedule_and_transmit" shows the scheduling of tasks for TPC-DS query 55, > restricted to the tasks scheduled for one of the 10 LLAP daemons. The scheduler > works in a single thread, scheduling tasks one by one, so a delay in scheduling > one task delays all subsequent tasks. > !am_schedule_and_transmit.png|width=831,height=573! > > Another issue is that it takes a long time to fill all the execution slots in > the LLAP daemons even though they are all initially empty. This is caused by > LlapTaskCommunicator using a fixed number of threads (10 by default) to send > the tasks to the daemons. This communication is also synchronized, so these > threads block on it while otherwise staying idle. "task_start.png" shows running > tasks on an LLAP daemon that has 12 execution slots. By the time the 12th task > starts running, more than 100 ms have already passed, and that slot stays idle all this > time. > !task_start.png|width=1166,height=635! -- This message was sent by Atlassian Jira (v8.20.10#820010)
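The fixed sender pool plus synchronized transmission described in the issue can be modeled with a toy sketch. The names are hypothetical and LlapTaskCommunicator's real code differs; the point is only that every sender thread funnels through one lock, so hand-off is effectively serial no matter how many executor slots are free:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Toy model of the bottleneck: 10 sender threads (the default pool size
// mentioned in the issue) all synchronize on one lock to transmit, so tasks
// reach the daemons one at a time even when many executor slots are idle.
public class SenderBottleneckSketch {
    private static final Object transmitLock = new Object();
    static int transmitted = 0;

    static void transmit() {
        synchronized (transmitLock) { // the serialization point
            transmitted++;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService senders = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 100; i++) {
            senders.submit(SenderBottleneckSketch::transmit);
        }
        senders.shutdown();
        senders.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(transmitted);
    }
}
```

With a real (slow) network send inside the lock, throughput is bounded by one transmission at a time, which matches the idle-slot behavior shown in task_start.png.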
[jira] [Assigned] (HIVE-14514) OrcRecordUpdater should clone writerOptions when creating delete event writers
[ https://issues.apache.org/jira/browse/HIVE-14514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Fingerman reassigned HIVE-14514: Assignee: Dmitriy Fingerman > OrcRecordUpdater should clone writerOptions when creating delete event writers > -- > > Key: HIVE-14514 > URL: https://issues.apache.org/jira/browse/HIVE-14514 > Project: Hive > Issue Type: Improvement > Components: Transactions >Affects Versions: 2.2.0 >Reporter: Saket Saurabh >Assignee: Dmitriy Fingerman >Priority: Critical > > When split-update is enabled for ACID, OrcRecordUpdater creates two sets of > writers: one for the insert deltas and one for the delete deltas. The > deleteEventWriter is initialized with writerOptions similar to those of the normal > writer, except that it has a different callback handler. Due to the lack of a > copy constructor or clone() method in writerOptions, the same writerOptions > object is mutated to specify a different callback for the delete case. > Although this is harmless for now, it may become a source of confusion > and possible errors in the future. The ideal way to fix this would be to create a > clone() method for writerOptions; however, this requires that the parent class, > OrcFile.WriterOptions, implement Cloneable or > provide a copy constructor. -- This message was sent by Atlassian Jira (v8.20.10#820010)
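The hazard described in the ticket — mutating one shared options object to install a different callback — and the proposed clone()-based fix can be sketched with a stand-in class (not ORC's real OrcFile.WriterOptions):

```java
// Stand-in for the pattern in the ticket: reusing one options object and
// mutating its callback leaks the change into the other writer; cloning
// before mutating keeps the two writers' options independent.
public class WriterOptionsSketch {
    static class WriterOptions implements Cloneable {
        String callback;
        WriterOptions(String callback) { this.callback = callback; }
        @Override
        public WriterOptions clone() {
            try {
                return (WriterOptions) super.clone();
            } catch (CloneNotSupportedException e) {
                throw new AssertionError(e); // cannot happen: we implement Cloneable
            }
        }
    }

    public static void main(String[] args) {
        // Risky pattern: the same object is mutated for the delete writer.
        WriterOptions insertOpts = new WriterOptions("insertCallback");
        WriterOptions deleteOpts = insertOpts;       // no copy!
        deleteOpts.callback = "deleteCallback";
        System.out.println(insertOpts.callback);     // deleteCallback

        // Safer pattern: clone first, then mutate the copy.
        WriterOptions insertOpts2 = new WriterOptions("insertCallback");
        WriterOptions deleteOpts2 = insertOpts2.clone();
        deleteOpts2.callback = "deleteCallback";
        System.out.println(insertOpts2.callback);    // insertCallback
    }
}
```

A copy constructor (`new WriterOptions(other)`) achieves the same isolation without implementing Cloneable, which is the alternative the ticket mentions.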
[jira] [Assigned] (HIVE-24299) hive-ql guava versions and vulnerabilities
[ https://issues.apache.org/jira/browse/HIVE-24299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Fingerman reassigned HIVE-24299: Assignee: Dmitriy Fingerman > hive-ql guava versions and vulnerabilities > -- > > Key: HIVE-24299 > URL: https://issues.apache.org/jira/browse/HIVE-24299 > Project: Hive > Issue Type: Improvement > Components: hpl/sql >Affects Versions: 3.1.2 >Reporter: openlookeng >Assignee: Dmitriy Fingerman >Priority: Blocker > > hive-ql shades Google's Guava 19.0, which has the known vulnerability > CVE-2018-10237. Does the team have a plan to update it? -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-26537) Deprecate older APIs in the HMS
[ https://issues.apache.org/jira/browse/HIVE-26537?focusedWorklogId=810152&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810152 ] ASF GitHub Bot logged work on HIVE-26537: - Author: ASF GitHub Bot Created on: 19/Sep/22 18:39 Start Date: 19/Sep/22 18:39 Worklog Time Spent: 10m Work Description: sonarcloud[bot] commented on PR #3599: URL: https://github.com/apache/hive/pull/3599#issuecomment-1251403532 Kudos, SonarCloud Quality Gate passed! 3 Bugs, 0 Vulnerabilities, 0 Security Hotspots, 95 Code Smells; no coverage or duplication information. Full report: https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3599 Issue Time Tracking --- Worklog Id: (was: 810152) Time Spent: 1h 40m (was: 1.5h) > Deprecate older APIs in the HMS > --- > > Key: HIVE-26537 > URL: https://issues.apache.org/jira/browse/HIVE-26537 > Project: Hive > Issue Type: Improvement >Affects Versions: 4.0.0-alpha-1, 4.0.0-alpha-2 >Reporter: Sai Hemanth Gantasala 
>Assignee: Sai Hemanth Gantasala >Priority: Major > Labels: pull-request-available > Time Spent: 1h 40m > Remaining Estimate: 0h > > This Jira is to track the clean-up work in the Hive metastore server: deprecating the older APIs and pointing the HMS client to the newer ones. > More details will be added here soon. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-25848) Empty result for structs in point lookup optimization with vectorization on
[ https://issues.apache.org/jira/browse/HIVE-25848?focusedWorklogId=810149&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810149 ] ASF GitHub Bot logged work on HIVE-25848: - Author: ASF GitHub Bot Created on: 19/Sep/22 18:26 Start Date: 19/Sep/22 18:26 Worklog Time Spent: 10m Work Description: sonarcloud[bot] commented on PR #3592: URL: https://github.com/apache/hive/pull/3592#issuecomment-1251387767 Kudos, SonarCloud Quality Gate passed! 2 Bugs, 0 Vulnerabilities, 0 Security Hotspots, 12 Code Smells; no coverage or duplication information. Full report: https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3592 Issue Time Tracking --- Worklog Id: (was: 810149) Time Spent: 1h 40m (was: 1.5h) > Empty result for structs in point lookup optimization with vectorization on > --- > > Key: HIVE-25848 > URL: https://issues.apache.org/jira/browse/HIVE-25848 > Project: Hive > Issue Type: Bug >Reporter: Ádám Szita >Assignee: Hankó Gergely 
>Priority: Major > Labels: pull-request-available > Time Spent: 1h 40m > Remaining Estimate: 0h > > Repro steps: > {code:java} > set hive.fetch.task.conversion=none; > create table test (a string) partitioned by (y string, m string); > insert into test values ('aa', 2022, 1); > select * from test where (y=year(date_sub(current_date,4)) and > m=month(date_sub(current_date,4))) or (y=year(date_sub(current_date,10)) and > m=month(date_sub(current_date,10)) ); > --gives empty result{code} > Turning either of the features below off yields the correct result (1 row > expected): > {code:java} > set hive.optimize.point.lookup=false; > set hive.cbo.enable=false; > set hive.
[jira] [Work logged] (HIVE-25495) Upgrade to JLine3
[ https://issues.apache.org/jira/browse/HIVE-25495?focusedWorklogId=810139&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810139 ] ASF GitHub Bot logged work on HIVE-25495: - Author: ASF GitHub Bot Created on: 19/Sep/22 17:52 Start Date: 19/Sep/22 17:52 Worklog Time Spent: 10m Work Description: LA-Toth commented on PR #3069: URL: https://github.com/apache/hive/pull/3069#issuecomment-1251350826 with this change: diff --git a/beeline/src/test/org/apache/hive/beeline/cli/TestHiveCli.java b/beeline/src/test/org/apache/hive/beeline/cli/TestHiveCli.java index 5ea4d11b7a..f7be6875ee 100644 - Issue Time Tracking --- Worklog Id: (was: 810139) Time Spent: 4h (was: 3h 50m) > Upgrade to JLine3 > - > > Key: HIVE-25495 > URL: https://issues.apache.org/jira/browse/HIVE-25495 > Project: Hive > Issue Type: Improvement >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Major > Labels: pull-request-available > Time Spent: 4h > Remaining Estimate: 0h > > Jline 2 has been discontinued a long while ago. Hadoop uses JLine3 so Hive > should match. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-25495) Upgrade to JLine3
[ https://issues.apache.org/jira/browse/HIVE-25495?focusedWorklogId=810128&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810128 ] ASF GitHub Bot logged work on HIVE-25495: - Author: ASF GitHub Bot Created on: 19/Sep/22 17:40 Start Date: 19/Sep/22 17:40 Worklog Time Spent: 10m Work Description: LA-Toth commented on PR #3069: URL: https://github.com/apache/hive/pull/3069#issuecomment-1251337365 I still can't run the tests: latest kgyrtkirk/hive-dev-box:executor docker image freshly cloned hive repo from github (master branch) `mvn clean install -DskipTests -T 1C mvn clean install -pl beeline/ mvn clean install -pl beeline/ -Dtest=TestHiveCli#testDatabaseOptions ` results: `[ERROR] Failures: [ERROR] TestHiveCli.testDatabaseOptions:107->verifyCMD:270 The expected keyword "testtbl" occur in the output: [ERROR] TestHiveCli.testSetHeaderValue:89->verifyCMD:270 The expected keyword "testtbl.a testtbl.b" occur in the output: [ERROR] TestHiveCli.testSourceCmd:114->verifyCMD:270 The expected keyword "sc1" occur in the output: [ERROR] TestHiveCli.testSourceCmd2:122->verifyCMD:270 The expected keyword "sc3" occur in the output: [ERROR] TestHiveCli.testSourceCmd4:138->verifyCMD:270 The expected keyword "testtbl" occur in the output: [ERROR] TestHiveCli.testSqlFromCmdWithDBName:166->verifyCMD:270 The expected keyword "testtbl" occur in the output:` and `[ERROR] Failures: [ERROR] TestHiveCli.setup:295->initFromFile:321->executeCMD:256 Supported return code is 0 while the actual is 2` Issue Time Tracking --- Worklog Id: (was: 810128) Time Spent: 3h 50m (was: 3h 40m) > Upgrade to JLine3 > - > > Key: HIVE-25495 > URL: https://issues.apache.org/jira/browse/HIVE-25495 > Project: Hive > Issue Type: Improvement >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Major > Labels: pull-request-available > Time Spent: 3h 50m > Remaining Estimate: 0h > > Jline 2 has been discontinued a long while ago. Hadoop uses JLine3 so Hive > should match. 
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-25495) Upgrade to JLine3
[ https://issues.apache.org/jira/browse/HIVE-25495?focusedWorklogId=810110&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810110 ] ASF GitHub Bot logged work on HIVE-25495: - Author: ASF GitHub Bot Created on: 19/Sep/22 17:02 Start Date: 19/Sep/22 17:02 Worklog Time Spent: 10m Work Description: LA-Toth commented on PR #3069: URL: https://github.com/apache/hive/pull/3069#issuecomment-1251296736 I had one and only one issue: I was unable to run the tests and get a usable result. They are unstable even in @kgyrtkirk's docker image that backs the Jenkins CI, AFAIK. So I was unable to decide whether the current PR works or not. Issue Time Tracking --- Worklog Id: (was: 810110) Time Spent: 3h 40m (was: 3.5h) > Upgrade to JLine3 > - > > Key: HIVE-25495 > URL: https://issues.apache.org/jira/browse/HIVE-25495 > Project: Hive > Issue Type: Improvement >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Major > Labels: pull-request-available > Time Spent: 3h 40m > Remaining Estimate: 0h > > JLine 2 was discontinued a long while ago. Hadoop uses JLine 3, so Hive > should match. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-26536) Enable 'hive.acid.truncate.usebase' by default
[ https://issues.apache.org/jira/browse/HIVE-26536?focusedWorklogId=810087&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810087 ] ASF GitHub Bot logged work on HIVE-26536: - Author: ASF GitHub Bot Created on: 19/Sep/22 16:01 Start Date: 19/Sep/22 16:01 Worklog Time Spent: 10m Work Description: sonarcloud[bot] commented on PR #3598: URL: https://github.com/apache/hive/pull/3598#issuecomment-1251221555 Kudos, SonarCloud Quality Gate passed! 2 Bugs, 0 Vulnerabilities, 0 Security Hotspots, 9 Code Smells; no coverage or duplication information. Full report: https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3598 Issue Time Tracking --- Worklog Id: (was: 810087) Time Spent: 50m (was: 40m) > Enable 'hive.acid.truncate.usebase' by default > -- > > Key: HIVE-26536 > URL: https://issues.apache.org/jira/browse/HIVE-26536 > Project: Hive > Issue Type: Improvement >Reporter: Sourabh Badhya >Assignee: Sourabh Badhya >Priority: Major > Labels: 
pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > The config 'hive.metastore.acid.truncate.usebase' was disabled due to > HIVE-25050, and subsequent patches in master have renamed it to > 'hive.acid.truncate.usebase'. Since the fixes this config requires are > already present in the current master branch, we can enable it by default. > The scope of this Jira is therefore to enable it in the master branch only, > so that future releases can benefit from this feature. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-26539) Prevent unsafe deserialization in PartitionExpressionForMetastore
[ https://issues.apache.org/jira/browse/HIVE-26539?focusedWorklogId=810084&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810084 ] ASF GitHub Bot logged work on HIVE-26539: - Author: ASF GitHub Bot Created on: 19/Sep/22 15:47 Start Date: 19/Sep/22 15:47 Worklog Time Spent: 10m Work Description: sonarcloud[bot] commented on PR #3605: URL: https://github.com/apache/hive/pull/3605#issuecomment-1251205290 Kudos, SonarCloud [Quality Gate passed](https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3605)! [2 Bugs](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3605&resolved=false&types=BUG), [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3605&resolved=false&types=VULNERABILITY), [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3605&resolved=false&types=SECURITY_HOTSPOT), [16 Code Smells](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3605&resolved=false&types=CODE_SMELL), No Coverage information, No Duplication information. Issue Time Tracking --- Worklog Id: (was: 810084) Time Spent: 0.5h (was: 20m) > Prevent unsafe deserialization in PartitionExpressionForMetastore > - > > Key: HIVE-26539 > URL: https://issues.apache.org/jira/browse/HIVE-26539 > Project: Hive > Issue Type: Improvement >Reporter: Zhihua Deng >Assignee: Zhihua Deng >Priority: 
Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
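The issue title names a standard Java hardening technique: constrain what `ObjectInputStream` may deserialize. The sketch below is not the actual Hive patch (the fix in PartitionExpressionForMetastore may differ); it only illustrates the JDK's serialization filtering (JEP 290) with made-up helper names:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;

// Only classes assignable to 'allowed' may be instantiated from the stream.
// 'SafeDeser' and 'readAllowed' are illustrative names, not Hive API.
public class SafeDeser {
    public static Object readAllowed(byte[] bytes, Class<?> allowed)
            throws IOException, ClassNotFoundException {
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes));
        in.setObjectInputFilter(info ->
                info.serialClass() == null || allowed.isAssignableFrom(info.serialClass())
                        ? ObjectInputFilter.Status.ALLOWED
                        : ObjectInputFilter.Status.REJECTED);
        // Throws java.io.InvalidClassException if the filter rejects a class.
        return in.readObject();
    }
}
```

For genuinely untrusted input, a production filter would normally also bound depth and array sizes, e.g. via `ObjectInputFilter.Config.createFilter("maxdepth=10;maxarray=10000;...")`.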
[jira] [Work logged] (HIVE-26496) FetchOperator scans delete_delta folders multiple times causing slowness
[ https://issues.apache.org/jira/browse/HIVE-26496?focusedWorklogId=810080&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810080 ] ASF GitHub Bot logged work on HIVE-26496: - Author: ASF GitHub Bot Created on: 19/Sep/22 15:44 Start Date: 19/Sep/22 15:44 Worklog Time Spent: 10m Work Description: sonarcloud[bot] commented on PR #3606: URL: https://github.com/apache/hive/pull/3606#issuecomment-1251200951 Kudos, SonarCloud [Quality Gate passed](https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3606)! [2 Bugs](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3606&resolved=false&types=BUG), [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3606&resolved=false&types=VULNERABILITY), [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3606&resolved=false&types=SECURITY_HOTSPOT), [8 Code Smells](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3606&resolved=false&types=CODE_SMELL), No Coverage information, No Duplication information. Issue Time Tracking --- Worklog Id: (was: 810080) Time Spent: 8h 50m (was: 8h 40m) > FetchOperator scans delete_delta folders multiple times causing slowness > > > Key: HIVE-26496 > URL: https://issues.apache.org/jira/browse/HIVE-26496 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Rajesh Balamohan 
>Assignee: Dmitriy Fingerman >Priority: Major > Labels: pull-request-available > Time Spent: 8h 50m > Remaining Estimate: 0h > > FetchOperator scans far more files/directories than needed. > For example, here is the layout of a table that went through a series of updates and deletes; a set of "delta" and "delete_delta" folders was created. > {noformat} > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/base_001 > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_002_002_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_003_003_
[jira] [Work logged] (HIVE-26420) Configurable timeout for HiveSplitGenerator to wait for LLAP instances
[ https://issues.apache.org/jira/browse/HIVE-26420?focusedWorklogId=810075&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810075 ] ASF GitHub Bot logged work on HIVE-26420: - Author: ASF GitHub Bot Created on: 19/Sep/22 15:20 Start Date: 19/Sep/22 15:20 Worklog Time Spent: 10m Work Description: sonarcloud[bot] commented on PR #3468: URL: https://github.com/apache/hive/pull/3468#issuecomment-1251170933 Kudos, SonarCloud [Quality Gate passed](https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3468)! [2 Bugs](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3468&resolved=false&types=BUG), [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3468&resolved=false&types=VULNERABILITY), [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3468&resolved=false&types=SECURITY_HOTSPOT), [14 Code Smells](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3468&resolved=false&types=CODE_SMELL), No Coverage information, No Duplication information. Issue Time Tracking --- Worklog Id: (was: 810075) Time Spent: 50m (was: 40m) > Configurable timeout for HiveSplitGenerator to wait for LLAP instances > -- > > Key: HIVE-26420 > URL: https://issues.apache.org/jira/browse/HIVE-26420 > Project: Hive > Issue Type: Improvement >Reporter: László Bodor >Assignee: László Bodor 
>Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > In some circumstances we cannot guarantee that LLAP daemons are ready as soon > as the Tez AMs are, but we don't want the query to fail immediately with: > {code} > Caused by: java.lang.IllegalArgumentException: No running LLAP daemons! > Please check LLAP service status and zookeeper configuration > com.google.common.base.Preconditions.checkArgument(Preconditions.java:142) > > org.apache.hadoop.hive.ql.exec.tez.Utils.getCustomSplitLocationProvider(Utils.java:105) > > org.apache.hadoop.hive.ql.exec.tez.Utils.getSplitLocationProvider(Utils.java:77) > > org.apache.
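The bounded wait being proposed could look roughly like this standalone sketch (method and parameter names are invented here, not the names used in the actual HiveSplitGenerator patch):

```java
import java.util.function.IntSupplier;

public class LlapWaitSketch {
    /**
     * Polls the current LLAP daemon count until it becomes positive or the
     * configurable timeout elapses, instead of failing on the first check.
     */
    public static boolean waitForDaemons(IntSupplier runningDaemons,
                                         long timeoutMs, long pollMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
            if (runningDaemons.getAsInt() > 0) {
                return true; // daemons registered, split generation can proceed
            }
            if (System.currentTimeMillis() >= deadline) {
                return false; // caller can now raise the IllegalArgumentException
            }
            try {
                Thread.sleep(pollMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
    }
}
```

With a timeout of zero this degrades to a single check, so the existing fail-fast behaviour remains available as a configuration choice.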
[jira] [Updated] (HIVE-26541) WebHCatServer start fails with NPE
[ https://issues.apache.org/jira/browse/HIVE-26541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stamatis Zampetakis updated HIVE-26541: --- Description: The TestWebHCatE2e test fails due to the NPE shown below. {noformat} templeton: Server failed to start: null [main] ERROR org.apache.hive.hcatalog.templeton.Main - Server failed to start: java.lang.NullPointerException at org.eclipse.jetty.server.AbstractConnector.<init>(AbstractConnector.java:174) at org.eclipse.jetty.server.AbstractNetworkConnector.<init>(AbstractNetworkConnector.java:44) at org.eclipse.jetty.server.ServerConnector.<init>(ServerConnector.java:220) at org.eclipse.jetty.server.ServerConnector.<init>(ServerConnector.java:143) at org.apache.hive.hcatalog.templeton.Main.createChannelConnector(Main.java:295) at org.apache.hive.hcatalog.templeton.Main.runServer(Main.java:252) at org.apache.hive.hcatalog.templeton.Main.run(Main.java:147) at org.apache.hive.hcatalog.templeton.TestWebHCatE2e.startHebHcatInMem(TestWebHCatE2e.java:94) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) {noformat} > WebHCatServer start fails with NPE > -- > > Key: HIVE-26541 > URL: https://issues.apache.org/jira/browse/HIVE-26541 > Project: Hive > Issue Type: Sub-task >Reporter: Zhiguo Wu >Priority: Major > > The TestWebHCatE2e test fails due to the NPE shown below. 
> {noformat} > templeton: Server failed to start: null > [main] ERROR org.apache.hive.hcatalog.templeton.Main - Server failed to > start: > java.lang.NullPointerException > at > org.eclipse.jetty.server.AbstractConnector.<init>(AbstractConnector.java:174) > at > org.eclipse.jetty.server.AbstractNetworkConnector.<init>(AbstractNetworkConnector.java:44) > at org.eclipse.jetty.server.ServerConnector.<init>(ServerConnector.java:220) > at org.eclipse.jetty.server.ServerConnector.<init>(ServerConnector.java:143) > at > org.apache.hive.hcatalog.templeton.Main.createChannelConnector(Main.java:295) > at org.apache.hive.hcatalog.templeton.Main.runServer(Main.java:252) > at org.apache.hive.hcatalog.templeton.Main.run(Main.java:147) > at > org.apache.hive.hcatalog.templeton.TestWebHCatE2e.startHebHcatInMem(TestWebHCatE2e.java:94) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-26541) WebHCatServer start fails with NPE
[ https://issues.apache.org/jira/browse/HIVE-26541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stamatis Zampetakis updated HIVE-26541: --- Component/s: HCatalog > WebHCatServer start fails with NPE > -- > > Key: HIVE-26541 > URL: https://issues.apache.org/jira/browse/HIVE-26541 > Project: Hive > Issue Type: Sub-task > Components: HCatalog >Reporter: Zhiguo Wu >Priority: Major > > The TestWebHCatE2e test fails due to the NPE shown below. > {noformat} > templeton: Server failed to start: null > [main] ERROR org.apache.hive.hcatalog.templeton.Main - Server failed to > start: > java.lang.NullPointerException > at > org.eclipse.jetty.server.AbstractConnector.<init>(AbstractConnector.java:174) > at > org.eclipse.jetty.server.AbstractNetworkConnector.<init>(AbstractNetworkConnector.java:44) > at org.eclipse.jetty.server.ServerConnector.<init>(ServerConnector.java:220) > at org.eclipse.jetty.server.ServerConnector.<init>(ServerConnector.java:143) > at > org.apache.hive.hcatalog.templeton.Main.createChannelConnector(Main.java:295) > at org.apache.hive.hcatalog.templeton.Main.runServer(Main.java:252) > at org.apache.hive.hcatalog.templeton.Main.run(Main.java:147) > at > org.apache.hive.hcatalog.templeton.TestWebHCatE2e.startHebHcatInMem(TestWebHCatE2e.java:94) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-26496) FetchOperator scans delete_delta folders multiple times causing slowness
[ https://issues.apache.org/jira/browse/HIVE-26496?focusedWorklogId=810040&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810040 ] ASF GitHub Bot logged work on HIVE-26496: - Author: ASF GitHub Bot Created on: 19/Sep/22 13:56 Start Date: 19/Sep/22 13:56 Worklog Time Spent: 10m Work Description: difin opened a new pull request, #3606: URL: https://github.com/apache/hive/pull/3606 …ethod. ### What changes were proposed in this pull request? ### Why are the changes needed? ### Does this PR introduce _any_ user-facing change? ### How was this patch tested? Issue Time Tracking --- Worklog Id: (was: 810040) Time Spent: 8h 40m (was: 8.5h) > FetchOperator scans delete_delta folders multiple times causing slowness > > > Key: HIVE-26496 > URL: https://issues.apache.org/jira/browse/HIVE-26496 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Rajesh Balamohan >Assignee: Dmitriy Fingerman >Priority: Major > Labels: pull-request-available > Time Spent: 8h 40m > Remaining Estimate: 0h > > FetchOperator scans far more files/directories than needed. > For example, here is the layout of a table that went through a series of updates and deletes; a set of "delta" and "delete_delta" folders was created. 
> {noformat} > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/base_001 > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_002_002_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_003_003_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_004_004_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_005_005_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_006_006_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_007_007_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_008_008_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_009_009_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_010_010_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_011_011_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_012_012_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_013_013_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_014_014_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_015_015_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_016_016_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_017_017_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_018_018_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_019_019_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_020_020_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_021_021_ > 
s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delete_delta_022_022_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_002_002_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_003_003_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_004_004_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_005_005_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_006_006_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_007_007_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_008_008_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_009_009_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_010_010_ > s3a://bucket-name/warehouse/tablespace/managed/hive/test.db/date_dim/delta_011_011_ > s3a://bucket-name/warehouse/tablespace/managed/
[jira] [Work logged] (HIVE-26345) SQLOperation class output real exception message to jdbc client
[ https://issues.apache.org/jira/browse/HIVE-26345?focusedWorklogId=810035&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810035 ] ASF GitHub Bot logged work on HIVE-26345: - Author: ASF GitHub Bot Created on: 19/Sep/22 13:29 Start Date: 19/Sep/22 13:29 Worklog Time Spent: 10m Work Description: zhangbutao commented on code in PR #3393: URL: https://github.com/apache/hive/pull/3393#discussion_r974248833 ## ql/src/java/org/apache/hadoop/hive/ql/Driver.java: ## @@ -511,13 +511,17 @@ public void compile(String command, boolean resetTaskIds, boolean deferClose) th } private void prepareForCompile(boolean resetTaskIds) throws CommandProcessorException { -driverTxnHandler.createTxnManager(); -DriverState.setDriverState(driverState); -prepareContext(); -setQueryId(); +try { Review Comment: @zabetak Maybe I didn't describe it exactly. In fact, I tried to fix two related issues in this PR. 1. First: the Beeline client cannot get the real exception message, which was caused by the change (https://issues.apache.org/jira/browse/HIVE-23124). 2. Second: even after the first issue is fixed, the Beeline client cannot get a valid exception state and code in the following test: `set hive.support.concurrency=false;` `set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;` `create table testacid(id int) stored as orc tblproperties('transactional'='true');` _**Error: Error running query: java.lang.RuntimeException: To use DbTxnManager you must set hive.support.concurrency=true (state=,code=0)**_ However, in Hive 3 we get a valid exception with state=42000, code=10264: **_Error: Error while compiling statement: FAILED: RuntimeException [Error 10264]: To use DbTxnManager you must set hive.support.concurrency=true (state=42000,code=10264)_** This change was introduced by https://issues.apache.org/jira/browse/HIVE-22526, which missed handling the exception from `driverTxnHandler#createTxnManager`; I think we should fix this as well. 
Issue Time Tracking --- Worklog Id: (was: 810035) Time Spent: 1h 10m (was: 1h) > SQLOperation class output real exception message to jdbc client > --- > > Key: HIVE-26345 > URL: https://issues.apache.org/jira/browse/HIVE-26345 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 4.0.0-alpha-2 >Reporter: zhangbutao >Assignee: zhangbutao >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > The Beeline client may not get the real exception from the _*SQLOperation*_ class, and > the user may not know how to fix the query based on the client exception message. > Step to repro: > {code:java} > set hive.support.concurrency=false; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > create table testacid(id int) stored as orc > tblproperties('transactional'='true');{code} > Beeline console output exception: > {code:java} > Error: Error running query (state=,code=0) {code} > > However, Hive3 beeline can output readable exception information: > {code:java} > Error: Error while compiling statement: FAILED: RuntimeException [Error > 10264]: To use DbTxnManager you must set hive.support.concurrency=true > (state=42000,code=10264) {code} > > This change was introduced by HIVE-23124; I think we should fix this to > output the real exception to prompt users to amend the query. -- This message was sent by Atlassian Jira (v8.20.10#820010)
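The shape of the fix under discussion (wrap the failing call and surface message, SQLState, and error code to the client) can be illustrated standalone; the names below mimic Hive's but this is a simplified sketch, not the actual Driver change:

```java
// Wrap the failing compile-time call and surface the underlying message
// together with a meaningful SQLState/error code, instead of the bare
// "Error running query (state=,code=0)".
public class CompilePrepSketch {
    public static class CommandProcessorException extends Exception {
        public final int errorCode;
        public final String sqlState;
        public CommandProcessorException(int errorCode, String sqlState,
                                         String msg, Throwable cause) {
            super(msg, cause);
            this.errorCode = errorCode;
            this.sqlState = sqlState;
        }
    }

    // Stand-in for driverTxnHandler.createTxnManager() blowing up during compile.
    static void createTxnManager() {
        throw new RuntimeException(
                "To use DbTxnManager you must set hive.support.concurrency=true");
    }

    public static void prepareForCompile() throws CommandProcessorException {
        try {
            createTxnManager();
        } catch (RuntimeException e) {
            // Keep the real message and attach state/code, mirroring the
            // Hive 3 output "(state=42000,code=10264)" quoted above.
            throw new CommandProcessorException(10264, "42000", e.getMessage(), e);
        }
    }

    public static void main(String[] args) {
        try {
            prepareForCompile();
        } catch (CommandProcessorException e) {
            // prints: To use DbTxnManager you must set
            // hive.support.concurrency=true (state=42000,code=10264)
            System.out.println(e.getMessage()
                    + " (state=" + e.sqlState + ",code=" + e.errorCode + ")");
        }
    }
}
```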
[jira] [Work logged] (HIVE-25848) Empty result for structs in point lookup optimization with vectorization on
[ https://issues.apache.org/jira/browse/HIVE-25848?focusedWorklogId=810024&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810024 ] ASF GitHub Bot logged work on HIVE-25848: - Author: ASF GitHub Bot Created on: 19/Sep/22 12:41 Start Date: 19/Sep/22 12:41 Worklog Time Spent: 10m Work Description: sonarcloud[bot] commented on PR #3592: URL: https://github.com/apache/hive/pull/3592#issuecomment-1250968545 Kudos, SonarCloud [Quality Gate passed](https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3592)! [2 Bugs](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3592&resolved=false&types=BUG), [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3592&resolved=false&types=VULNERABILITY), [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3592&resolved=false&types=SECURITY_HOTSPOT), [9 Code Smells](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3592&resolved=false&types=CODE_SMELL), No Coverage information, No Duplication information. Issue Time Tracking --- Worklog Id: (was: 810024) Time Spent: 1.5h (was: 1h 20m) > Empty result for structs in point lookup optimization with vectorization on > --- > > Key: HIVE-25848 > URL: https://issues.apache.org/jira/browse/HIVE-25848 > Project: Hive > Issue Type: Bug >Reporter: Ádám Szita >Assignee: Hankó Gergely 
>Priority: Major > Labels: pull-request-available > Time Spent: 1.5h > Remaining Estimate: 0h > > Repro steps: > {code:java} > set hive.fetch.task.conversion=none; > create table test (a string) partitioned by (y string, m string); > insert into test values ('aa', 2022, 1); > select * from test where (y=year(date_sub(current_date,4)) and > m=month(date_sub(current_date,4))) or (y=year(date_sub(current_date,10)) and > m=month(date_sub(current_date,10)) ); > --gives empty result{code} > Turning off either of the features below yields the correct result (1 row > expected): > {code:java} > set hive.optimize.point.lookup=false; > set hive.cbo.enable=false; > set hive.vec
[jira] [Work logged] (HIVE-26345) SQLOperation class output real exception message to jdbc client
[ https://issues.apache.org/jira/browse/HIVE-26345?focusedWorklogId=810015&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810015 ] ASF GitHub Bot logged work on HIVE-26345: - Author: ASF GitHub Bot Created on: 19/Sep/22 12:23 Start Date: 19/Sep/22 12:23 Worklog Time Spent: 10m Work Description: zabetak commented on code in PR #3393: URL: https://github.com/apache/hive/pull/3393#discussion_r974185170 ## ql/src/java/org/apache/hadoop/hive/ql/Driver.java: ## @@ -511,13 +511,17 @@ public void compile(String command, boolean resetTaskIds, boolean deferClose) th } private void prepareForCompile(boolean resetTaskIds) throws CommandProcessorException { -driverTxnHandler.createTxnManager(); -DriverState.setDriverState(driverState); -prepareContext(); -setQueryId(); +try { Review Comment: I suppose that this try-catch block was introduced to address the [so-called regression](https://github.com/apache/hive/pull/3393#issuecomment-1207815099) from HIVE-22526 but I don't fully understand where the regression comes from. I had a quick look at the diff introduced by HIVE-22526, but as far as I can see the `driverTxnHandler#createTxnManager` call there is not wrapped in a try-catch block, and it seems that the exception is not passed to the `handleException` method. @zhangbutao can you explain a bit more about where exactly the regression is? Issue Time Tracking --- Worklog Id: (was: 810015) Time Spent: 1h (was: 50m) > SQLOperation class output real exception message to jdbc client > --- > > Key: HIVE-26345 > URL: https://issues.apache.org/jira/browse/HIVE-26345 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 4.0.0-alpha-2 >Reporter: zhangbutao >Assignee: zhangbutao >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > The Beeline client may not get the real exception from the _*SQLOperation*_ class, and > the user may not know how to fix the query based on the client exception message. 
> Steps to reproduce: > {code:java} > set hive.support.concurrency=false; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > create table testacid(id int) stored as orc > tblproperties('transactional'='true');{code} > The Beeline console outputs this exception: > {code:java} > Error: Error running query (state=,code=0) {code} > > However, the Hive 3 Beeline can output readable exception information: > {code:java} > Error: Error while compiling statement: FAILED: RuntimeException [Error > 10264]: To use DbTxnManager you must set hive.support.concurrency=true > (state=42000,code=10264) {code} > > This change was introduced by HIVE-23124; I think we should fix this to > output the real exception and prompt users to amend the query. -- This message was sent by Atlassian Jira (v8.20.10#820010)
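The fix being asked for — surfacing the underlying message instead of the bare "Error running query" — can be sketched independently of Hive. This is a minimal, hypothetical illustration (class and method names are invented, not Hive's actual code):

```java
// Hypothetical sketch: walk an exception's cause chain so the client-facing
// error carries the root message (e.g. the DbTxnManager/concurrency hint)
// instead of a generic wrapper message.
public class RootCauseSketch {
    public static String rootCauseMessage(Throwable t) {
        Throwable cur = t;
        // descend until the innermost cause (guard against self-referential causes)
        while (cur.getCause() != null && cur.getCause() != cur) {
            cur = cur.getCause();
        }
        return cur.getMessage();
    }

    public static void main(String[] args) {
        RuntimeException root = new RuntimeException(
            "To use DbTxnManager you must set hive.support.concurrency=true");
        Exception wrapped = new Exception("Error running query", root);
        // prints the root message, which is what the Hive 3 Beeline output showed
        System.out.println(rootCauseMessage(wrapped));
    }
}
```

Whether Hive should propagate the root cause at this layer or in SQLOperation is exactly what the PR discussion is about; the sketch only shows the general pattern.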
[jira] [Work logged] (HIVE-26504) User is not able to drop table
[ https://issues.apache.org/jira/browse/HIVE-26504?focusedWorklogId=810009&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-810009 ] ASF GitHub Bot logged work on HIVE-26504: - Author: ASF GitHub Bot Created on: 19/Sep/22 12:06 Start Date: 19/Sep/22 12:06 Worklog Time Spent: 10m Work Description: deniskuzZ merged PR #3557: URL: https://github.com/apache/hive/pull/3557 Issue Time Tracking --- Worklog Id: (was: 810009) Time Spent: 2h 50m (was: 2h 40m) > User is not able to drop table > -- > > Key: HIVE-26504 > URL: https://issues.apache.org/jira/browse/HIVE-26504 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: László Végh >Assignee: László Végh >Priority: Major > Labels: pull-request-available > Time Spent: 2h 50m > Remaining Estimate: 0h > > Hive won't store anything in *TAB_COL_STATS* for a partitioned table, whereas > Impala stores complete column stats in TAB_COL_STATS for partitioned tables. > Deleting entries in TAB_COL_STATS is based on (DB_NAME, TABLE_NAME), not on > TBL_ID, so renamed tables kept their old names in TAB_COL_STATS. > To reproduce: > {code:java} > beeline: > set hive.create.as.insert.only=false; > set hive.create.as.acid=false; > create table testes.table_name_with_partition (id tinyint, name string) > partitioned by (col_to_partition bigint) stored as parquet; > insert into testes.table_name_with_partition (id, name, col_to_partition) > values (1, "a", 2020), (2, "b", 2021), (3, "c", 2022); > impala: > compute stats testes.table_name_with_partition; -- backend shows new entries > in TAB_COL_STATS > beeline: > alter table testes.table_name_with_partition rename to > testes2.table_that_cant_be_droped; > drop table testes2.table_that_cant_be_droped; -- This fails with a > TAB_COL_STATS_fkey constraint violation. 
> {code} > Exception trace for drop table failure > {code:java} > Caused by: org.postgresql.util.PSQLException: ERROR: update or delete on > table "TBLS" violates foreign key constraint "TAB_COL_STATS_fkey" on table > "TAB_COL_STATS" > Detail: Key (TBL_ID)=(19816) is still referenced from table "TAB_COL_STATS". > at > org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2532) > at > org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2267) > ... 50 more > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
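The failure mechanism — stats rows deleted by (DB_NAME, TABLE_NAME) going stale after a rename, while the foreign key references TBL_ID — can be modeled with a toy in-memory version. All names here are hypothetical; this is a sketch of the bug's logic, not metastore code:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of HIVE-26504: TAB_COL_STATS rows are deleted by (db, table) name,
// but the FK constraint references TBL_ID, so a rename orphans the stats row.
public class StaleStatsSketch {
    // key: "db.table" as written at stats-compute time; value: the TBL_ID it references
    public static final Map<String, Long> tabColStats = new HashMap<>();

    public static void computeStats(String db, String table, long tblId) {
        tabColStats.put(db + "." + table, tblId);
    }

    // Deletion keyed by name, as in the buggy path: after a rename this removes
    // nothing, because the surviving row still carries the pre-rename name.
    public static void deleteStatsByName(String db, String table) {
        tabColStats.remove(db + "." + table);
    }

    // Analogue of the TAB_COL_STATS_fkey check: any row still pointing at TBL_ID?
    public static boolean referencesTblId(long tblId) {
        return tabColStats.containsValue(tblId);
    }

    public static void main(String[] args) {
        computeStats("testes", "table_name_with_partition", 19816L);
        // rename to testes2.table_that_cant_be_droped updates TBLS only;
        // the drop then tries to delete stats under the NEW name:
        deleteStatsByName("testes2", "table_that_cant_be_droped"); // no-op
        System.out.println(referencesTblId(19816L)); // true -> FK violation on drop
    }
}
```

Keying the deletion on TBL_ID instead of the name pair makes it immune to renames, which matches the direction of the merged fix.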
[jira] [Updated] (HIVE-26545) Error communicating with the metastore
[ https://issues.apache.org/jira/browse/HIVE-26545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zengxl updated HIVE-26545: -- Description: * All my clients and servers are set as follows {code:java} hive.support.concurrency true hive.txn.manager org.apache.hadoop.hive.ql.lockmgr.DbTxnManager hive.compactor.initiator.on true {code} * Throw exceptions frequently {code:java} 2022-09-19T02:27:46,331 INFO [394a5f2b-b657-4654-837b-482fa2bf947d HiveServer2-Handler-Pool: Thread-452]: session.SessionState (:()) - Resetting thread name to HiveServer2-Handler-Pool: Thread-452 2022-09-19T02:27:50,256 ERROR [HiveServer2-Background-Pool: Thread-7336]: ql.Driver (:()) - FAILED: Error in acquiring locks: Error communicating with the metastore org.apache.hadoop.hive.ql.lockmgr.LockException: Error communicating with the metastore at org.apache.hadoop.hive.ql.lockmgr.DbLockManager.lock(DbLockManager.java:178) at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.acquireLocks(DbTxnManager.java:607) at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.acquireLocksWithHeartbeatDelay(DbTxnManager.java:623) at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.acquireLocks(DbTxnManager.java:276) at org.apache.hadoop.hive.ql.lockmgr.HiveTxnManagerImpl.acquireLocks(HiveTxnManagerImpl.java:76) at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.acquireLocks(DbTxnManager.java:93) at org.apache.hadoop.hive.ql.Driver.acquireLocks(Driver.java:1610) at org.apache.hadoop.hive.ql.Driver.lockAndRespond(Driver.java:1795) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1965) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703) at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:224) at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87) at 
org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:316) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878) at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:329) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.thrift.TApplicationException: Internal error processing lock at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_lock(ThriftHiveMetastore.java:5299) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.lock(ThriftHiveMetastore.java:5286) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.lock(HiveMetaStoreClient.java:2568) at sun.reflect.GeneratedMethodAccessor112.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212) at com.sun.proxy.$Proxy40.lock(Unknown Source) at sun.reflect.GeneratedMethodAccessor112.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2773) at com.sun.proxy.$Proxy40.lock(Unknown Source) at 
org.apache.hadoop.hive.ql.lockmgr.DbLockManager.lock(DbLockManager.java:103) ... 23 more {code} was: * All my clients and servers are set as follows {code:java} hive.support.concurrency false hive.txn.manager org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager hive.compactor.initiator.on false {code} * However, the following information is found on the Hiveserver2 server 2022-09-19T00:10:02,222 INFO [09e6b2b3-4c12-40e8-98ed-417b2c790be9 HiveServer2-Handler-Pool: Thread-92]: metastore.HiveMetaStoreClient (:()) - Mestastore configuration hive.txn.m
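The constraint behind this report is the one spelled out verbatim in the HIVE-26345 error above: DbTxnManager must be paired with hive.support.concurrency=true. That rule can be expressed as a small consistency check — a hypothetical sketch, not a Hive API:

```java
// Hypothetical config consistency check mirroring Hive's documented rule that
// DbTxnManager requires hive.support.concurrency=true. Returns an error
// message for an inconsistent pair, or null when the pair is consistent.
public class TxnConfigSketch {
    public static String validate(boolean supportConcurrency, String txnManager) {
        boolean isDbTxnManager = txnManager.endsWith(".DbTxnManager");
        if (isDbTxnManager && !supportConcurrency) {
            return "To use DbTxnManager you must set hive.support.concurrency=true";
        }
        return null; // consistent
    }

    public static void main(String[] args) {
        // the reporter's original client-side configuration: consistent
        System.out.println(validate(false,
            "org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager"));
        // DbTxnManager without concurrency support: inconsistent
        System.out.println(validate(false,
            "org.apache.hadoop.hive.ql.lockmgr.DbTxnManager"));
    }
}
```

When client and server disagree on these settings — as the "Mestastore configuration hive.txn.manager changed" log line suggests here — lock requests can fail server-side even though each side's local configuration looks valid.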
[jira] [Updated] (HIVE-26545) Error communicating with the metastore
[ https://issues.apache.org/jira/browse/HIVE-26545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zengxl updated HIVE-26545: -- Component/s: Hive (was: HiveServer2)
[jira] [Updated] (HIVE-26545) Error communicating with the metastore
[ https://issues.apache.org/jira/browse/HIVE-26545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zengxl updated HIVE-26545: -- Issue Type: Bug (was: Test)
[jira] [Updated] (HIVE-26545) Error communicating with the metastore
[ https://issues.apache.org/jira/browse/HIVE-26545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zengxl updated HIVE-26545: -- Summary: Error communicating with the metastore (was: hive.txn.manager changed)
[jira] [Updated] (HIVE-26545) Error communicating with the metastore
[ https://issues.apache.org/jira/browse/HIVE-26545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zengxl updated HIVE-26545: -- Priority: Major (was: Trivial)
[jira] (HIVE-25790) Make managed table copies handle updates (FileUtils)
[ https://issues.apache.org/jira/browse/HIVE-25790 ] Teddy Choi deleted comment on HIVE-25790: --- was (Author: teddy.choi): I created a pull request. Its third commit is running on the upstream Jenkins. > Make managed table copies handle updates (FileUtils) > > > Key: HIVE-25790 > URL: https://issues.apache.org/jira/browse/HIVE-25790 > Project: Hive > Issue Type: Improvement >Reporter: Haymant Mangla >Assignee: Teddy Choi >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-25790) Make managed table copies handle updates (FileUtils)
[ https://issues.apache.org/jira/browse/HIVE-25790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Teddy Choi updated HIVE-25790: -- Status: Patch Available (was: In Progress) I created a pull request. Its third commit is running on the upstream Jenkins.
[jira] [Updated] (HIVE-26545) hive.txn.manager changed
[ https://issues.apache.org/jira/browse/HIVE-26545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zengxl updated HIVE-26545: -- Issue Type: Test (was: Bug)
[jira] [Updated] (HIVE-26545) hive.txn.manager changed
[ https://issues.apache.org/jira/browse/HIVE-26545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zengxl updated HIVE-26545: -- Priority: Trivial (was: Major) > hive.txn.manager changed > > > Key: HIVE-26545 > URL: https://issues.apache.org/jira/browse/HIVE-26545 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.1.2 >Reporter: zengxl >Priority: Trivial > > * All my clients and servers are set as follows > {code:java} > > hive.support.concurrency > false > > > hive.txn.manager > org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager > > > hive.compactor.initiator.on > false > {code} > * However, the following information is found on the Hiveserver2 server > 2022-09-19T00:10:02,222 INFO [09e6b2b3-4c12-40e8-98ed-417b2c790be9 > HiveServer2-Handler-Pool: Thread-92]: metastore.HiveMetaStoreClient (:()) - > Mestastore configuration hive.txn.manager changed from > org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager to > org.apache.hadoop.hive.ql.lockmgr.DbTxnManager > * Why would you want to change this configuration? 
> ** The following exception occurs due to the configuration change > {code:java} > 2022-09-19T02:27:46,331 INFO [394a5f2b-b657-4654-837b-482fa2bf947d > HiveServer2-Handler-Pool: Thread-452]: session.SessionState (:()) - Resetting > thread name to HiveServer2-Handler-Pool: Thread-452 > 2022-09-19T02:27:50,256 ERROR [HiveServer2-Background-Pool: Thread-7336]: > ql.Driver (:()) - FAILED: Error in acquiring locks: Error communicating with > the metastore > org.apache.hadoop.hive.ql.lockmgr.LockException: Error communicating with the > metastore > at > org.apache.hadoop.hive.ql.lockmgr.DbLockManager.lock(DbLockManager.java:178) > at > org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.acquireLocks(DbTxnManager.java:607) > at > org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.acquireLocksWithHeartbeatDelay(DbTxnManager.java:623) > at > org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.acquireLocks(DbTxnManager.java:276) > at > org.apache.hadoop.hive.ql.lockmgr.HiveTxnManagerImpl.acquireLocks(HiveTxnManagerImpl.java:76) > at > org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.acquireLocks(DbTxnManager.java:93) > at org.apache.hadoop.hive.ql.Driver.acquireLocks(Driver.java:1610) > at org.apache.hadoop.hive.ql.Driver.lockAndRespond(Driver.java:1795) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1965) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:224) > at > org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87) > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:316) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878) > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:329) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: org.apache.thrift.TApplicationException: Internal error processing > lock > at > org.apache.thrift.TApplicationException.read(TApplicationException.java:111) > at > org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_lock(ThriftHiveMetastore.java:5299) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.lock(ThriftHiveMetastore.java:5286) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.lock(HiveMetaStoreClient.java:2568) > at sun.reflect.GeneratedMethodAccessor112.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212) > at com.sun.proxy.$Proxy40.lock(Unknown Source) > at sun
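The "Mestastore configuration hive.txn.manager changed from ... to ..." line in the report above is the client noticing that a setting it holds locally differs from what the server reports. As an illustration only (the `reportDrift` helper below is hypothetical, not the actual logic in HiveMetaStoreClient), such a drift check amounts to comparing two key/value maps:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: mimics how a client could detect that a setting such as
// hive.txn.manager differs between its local config and the server's config.
public class ConfigDrift {

    // Returns one human-readable message for every key whose value differs.
    static List<String> reportDrift(Map<String, String> client, Map<String, String> server) {
        List<String> drift = new ArrayList<>();
        for (Map.Entry<String, String> e : client.entrySet()) {
            String serverValue = server.get(e.getKey());
            if (serverValue != null && !serverValue.equals(e.getValue())) {
                drift.add("Metastore configuration " + e.getKey()
                        + " changed from " + e.getValue() + " to " + serverValue);
            }
        }
        return drift;
    }

    public static void main(String[] args) {
        Map<String, String> client = new HashMap<>();
        client.put("hive.txn.manager", "org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager");
        Map<String, String> server = new HashMap<>();
        server.put("hive.txn.manager", "org.apache.hadoop.hive.ql.lockmgr.DbTxnManager");
        // One drift message is produced for the mismatched txn manager.
        System.out.println(reportDrift(client, server).get(0));
    }
}
```

The reporter's lock failure follows from exactly this mismatch: the client assumes DummyTxnManager while the server-side setting resolves to DbTxnManager, so lock requests reach a code path the client never expected to use.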
[jira] [Work logged] (HIVE-26524) Use Calcite to remove sections of a query plan known never produces rows
[ https://issues.apache.org/jira/browse/HIVE-26524?focusedWorklogId=81&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-81 ] ASF GitHub Bot logged work on HIVE-26524: - Author: ASF GitHub Bot Created on: 19/Sep/22 11:47 Start Date: 19/Sep/22 11:47 Worklog Time Spent: 10m Work Description: sonarcloud[bot] commented on PR #3588: URL: https://github.com/apache/hive/pull/3588#issuecomment-1250912746 Kudos, SonarCloud Quality Gate passed! 2 Bugs (rated C), 0 Vulnerabilities (rated A), 0 Security Hotspots (rated A), 16 Code Smells (rated A); no coverage or duplication information. Details: https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3588 Issue Time Tracking --- Worklog Id: (was: 81) Time Spent: 1h (was: 50m) > Use Calcite to remove sections of a query plan known never produces rows > > > Key: HIVE-26524 > URL: https://issues.apache.org/jira/browse/HIVE-26524 > Project: Hive > Issue Type: Improvement > Components: CBO >Reporter: Krisztian Kasa >Assignee: 
Krisztian Kasa >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > Calcite has a set of rules to remove sections of a query plan that are known to never produce any rows. In some cases the whole plan can be removed. Such plans are represented by a single {{Values}} operator with no tuples, e.g.:
{code:java}
select y + 1 from (select a1 y, b1 z from t1 where b1 > 10) q WHERE 1=0
{code}
{code:java}
HiveValues(tuples=[[]])
{code}
In other cases, when the plan has outer joins or set operators, some branches can be replaced with empty values; moving forward, in some cases the join/set operator can be removed
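The transformation described above can be pictured outside Calcite: once the optimizer proves a filter predicate constant-false (as with `WHERE 1=0`), the branch collapses to an empty relation without touching its input. A toy sketch of that idea, with plain Java lists standing in for relations; this is not Calcite's rule implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.function.Predicate;

// Toy model of empty-plan pruning: if the predicate is provably false,
// the input is never scanned and an empty "Values" relation is returned.
public class EmptyPrune {

    static <T> List<T> filter(List<T> rows, Predicate<T> p, boolean provablyFalse) {
        if (provablyFalse) {
            // Corresponds to HiveValues(tuples=[[]]) in the plan.
            return Collections.emptyList();
        }
        List<T> out = new ArrayList<>();
        for (T r : rows) {
            if (p.test(r)) {
                out.add(r);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> rows = Arrays.asList(1, 2, 3);
        // Like WHERE 1=0: the scan and per-row evaluation are skipped entirely.
        System.out.println(filter(rows, r -> r > 1, true));
    }
}
```

The payoff in a real engine is that everything upstream of the pruned branch (scans, shuffles, join inputs) disappears from the plan as well.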
[jira] [Work logged] (HIVE-25790) Make managed table copies handle updates (FileUtils)
[ https://issues.apache.org/jira/browse/HIVE-25790?focusedWorklogId=80&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-80 ] ASF GitHub Bot logged work on HIVE-25790: - Author: ASF GitHub Bot Created on: 19/Sep/22 11:46 Start Date: 19/Sep/22 11:46 Worklog Time Spent: 10m Work Description: sonarcloud[bot] commented on PR #3582: URL: https://github.com/apache/hive/pull/3582#issuecomment-1250912289 Kudos, SonarCloud Quality Gate passed! 2 Bugs (rated C), 0 Vulnerabilities (rated A), 0 Security Hotspots (rated A), 10 Code Smells (rated A); no coverage or duplication information. Details: https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3582 Issue Time Tracking --- Worklog Id: (was: 80) Time Spent: 50m (was: 40m) > Make managed table copies handle updates (FileUtils) > > > Key: HIVE-25790 > URL: https://issues.apache.org/jira/browse/HIVE-25790 > Project: Hive > Issue Type: Improvement >Reporter: Haymant Mangla >Assignee: Teddy Choi >Priority: Major > Labels: 
pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HIVE-25848) Empty result for structs in point lookup optimization with vectorization on
[ https://issues.apache.org/jira/browse/HIVE-25848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hankó Gergely reassigned HIVE-25848: Assignee: Hankó Gergely > Empty result for structs in point lookup optimization with vectorization on > --- > > Key: HIVE-25848 > URL: https://issues.apache.org/jira/browse/HIVE-25848 > Project: Hive > Issue Type: Bug >Reporter: Ádám Szita >Assignee: Hankó Gergely >Priority: Major > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > Repro steps:
{code:java}
set hive.fetch.task.conversion=none;
create table test (a string) partitioned by (y string, m string);
insert into test values ('aa', 2022, 1);
select * from test where (y=year(date_sub(current_date,4)) and m=month(date_sub(current_date,4))) or (y=year(date_sub(current_date,10)) and m=month(date_sub(current_date,10)));
-- gives empty result
{code}
Turning any of the features below off yields the correct result (1 row expected):
{code:java}
set hive.optimize.point.lookup=false;
set hive.cbo.enable=false;
set hive.vectorized.execution.enabled=false;
{code}
The expected result is:
{code}
+---------+---------+---------+
| test.a  | test.y  | test.m  |
+---------+---------+---------+
| aa      | 2022    | 1       |
+---------+---------+---------+
{code}
[jira] [Work started] (HIVE-25848) Empty result for structs in point lookup optimization with vectorization on
[ https://issues.apache.org/jira/browse/HIVE-25848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-25848 started by Hankó Gergely. > Empty result for structs in point lookup optimization with vectorization on > --- > > Key: HIVE-25848 > URL: https://issues.apache.org/jira/browse/HIVE-25848 > Project: Hive > Issue Type: Bug >Reporter: Ádám Szita >Assignee: Hankó Gergely >Priority: Major > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > Repro steps:
{code:java}
set hive.fetch.task.conversion=none;
create table test (a string) partitioned by (y string, m string);
insert into test values ('aa', 2022, 1);
select * from test where (y=year(date_sub(current_date,4)) and m=month(date_sub(current_date,4))) or (y=year(date_sub(current_date,10)) and m=month(date_sub(current_date,10)));
-- gives empty result
{code}
Turning any of the features below off yields the correct result (1 row expected):
{code:java}
set hive.optimize.point.lookup=false;
set hive.cbo.enable=false;
set hive.vectorized.execution.enabled=false;
{code}
The expected result is:
{code}
+---------+---------+---------+
| test.a  | test.y  | test.m  |
+---------+---------+---------+
| aa      | 2022    | 1       |
+---------+---------+---------+
{code}
[jira] [Work logged] (HIVE-26404) HMS memory leak when compaction cleaner fails to remove obsolete files
[ https://issues.apache.org/jira/browse/HIVE-26404?focusedWorklogId=809993&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-809993 ] ASF GitHub Bot logged work on HIVE-26404: - Author: ASF GitHub Bot Created on: 19/Sep/22 11:13 Start Date: 19/Sep/22 11:13 Worklog Time Spent: 10m Work Description: deniskuzZ commented on code in PR #3514: URL: https://github.com/apache/hive/pull/3514#discussion_r974131463
## itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCleanerWithReplication.java:
{code:java}
@@ -44,34 +39,25 @@
 import org.junit.Test;
 import javax.security.auth.login.LoginException;
-import java.io.File;
 import java.io.IOException;
-import java.nio.file.Files;
 import static org.junit.Assert.assertEquals;
 public class TestCleanerWithReplication extends CompactorTest {
   private Path cmRootDirectory;
-  private static FileSystem fs;
   private static MiniDFSCluster miniDFSCluster;
   private final String dbName = "TestCleanerWithReplication";
   @Before
   public void setup() throws Exception {
-    conf = new HiveConf();
-    TestTxnDbUtil.setConfValues(conf);
-    TestTxnDbUtil.cleanDb(conf);
-    conf.set("fs.defaultFS", fs.getUri().toString());
+    HiveConf conf = new HiveConf();
+    conf.set("fs.defaultFS", miniDFSCluster.getFileSystem().getUri().toString());
     conf.setBoolVar(HiveConf.ConfVars.REPLCMENABLED, true);
-    MetastoreConf.setBoolVar(conf, MetastoreConf.ConfVars.COMPACTOR_INITIATOR_ON, true);
-    TestTxnDbUtil.prepDb(conf);
-    ms = new HiveMetaStoreClient(conf);
-    txnHandler = TxnUtils.getTxnStore(conf);
+    super.setup(conf);
{code}
Review Comment: minor: no need for the super keyword here, setup(conf) is only defined in the parent class Issue Time Tracking --- Worklog Id: (was: 809993) Time Spent: 50m (was: 40m) > HMS memory leak when compaction cleaner fails to remove obsolete files > -- > > Key: HIVE-26404 > URL: https://issues.apache.org/jira/browse/HIVE-26404 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 4.0.0-alpha-1 >Reporter: 
Stamatis Zampetakis >Assignee: Stamatis Zampetakis >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > While investigating an issue where HMS becomes unresponsive we noticed a lot > of failed attempts from the compaction Cleaner thread to remove obsolete > directories with exceptions similar to the one below. > {noformat} > 2022-06-16 05:48:24,819 ERROR > org.apache.hadoop.hive.ql.txn.compactor.Cleaner: [Cleaner-executor-thread-0]: > Caught exception when cleaning, unable to complete cleaning of > id:4410976,dbname:my_database,tableName:my_table,partName:day=20220502,state:,type:MAJOR,enqueueTime:0,start:0,properties:null,runAs:some_user,tooManyAborts:false,hasOldAbort:false,highestWriteId:187502,errorMessage:null > java.io.IOException: Not enough history available for (187502,x). Oldest > available base: > hdfs://nameservice1/warehouse/tablespace/managed/hive/my_database.db/my_table/day=20220502/base_0188687_v4297872 > at > org.apache.hadoop.hive.ql.io.AcidUtils.getAcidState(AcidUtils.java:1432) > at > org.apache.hadoop.hive.ql.txn.compactor.Cleaner.removeFiles(Cleaner.java:261) > at > org.apache.hadoop.hive.ql.txn.compactor.Cleaner.access$000(Cleaner.java:71) > at > org.apache.hadoop.hive.ql.txn.compactor.Cleaner$1.run(Cleaner.java:203) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898) > at > org.apache.hadoop.hive.ql.txn.compactor.Cleaner.clean(Cleaner.java:200) > at > org.apache.hadoop.hive.ql.txn.compactor.Cleaner.lambda$run$0(Cleaner.java:105) > at > org.apache.hadoop.hive.ql.txn.compactor.CompactorUtil$ThrowingRunnable.lambda$unchecked$0(CompactorUtil.java:54) > at > java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1640) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {noformat} > In addition the logs contained a large number of long JVM pauses as shown > below and the HMS (RSZ) memory kept increasing at rate of 90MB per hour. > {noformat} > 2022-06-16 16:17:17,805 WARN > org.apache.hadoop.hive.metastore.metrics.JvmPauseMonitor: > [org.apache.hadoop.hive.metastore.metrics.JvmPauseMonitor$Monitor@5b022296]: > Detec
[jira] [Work logged] (HIVE-26404) HMS memory leak when compaction cleaner fails to remove obsolete files
[ https://issues.apache.org/jira/browse/HIVE-26404?focusedWorklogId=809991&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-809991 ] ASF GitHub Bot logged work on HIVE-26404: - Author: ASF GitHub Bot Created on: 19/Sep/22 11:08 Start Date: 19/Sep/22 11:08 Worklog Time Spent: 10m Work Description: deniskuzZ commented on code in PR #3514: URL: https://github.com/apache/hive/pull/3514#discussion_r974126899
## ql/src/test/org/apache/hadoop/hive/ql/txn/compactor/CompactorTest.java:
{code:java}
@@ -181,7 +184,7 @@ protected Table newTable(String dbName, String tableName, boolean partitioned,
 table.setTableName(tableName);
 table.setDbName(dbName);
 table.setOwner("me");
-table.setSd(newStorageDescriptor(getLocation(tableName, null), sortCols));
+table.setSd(newStorageDescriptor(getLocation(tableName, null).toString(), sortCols));
{code}
Review Comment: minor: getLocation returning String was more natural here Issue Time Tracking --- Worklog Id: (was: 809991) Time Spent: 40m (was: 0.5h) > HMS memory leak when compaction cleaner fails to remove obsolete files > -- > > Key: HIVE-26404 > URL: https://issues.apache.org/jira/browse/HIVE-26404 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 4.0.0-alpha-1 >Reporter: Stamatis Zampetakis >Assignee: Stamatis Zampetakis >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > While investigating an issue where HMS becomes unresponsive we noticed a lot of failed attempts from the compaction Cleaner thread to remove obsolete directories with exceptions similar to the one below. 
> {noformat} > 2022-06-16 05:48:24,819 ERROR > org.apache.hadoop.hive.ql.txn.compactor.Cleaner: [Cleaner-executor-thread-0]: > Caught exception when cleaning, unable to complete cleaning of > id:4410976,dbname:my_database,tableName:my_table,partName:day=20220502,state:,type:MAJOR,enqueueTime:0,start:0,properties:null,runAs:some_user,tooManyAborts:false,hasOldAbort:false,highestWriteId:187502,errorMessage:null > java.io.IOException: Not enough history available for (187502,x). Oldest > available base: > hdfs://nameservice1/warehouse/tablespace/managed/hive/my_database.db/my_table/day=20220502/base_0188687_v4297872 > at > org.apache.hadoop.hive.ql.io.AcidUtils.getAcidState(AcidUtils.java:1432) > at > org.apache.hadoop.hive.ql.txn.compactor.Cleaner.removeFiles(Cleaner.java:261) > at > org.apache.hadoop.hive.ql.txn.compactor.Cleaner.access$000(Cleaner.java:71) > at > org.apache.hadoop.hive.ql.txn.compactor.Cleaner$1.run(Cleaner.java:203) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898) > at > org.apache.hadoop.hive.ql.txn.compactor.Cleaner.clean(Cleaner.java:200) > at > org.apache.hadoop.hive.ql.txn.compactor.Cleaner.lambda$run$0(Cleaner.java:105) > at > org.apache.hadoop.hive.ql.txn.compactor.CompactorUtil$ThrowingRunnable.lambda$unchecked$0(CompactorUtil.java:54) > at > java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1640) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {noformat} > In addition the logs contained a large number of long JVM pauses as shown > below and the HMS (RSZ) memory kept increasing at rate of 90MB per hour. 
> {noformat} > 2022-06-16 16:17:17,805 WARN > org.apache.hadoop.hive.metastore.metrics.JvmPauseMonitor: > [org.apache.hadoop.hive.metastore.metrics.JvmPauseMonitor$Monitor@5b022296]: > Detected pause in JVM or host machine (eg GC): pause of approximately 34346ms > 2022-06-16 16:17:21,497 INFO > org.apache.hadoop.hive.metastore.metrics.JvmPauseMonitor: > [org.apache.hadoop.hive.metastore.metrics.JvmPauseMonitor$Monitor@5b022296]: > Detected pause in JVM or host machine (eg GC): pause of approximately 1690ms > 2022-06-16 16:17:57,696 WARN > org.apache.hadoop.hive.metastore.metrics.JvmPauseMonitor: > [org.apache.hadoop.hive.metastore.metrics.JvmPauseMonitor$Monitor@5b022296]: > Detected pause in JVM or host machine (eg GC): pause of approximately 34697ms > 2022-06-16 16:18:01,326 INFO > org.apache.hadoop.hive.metastore.metrics.JvmPauseMonitor: > [org.apache.hadoop.hive.metastore.metrics.JvmPauseMonitor$Monitor@5b022296]: > Detected pause in JVM or host mach
[jira] [Commented] (HIVE-25790) Make managed table copies handle updates (FileUtils)
[ https://issues.apache.org/jira/browse/HIVE-25790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17606546#comment-17606546 ] Teddy Choi commented on HIVE-25790: --- I made a pull request; its third commit is running on Jenkins. It copies only the files that differ from the source path to the destination path. For existing directories and files, it skips the full copy but updates the modification time to record that they were updated. It is most effective for HDFS-to-HDFS replication scenarios, using checksum, block size, and length comparisons. > Make managed table copies handle updates (FileUtils) > > > Key: HIVE-25790 > URL: https://issues.apache.org/jira/browse/HIVE-25790 > Project: Hive > Issue Type: Improvement >Reporter: Haymant Mangla >Assignee: Teddy Choi >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h >
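The copy strategy described in the comment above can be sketched with stdlib file APIs: skip the copy when the destination already matches the source on length and checksum, otherwise copy. CRC32 stands in here for HDFS block checksums, and the class and method names are illustrative, not Hive's actual FileUtils API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.attribute.FileTime;
import java.util.zip.CRC32;

// Sketch of a differential copy check: copy only when the destination file is
// missing or differs from the source in length or checksum.
public class DiffCopy {

    static long crc(Path p) throws IOException {
        CRC32 c = new CRC32();
        c.update(Files.readAllBytes(p));
        return c.getValue();
    }

    static boolean needsCopy(Path src, Path dst) throws IOException {
        if (!Files.exists(dst)) return true;
        if (Files.size(src) != Files.size(dst)) return true; // cheap length test first
        return crc(src) != crc(dst);                         // then checksum comparison
    }

    static void copyIfNeeded(Path src, Path dst) throws IOException {
        if (needsCopy(src, dst)) {
            Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
        } else {
            // Mirrors the described behavior: skip the full copy but touch the
            // modification time so the file is marked as updated.
            Files.setLastModifiedTime(dst, FileTime.fromMillis(System.currentTimeMillis()));
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("src", ".txt");
        Path dst = Files.createTempFile("dst", ".txt");
        Files.write(src, "hello".getBytes());
        Files.write(dst, "hello".getBytes());
        System.out.println(needsCopy(src, dst)); // false: same length and checksum
        Files.write(src, "hello world".getBytes());
        System.out.println(needsCopy(src, dst)); // true: lengths differ
    }
}
```

Ordering the checks from cheapest to most expensive (existence, then length, then checksum) is what makes this viable for large replication workloads: most unchanged files are ruled out without reading any data.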
[jira] [Work started] (HIVE-26543) Improve TxnHandler logging
[ https://issues.apache.org/jira/browse/HIVE-26543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-26543 started by László Bodor. --- > Improve TxnHandler logging > -- > > Key: HIVE-26543 > URL: https://issues.apache.org/jira/browse/HIVE-26543 > Project: Hive > Issue Type: Improvement >Reporter: László Bodor >Assignee: László Bodor >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > TxnHandler has some bad logging, like:
{code}
LOG.debug("Going to execute query<" + txnsQuery + ">");
{code}
https://github.com/apache/hive/blob/8e39937bdb577bc135579d7d34b46ba2d788ca53/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L533
This performs an unnecessary string concatenation in production, where the log level is usually INFO; let's use string formats instead.
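The cost being complained about is that the message string is concatenated before the logger even checks whether DEBUG is enabled. SLF4J's {} placeholders defer the formatting; the same effect can be shown with only the stdlib by hiding message construction behind a Supplier (the helper names below are illustrative, not Hive's logger API):

```java
import java.util.function.Supplier;

// Demonstrates why eager string concatenation in log calls is wasteful:
// the argument expression is evaluated even when the log level is disabled.
public class LazyLog {

    static int messagesBuilt = 0;

    static String expensiveMessage(String query) {
        messagesBuilt++; // counts how often the message string is actually built
        return "Going to execute query<" + query + ">";
    }

    // Eager style: the argument is evaluated before the level check runs.
    static void debugEager(boolean debugEnabled, String msg) {
        if (debugEnabled) System.out.println(msg);
    }

    // Lazy style: the Supplier only runs when DEBUG is actually on.
    static void debugLazy(boolean debugEnabled, Supplier<String> msg) {
        if (debugEnabled) System.out.println(msg.get());
    }

    public static void main(String[] args) {
        debugEager(false, expensiveMessage("select 1"));      // message built anyway
        debugLazy(false, () -> expensiveMessage("select 1")); // message never built
        System.out.println(messagesBuilt); // 1: only the eager call paid the cost
    }
}
```

With SLF4J the idiomatic fix is simply LOG.debug("Going to execute query<{}>", txnsQuery), which performs no formatting at INFO level.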
[jira] [Work logged] (HIVE-26536) Enable 'hive.acid.truncate.usebase' by default
[ https://issues.apache.org/jira/browse/HIVE-26536?focusedWorklogId=809990&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-809990 ] ASF GitHub Bot logged work on HIVE-26536: - Author: ASF GitHub Bot Created on: 19/Sep/22 10:56 Start Date: 19/Sep/22 10:56 Worklog Time Spent: 10m Work Description: sonarcloud[bot] commented on PR #3598: URL: https://github.com/apache/hive/pull/3598#issuecomment-1250866914 Kudos, SonarCloud Quality Gate passed! 2 Bugs (rated C), 0 Vulnerabilities (rated A), 0 Security Hotspots (rated A), 6 Code Smells (rated A); no coverage or duplication information. Details: https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3598 Issue Time Tracking --- Worklog Id: (was: 809990) Time Spent: 40m (was: 0.5h) > Enable 'hive.acid.truncate.usebase' by default > -- > > Key: HIVE-26536 > URL: https://issues.apache.org/jira/browse/HIVE-26536 > Project: Hive > Issue Type: Improvement >Reporter: Sourabh Badhya >Assignee: Sourabh Badhya >Priority: Major > Labels: 
pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > The config 'hive.metastore.acid.truncate.usebase' was disabled due to HIVE-25050, and subsequent patches in master have renamed it to 'hive.acid.truncate.usebase'. However, since the fixes required for this config are already present in the current master branch, we can enable it by default. The scope of this Jira is therefore to enable it in the master branch only, so that eventual releases can benefit from this feature.
[jira] [Work logged] (HIVE-26539) Prevent unsafe deserialization in PartitionExpressionForMetastore
[ https://issues.apache.org/jira/browse/HIVE-26539?focusedWorklogId=809986&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-809986 ] ASF GitHub Bot logged work on HIVE-26539: - Author: ASF GitHub Bot Created on: 19/Sep/22 10:15 Start Date: 19/Sep/22 10:15 Worklog Time Spent: 10m Work Description: sonarcloud[bot] commented on PR #3605: URL: https://github.com/apache/hive/pull/3605#issuecomment-1250829083 Kudos, SonarCloud Quality Gate passed! 2 Bugs (rated C), 0 Vulnerabilities (rated A), 0 Security Hotspots (rated A), 13 Code Smells (rated A); no coverage or duplication information. Details: https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3605 Issue Time Tracking --- Worklog Id: (was: 809986) Time Spent: 20m (was: 10m) > Prevent unsafe deserialization in PartitionExpressionForMetastore > - > > Key: HIVE-26539 > URL: https://issues.apache.org/jira/browse/HIVE-26539 > Project: Hive > Issue Type: Improvement >Reporter: Zhihua Deng >Assignee: Zhihua Deng >Priority: 
Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-26538) MetastoreDefaultTransformer should revise the location when it's empty
[ https://issues.apache.org/jira/browse/HIVE-26538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17606521#comment-17606521 ] Zhihua Deng commented on HIVE-26538: When running hms benchmarks, there is an exception: {code:java} java.lang.RuntimeException: MetaException(message:java.lang.IllegalArgumentException: Can not create a Path from an empty string) at org.apache.hadoop.hive.metastore.tools.Util.throwingSupplierWrapper(Util.java:91) ~[hmsbench-jar-with-dependencies.jar:?] at org.apache.hadoop.hive.metastore.tools.HMSBenchmarks.lambda$benchmarkRenameTable$57(HMSBenchmarks.java:366) ~[hmsbench-jar-with-dependencies.jar:?] at org.apache.hadoop.hive.metastore.tools.MicroBenchmark.measure(MicroBenchmark.java:92) ~[hmsbench-jar-with-dependencies.jar:?] at org.apache.hadoop.hive.metastore.tools.MicroBenchmark.measure(MicroBenchmark.java:121) ~[hmsbench-jar-with-dependencies.jar:?] at org.apache.hadoop.hive.metastore.tools.HMSBenchmarks.benchmarkRenameTable(HMSBenchmarks.java:363) ~[hmsbench-jar-with-dependencies.jar:?] at org.apache.hadoop.hive.metastore.tools.BenchmarkTool.lambda$runNonAcidBenchmarks$17(BenchmarkTool.java:266) ~[hmsbench-jar-with-dependencies.jar:?] at org.apache.hadoop.hive.metastore.tools.BenchmarkSuite.lambda$runAll$0(BenchmarkSuite.java:126) ~[hmsbench-jar-with-dependencies.jar:?] at java.util.ArrayList.forEach(ArrayList.java:1259) ~[?:1.8.0_292] at org.apache.hadoop.hive.metastore.tools.BenchmarkSuite.runAll(BenchmarkSuite.java:124) ~[hmsbench-jar-with-dependencies.jar:?] at org.apache.hadoop.hive.metastore.tools.BenchmarkSuite.runMatching(BenchmarkSuite.java:158) ~[hmsbench-jar-with-dependencies.jar:?] at org.apache.hadoop.hive.metastore.tools.BenchmarkTool.runNonAcidBenchmarks(BenchmarkTool.java:319) [hmsbench-jar-with-dependencies.jar:?] at org.apache.hadoop.hive.metastore.tools.BenchmarkTool.run(BenchmarkTool.java:189) [hmsbench-jar-with-dependencies.jar:?]
at picocli.CommandLine.execute(CommandLine.java:833) [hmsbench-jar-with-dependencies.jar:?] at picocli.CommandLine.access$700(CommandLine.java:111) [hmsbench-jar-with-dependencies.jar:?] at picocli.CommandLine$RunLast.handle(CommandLine.java:1010) [hmsbench-jar-with-dependencies.jar:?] at picocli.CommandLine$RunLast.handle(CommandLine.java:978) [hmsbench-jar-with-dependencies.jar:?] at picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:886) [hmsbench-jar-with-dependencies.jar:?] at picocli.CommandLine.parseWithHandlers(CommandLine.java:1169) [hmsbench-jar-with-dependencies.jar:?] at picocli.CommandLine.run(CommandLine.java:1506) [hmsbench-jar-with-dependencies.jar:?] at picocli.CommandLine.run(CommandLine.java:1440) [hmsbench-jar-with-dependencies.jar:?] at org.apache.hadoop.hive.metastore.tools.BenchmarkTool.main(BenchmarkTool.java:149) [hmsbench-jar-with-dependencies.jar:?] {code} > MetastoreDefaultTransformer should revise the location when it's empty > -- > > Key: HIVE-26538 > URL: https://issues.apache.org/jira/browse/HIVE-26538 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Reporter: Zhihua Deng >Assignee: Zhihua Deng >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > The table's location is treated as null when it's empty, this takes place > somewhere such as: > [https://github.com/apache/hive/blob/82f319773cb2361a98963e861fb903ab8eecd9c4/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java#L2367] > [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetastoreDefaultTransformer.java#L729] > > MetastoreDefaultTransformer should revise the empty location when > altering/creating tables. -- This message was sent by Atlassian Jira (v8.20.10#820010)
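The fix the Jira asks for amounts to normalizing an empty location before anything tries to build a `Path` from it. A minimal sketch of that idea (the helper name and the default-location parameter are hypothetical, not taken from the actual patch):

```java
public class LocationNormalizer {
    /**
     * Hypothetical helper: treat an empty or blank location like a missing
     * one, so callers can fall back to a default (e.g. the warehouse path)
     * instead of failing with "Can not create a Path from an empty string".
     */
    static String normalizeLocation(String location, String defaultLocation) {
        return (location == null || location.trim().isEmpty()) ? defaultLocation : location;
    }

    public static void main(String[] args) {
        // An empty location falls back to the default instead of becoming Path("").
        System.out.println(normalizeLocation("", "/warehouse/tablespace/t"));
        // A real location passes through unchanged.
        System.out.println(normalizeLocation("/data/t", "/warehouse/tablespace/t"));
    }
}
```

The same guard would apply at both call sites linked in the issue, wherever an empty string can currently slip past a null check.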
[jira] [Work logged] (HIVE-26543) Improve TxnHandler logging
[ https://issues.apache.org/jira/browse/HIVE-26543?focusedWorklogId=809979&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-809979 ] ASF GitHub Bot logged work on HIVE-26543: - Author: ASF GitHub Bot Created on: 19/Sep/22 09:31 Start Date: 19/Sep/22 09:31 Worklog Time Spent: 10m Work Description: abstractdog commented on PR #3603: URL: https://github.com/apache/hive/pull/3603#issuecomment-1250787201 > There are similar improvements that can be done in https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java and https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnUtils.java . Can we include that in this PR as well? The rest of the files in the txn folder seem to be good. yeah, that's right, I'm doing the same there too, thanks @achennagiri Issue Time Tracking --- Worklog Id: (was: 809979) Time Spent: 0.5h (was: 20m) > Improve TxnHandler logging > -- > > Key: HIVE-26543 > URL: https://issues.apache.org/jira/browse/HIVE-26543 > Project: Hive > Issue Type: Improvement >Reporter: László Bodor >Assignee: László Bodor >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > TxnHandler has some bad logging, like: > {code} > LOG.debug("Going to execute query<" + txnsQuery + ">"); > {code} > https://github.com/apache/hive/blob/8e39937bdb577bc135579d7d34b46ba2d788ca53/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L533 > this involves an unnecessary string concatenation in production, where we usually run at INFO level; let's use string formats -- This message was sent by Atlassian Jira (v8.20.10#820010)
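The cost the ticket describes comes from building the message string eagerly, before the logger even checks whether DEBUG is enabled. Hive logs through SLF4J; the sketch below illustrates the same deferred-formatting idea with the JDK's built-in logger so it stays self-contained (the query text is illustrative, not from TxnHandler):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class DeferredLogging {
    private static final Logger LOG = Logger.getLogger(DeferredLogging.class.getName());

    public static void main(String[] args) {
        String txnsQuery = "select TXN_ID from TXNS"; // illustrative query text
        // Eager form: the concatenation runs even when FINE (~DEBUG) is disabled,
        // which is exactly the waste HIVE-26543 wants to remove.
        LOG.fine("Going to execute query<" + txnsQuery + ">");
        // Parameterized form: the template is only formatted if FINE is enabled,
        // so nothing is built on the hot path when running at INFO level.
        LOG.log(Level.FINE, "Going to execute query<{0}>", txnsQuery);
    }
}
```

With SLF4J the equivalent is `LOG.debug("Going to execute query<{}>", txnsQuery)`, which defers formatting the same way.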
[jira] [Assigned] (HIVE-26538) MetastoreDefaultTransformer should revise the location when it's empty
[ https://issues.apache.org/jira/browse/HIVE-26538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhihua Deng reassigned HIVE-26538: -- Assignee: Zhihua Deng > MetastoreDefaultTransformer should revise the location when it's empty > -- > > Key: HIVE-26538 > URL: https://issues.apache.org/jira/browse/HIVE-26538 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Reporter: Zhihua Deng >Assignee: Zhihua Deng >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > The table's location is treated as null when it's empty, this takes place > somewhere such as: > [https://github.com/apache/hive/blob/82f319773cb2361a98963e861fb903ab8eecd9c4/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java#L2367] > [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetastoreDefaultTransformer.java#L729] > > MetastoreDefaultTransformer should revise the empty location when > altering/creating tables. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-26539) Prevent unsafe deserialization in PartitionExpressionForMetastore
[ https://issues.apache.org/jira/browse/HIVE-26539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-26539: -- Labels: pull-request-available (was: ) > Prevent unsafe deserialization in PartitionExpressionForMetastore > - > > Key: HIVE-26539 > URL: https://issues.apache.org/jira/browse/HIVE-26539 > Project: Hive > Issue Type: Improvement >Reporter: Zhihua Deng >Assignee: Zhihua Deng >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-26539) Prevent unsafe deserialization in PartitionExpressionForMetastore
[ https://issues.apache.org/jira/browse/HIVE-26539?focusedWorklogId=809976&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-809976 ] ASF GitHub Bot logged work on HIVE-26539: - Author: ASF GitHub Bot Created on: 19/Sep/22 09:20 Start Date: 19/Sep/22 09:20 Worklog Time Spent: 10m Work Description: dengzhhu653 opened a new pull request, #3605: URL: https://github.com/apache/hive/pull/3605 …etastore ### What changes were proposed in this pull request? ### Why are the changes needed? ### Does this PR introduce _any_ user-facing change? ### How was this patch tested? TestMetastoreExpr/TestSerializationUtilities Issue Time Tracking --- Worklog Id: (was: 809976) Remaining Estimate: 0h Time Spent: 10m > Prevent unsafe deserialization in PartitionExpressionForMetastore > - > > Key: HIVE-26539 > URL: https://issues.apache.org/jira/browse/HIVE-26539 > Project: Hive > Issue Type: Improvement >Reporter: Zhihua Deng >Assignee: Zhihua Deng >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
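The PR description in this digest is empty, so the actual change isn't visible here. As general background only: JEP 290 object input filters (Java 9+) are the standard JDK mechanism for this class of hardening, restricting which classes Java deserialization may instantiate. A sketch with a hypothetical allow-list helper:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InvalidClassException;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class DeserializationFilterSketch {
    // Illustrative allow-list filter; not the code from PR #3605.
    static Object readFiltered(byte[] bytes, Class<?>... allowed) throws Exception {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            in.setObjectInputFilter(info -> {
                Class<?> c = info.serialClass();
                if (c == null) {
                    return ObjectInputFilter.Status.UNDECIDED; // stream metadata, not a class
                }
                for (Class<?> a : allowed) {
                    if (a.isAssignableFrom(c)) {
                        return ObjectInputFilter.Status.ALLOWED;
                    }
                }
                return ObjectInputFilter.Status.REJECTED; // unexpected class: fail fast
            });
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject("partition expr"); // stand-in payload
        }
        // An allow-listed class deserializes normally.
        System.out.println(readFiltered(bos.toByteArray(), String.class));
        // Anything outside the allow-list is refused before its readObject runs.
        try {
            readFiltered(bos.toByteArray(), Integer.class);
        } catch (InvalidClassException expected) {
            System.out.println("rejected by filter");
        }
    }
}
```

The key property is that a rejected class fails with `InvalidClassException` before any of its deserialization code executes, which is what blocks gadget-chain attacks on endpoints that accept serialized partition expressions.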
[jira] [Work logged] (HIVE-26504) User is not able to drop table
[ https://issues.apache.org/jira/browse/HIVE-26504?focusedWorklogId=809973&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-809973 ] ASF GitHub Bot logged work on HIVE-26504: - Author: ASF GitHub Bot Created on: 19/Sep/22 08:53 Start Date: 19/Sep/22 08:53 Worklog Time Spent: 10m Work Description: sonarcloud[bot] commented on PR #3557: URL: https://github.com/apache/hive/pull/3557#issuecomment-1250743768 Kudos, SonarCloud Quality Gate passed! (2 Bugs, 0 Vulnerabilities, 0 Security Hotspots, 9 Code Smells; no coverage or duplication information.) Issue Time Tracking --- Worklog Id: (was: 809973) Time Spent: 2h 40m (was: 2.5h) > User is not able to drop table > -- > > Key: HIVE-26504 > URL: https://issues.apache.org/jira/browse/HIVE-26504 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: László Végh >Assignee: László Végh >Priority: Major > Labels:
pull-request-available > Time Spent: 2h 40m > Remaining Estimate: 0h > > Hive won't store anything in *TAB_COL_STATS* for partitioned table, whereas > impala stores complete column stats in TAB_COL_STATS for partitioned table. > Deleting entries in TAB_COL_STATS is based on (DB_NAME, TABLE_NAME), not by > TBL_ID. Renamed tables were having old names in TAB_COL_STATS. > To Repro: > {code:java} > beeline: > set hive.create.as.insert.only=false; > set hive.create.as.acid=false; > create table testes.table_name_with_partition (id tinyint, name string) > partitioned by (col_to_partition bigint) stored as parquet; > insert into testes.table_name_with_partition (id, name, col_to_partition) > values (1
[jira] [Work logged] (HIVE-26504) User is not able to drop table
[ https://issues.apache.org/jira/browse/HIVE-26504?focusedWorklogId=809964&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-809964 ] ASF GitHub Bot logged work on HIVE-26504: - Author: ASF GitHub Bot Created on: 19/Sep/22 08:02 Start Date: 19/Sep/22 08:02 Worklog Time Spent: 10m Work Description: veghlaci05 commented on code in PR #3557: URL: https://github.com/apache/hive/pull/3557#discussion_r973964881 ## standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java: ## @@ -1112,6 +1081,32 @@ public static List alterTableUpdateTableColumnStats(RawStore m return newMultiColStats; } + @VisibleForTesting + public static void updateTableColumnStats(RawStore msdb, Table newTable, String validWriteIds, List columnStatistics) Review Comment: According to our offline conversation, I filed a new Jira for creating a new stats-related utility class, and will move all related methods there. For now, only the static modifier is removed. ## standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java: ## @@ -1011,96 +1015,61 @@ private Path constructRenamedPath(Path defaultNewPath, Path currentPath) { defaultNewPath.toUri().getPath()); } - @VisibleForTesting - public static List alterTableUpdateTableColumnStats(RawStore msdb, Table oldTable, Table newTable, - EnvironmentContext ec, String validWriteIds, Configuration conf, List deletedCols) - throws MetaException, InvalidObjectException { -String catName = normalizeIdentifier(oldTable.isSetCatName() ? oldTable.getCatName() : -getDefaultCatalog(conf)); + public static List getColumnStats(RawStore msdb, Table oldTable) Review Comment: According to our offline conversation, I filed a new Jira for creating a new stats-related utility class, and will move all related methods there.
Issue Time Tracking --- Worklog Id: (was: 809964) Time Spent: 2.5h (was: 2h 20m) > User is not able to drop table > -- > > Key: HIVE-26504 > URL: https://issues.apache.org/jira/browse/HIVE-26504 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: László Végh >Assignee: László Végh >Priority: Major > Labels: pull-request-available > Time Spent: 2.5h > Remaining Estimate: 0h > > Hive won't store anything in *TAB_COL_STATS* for partitioned table, whereas > impala stores complete column stats in TAB_COL_STATS for partitioned table. > Deleting entries in TAB_COL_STATS is based on (DB_NAME, TABLE_NAME), not by > TBL_ID. Renamed tables were having old names in TAB_COL_STATS. > To Repro: > {code:java} > beeline: > set hive.create.as.insert.only=false; > set hive.create.as.acid=false; > create table testes.table_name_with_partition (id tinyint, name string) > partitioned by (col_to_partition bigint) stored as parquet; > insert into testes.table_name_with_partition (id, name, col_to_partition) > values (1, "a", 2020), (2, "b", 2021), (3, "c", 2022); > impala: > compute stats testes.table_name_with_partition; -- backend shows new entries > in TAB_COL_STATS > beeline: > alter table testes.table_name_with_partition rename to > testes2.table_that_cant_be_droped; > drop table testes2.table_that_cant_be_droped; -- This fails with > TAB_COL_STATS_fkey constraint violation. > {code} > Exception trace for drop table failure > {code:java} > Caused by: org.postgresql.util.PSQLException: ERROR: update or delete on > table "TBLS" violates foreign key constraint "TAB_COL_STATS_fkey" on table > "TAB_COL_STATS" > Detail: Key (TBL_ID)=(19816) is still referenced from table "TAB_COL_STATS". > at > org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2532) > at > org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2267) > ... 50 more > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-26471) New metric for Compaction pooling
[ https://issues.apache.org/jira/browse/HIVE-26471?focusedWorklogId=809957&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-809957 ] ASF GitHub Bot logged work on HIVE-26471: - Author: ASF GitHub Bot Created on: 19/Sep/22 07:45 Start Date: 19/Sep/22 07:45 Worklog Time Spent: 10m Work Description: lcspinter merged PR #3521: URL: https://github.com/apache/hive/pull/3521 Issue Time Tracking --- Worklog Id: (was: 809957) Time Spent: 2.5h (was: 2h 20m) > New metric for Compaction pooling > - > > Key: HIVE-26471 > URL: https://issues.apache.org/jira/browse/HIVE-26471 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: László Végh >Assignee: László Végh >Priority: Major > Labels: pull-request-available > Time Spent: 2.5h > Remaining Estimate: 0h > > To be able to properly supervise the pool based compaction, a new metric is > required: > Number of 'Initiated' compaction requests per compaction pool. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-26504) User is not able to drop table
[ https://issues.apache.org/jira/browse/HIVE-26504?focusedWorklogId=809953&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-809953 ] ASF GitHub Bot logged work on HIVE-26504: - Author: ASF GitHub Bot Created on: 19/Sep/22 07:09 Start Date: 19/Sep/22 07:09 Worklog Time Spent: 10m Work Description: veghlaci05 commented on code in PR #3557: URL: https://github.com/apache/hive/pull/3557#discussion_r973924216 ## standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java: ## @@ -226,16 +229,21 @@ private static ColumnStatistics updateStatsForAlterPart(RawStore rawStore, Table private static void updateStatsForAlterTable(RawStore rawStore, Table tblBefore, Table tblAfter, String catalogName, String dbName, String tableName) throws Exception { ColumnStatistics colStats = null; -List deletedCols = new ArrayList<>(); if (tblBefore.isSetPartitionKeys()) { List parts = sharedCache.listCachedPartitions(catalogName, dbName, tableName, -1); for (Partition part : parts) { colStats = updateStatsForAlterPart(rawStore, tblBefore, catalogName, dbName, tableName, part); } } -List multiColumnStats = HiveAlterHandler -.alterTableUpdateTableColumnStats(rawStore, tblBefore, tblAfter, null, null, rawStore.getConf(), deletedCols); +rawStore.alterTable(catalogName, dbName, tblBefore.getTableName(), tblAfter, null); + +Set deletedCols = new HashSet<>(); +List multiColumnStats = HiveAlterHandler.getColumnStats(rawStore, tblBefore); +multiColumnStats.forEach(cs -> + deletedCols.addAll(HiveAlterHandler.filterColumnStatsForTableColumns(tblBefore.getSd().getCols(), cs) Review Comment: The `deletedCols.addAll()` call is inside a foreach, so simple assignment is not possible. And yes, it was part of the `alterTableUpdateTableColumnStats`. There was a kind of "dry run" mode in which no changes were made, only the deletedColumns list was filled. 
I found that approach a bit clunky, as it made the code hard to read by adding a lot of extra if-else statements. So I decided to extract the filtering logic into a separate method which can be called both from here and from `HiveAlterHandler` Issue Time Tracking --- Worklog Id: (was: 809953) Time Spent: 2h 20m (was: 2h 10m) > User is not able to drop table > -- > > Key: HIVE-26504 > URL: https://issues.apache.org/jira/browse/HIVE-26504 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: László Végh >Assignee: László Végh >Priority: Major > Labels: pull-request-available > Time Spent: 2h 20m > Remaining Estimate: 0h > > Hive won't store anything in *TAB_COL_STATS* for partitioned table, whereas > impala stores complete column stats in TAB_COL_STATS for partitioned table. > Deleting entries in TAB_COL_STATS is based on (DB_NAME, TABLE_NAME), not by > TBL_ID. Renamed tables were having old names in TAB_COL_STATS. > To Repro: > {code:java} > beeline: > set hive.create.as.insert.only=false; > set hive.create.as.acid=false; > create table testes.table_name_with_partition (id tinyint, name string) > partitioned by (col_to_partition bigint) stored as parquet; > insert into testes.table_name_with_partition (id, name, col_to_partition) > values (1, "a", 2020), (2, "b", 2021), (3, "c", 2022); > impala: > compute stats testes.table_name_with_partition; -- backend shows new entries > in TAB_COL_STATS > beeline: > alter table testes.table_name_with_partition rename to > testes2.table_that_cant_be_droped; > drop table testes2.table_that_cant_be_droped; -- This fails with > TAB_COL_STATS_fkey constraint violation. > {code} > Exception trace for drop table failure > {code:java} > Caused by: org.postgresql.util.PSQLException: ERROR: update or delete on > table "TBLS" violates foreign key constraint "TAB_COL_STATS_fkey" on table > "TAB_COL_STATS" > Detail: Key (TBL_ID)=(19816) is still referenced from table "TAB_COL_STATS". 
> at > org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2532) > at > org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2267) > ... 50 more > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work logged] (HIVE-26536) Enable 'hive.acid.truncate.usebase' by default
[ https://issues.apache.org/jira/browse/HIVE-26536?focusedWorklogId=809951&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-809951 ] ASF GitHub Bot logged work on HIVE-26536: - Author: ASF GitHub Bot Created on: 19/Sep/22 07:03 Start Date: 19/Sep/22 07:03 Worklog Time Spent: 10m Work Description: sonarcloud[bot] commented on PR #3598: URL: https://github.com/apache/hive/pull/3598#issuecomment-1250642284 Kudos, SonarCloud Quality Gate passed! (2 Bugs, 0 Vulnerabilities, 0 Security Hotspots, 6 Code Smells; no coverage or duplication information.) Issue Time Tracking --- Worklog Id: (was: 809951) Time Spent: 0.5h (was: 20m) > Enable 'hive.acid.truncate.usebase' by default > -- > > Key: HIVE-26536 > URL: https://issues.apache.org/jira/browse/HIVE-26536 > Project: Hive > Issue Type: Improvement >Reporter: Sourabh Badhya >Assignee: Sourabh Badhya >Priority: Major > Labels:
pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > The config 'hive.metastore.acid.truncate.usebase' was disabled due to > HIVE-25050, and subsequent patches in master have renamed it to > 'hive.acid.truncate.usebase'. However, since the necessary fixes required for > this config are already present in the current master branch, we can enable > it by default. Hence the scope of this Jira is to enable it in the > master branch only, so that eventual releases can benefit from this feature. -- This message was sent by Atlassian Jira (v8.20.10#820010)