[jira] [Work logged] (HIVE-26035) Explore moving to directsql for ObjectStore::addPartitions

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26035?focusedWorklogId=842183&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842183
 ]

ASF GitHub Bot logged work on HIVE-26035:
-

Author: ASF GitHub Bot
Created on: 30/Jan/23 07:34
Start Date: 30/Jan/23 07:34
Worklog Time Spent: 10m 
  Work Description: VenuReddy2103 commented on code in PR #3905:
URL: https://github.com/apache/hive/pull/3905#discussion_r1090257691


##
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/DirectSqlInsertPart.java:
##
@@ -0,0 +1,835 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.metastore;
+
+import static org.apache.commons.lang3.StringUtils.repeat;
+import static org.apache.hadoop.hive.metastore.Batchable.NO_BATCHING;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+import javax.jdo.PersistenceManager;
+
+import org.apache.hadoop.hive.metastore.api.MetaException;
+import org.apache.hadoop.hive.metastore.model.MColumnDescriptor;
+import org.apache.hadoop.hive.metastore.model.MFieldSchema;
+import org.apache.hadoop.hive.metastore.model.MOrder;
+import org.apache.hadoop.hive.metastore.model.MPartition;
+import org.apache.hadoop.hive.metastore.model.MPartitionColumnPrivilege;
+import org.apache.hadoop.hive.metastore.model.MPartitionPrivilege;
+import org.apache.hadoop.hive.metastore.model.MSerDeInfo;
+import org.apache.hadoop.hive.metastore.model.MStorageDescriptor;
+import org.apache.hadoop.hive.metastore.model.MStringList;
+import org.datanucleus.ExecutionContext;
+import org.datanucleus.api.jdo.JDOPersistenceManager;
+import org.datanucleus.identity.DatastoreId;
+import org.datanucleus.metadata.AbstractClassMetaData;
+import org.datanucleus.metadata.IdentityType;
+
+/**
+ * This class contains the methods to insert into tables on the underlying database using direct SQL
+ */
+class DirectSqlInsertPart {
+  private final PersistenceManager pm;
+  private final DatabaseProduct dbType;
+  private final int batchSize;
+
+  public DirectSqlInsertPart(PersistenceManager pm, DatabaseProduct dbType, int batchSize) {
+    this.pm = pm;
+    this.dbType = dbType;
+    this.batchSize = batchSize;
+  }
+
+  /**
+   * Interface to execute multiple row insert query in batch for direct SQL
+   */
+  interface BatchExecutionContext {
+    void execute(String batchQueryText, int batchRowCount, int batchParamCount) throws MetaException;
+  }
+
+  private Long getDataStoreId(Class<?> modelClass) throws MetaException {
+    ExecutionContext ec = ((JDOPersistenceManager) pm).getExecutionContext();
+    AbstractClassMetaData cmd = ec.getMetaDataManager().getMetaDataForClass(modelClass, ec.getClassLoaderResolver());
+    if (cmd.getIdentityType() == IdentityType.DATASTORE) {
+      return (Long) ec.getStoreManager().getValueGenerationStrategyValue(ec, cmd, -1);
+    } else {
+      throw new MetaException("Identity type is not datastore.");
+    }
+  }
+
+  private void insertInBatch(String tableName, String columns, int columnCount, int rowCount,
+      BatchExecutionContext bec) throws MetaException {

Review Comment:
   done





Issue Time Tracking
---

Worklog Id: (was: 842183)
Time Spent: 4h 50m  (was: 4h 40m)

> Explore moving to directsql for ObjectStore::addPartitions
> --
>
> Key: HIVE-26035
> URL: https://issues.apache.org/jira/browse/HIVE-26035
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajesh Balamohan
>Assignee: Venugopal Reddy K
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Currently {{addPartitions}} uses DataNucleus and is super slow for a large
> number of partitions. It would be good to move to direct SQL. Lots of repeated
> SQLs can be avoided as well (e.g. SDS, SERDE, TABLE_PARAMS)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

[jira] [Work logged] (HIVE-26035) Explore moving to directsql for ObjectStore::addPartitions

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26035?focusedWorklogId=842184&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842184
 ]

ASF GitHub Bot logged work on HIVE-26035:
-

Author: ASF GitHub Bot
Created on: 30/Jan/23 07:34
Start Date: 30/Jan/23 07:34
Worklog Time Spent: 10m 
  Work Description: VenuReddy2103 commented on code in PR #3905:
URL: https://github.com/apache/hive/pull/3905#discussion_r1090257921


##
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/DirectSqlInsertPart.java:
##
@@ -0,0 +1,835 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.metastore;
+
+import static org.apache.commons.lang3.StringUtils.repeat;
+import static org.apache.hadoop.hive.metastore.Batchable.NO_BATCHING;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+import javax.jdo.PersistenceManager;
+
+import org.apache.hadoop.hive.metastore.api.MetaException;
+import org.apache.hadoop.hive.metastore.model.MColumnDescriptor;
+import org.apache.hadoop.hive.metastore.model.MFieldSchema;
+import org.apache.hadoop.hive.metastore.model.MOrder;
+import org.apache.hadoop.hive.metastore.model.MPartition;
+import org.apache.hadoop.hive.metastore.model.MPartitionColumnPrivilege;
+import org.apache.hadoop.hive.metastore.model.MPartitionPrivilege;
+import org.apache.hadoop.hive.metastore.model.MSerDeInfo;
+import org.apache.hadoop.hive.metastore.model.MStorageDescriptor;
+import org.apache.hadoop.hive.metastore.model.MStringList;
+import org.datanucleus.ExecutionContext;
+import org.datanucleus.api.jdo.JDOPersistenceManager;
+import org.datanucleus.identity.DatastoreId;
+import org.datanucleus.metadata.AbstractClassMetaData;
+import org.datanucleus.metadata.IdentityType;
+
+/**
+ * This class contains the methods to insert into tables on the underlying database using direct SQL
+ */
+class DirectSqlInsertPart {
+  private final PersistenceManager pm;
+  private final DatabaseProduct dbType;
+  private final int batchSize;
+
+  public DirectSqlInsertPart(PersistenceManager pm, DatabaseProduct dbType, int batchSize) {
+    this.pm = pm;
+    this.dbType = dbType;
+    this.batchSize = batchSize;
+  }
+
+  /**
+   * Interface to execute multiple row insert query in batch for direct SQL
+   */
+  interface BatchExecutionContext {
+    void execute(String batchQueryText, int batchRowCount, int batchParamCount) throws MetaException;
+  }
+
+  private Long getDataStoreId(Class<?> modelClass) throws MetaException {
+    ExecutionContext ec = ((JDOPersistenceManager) pm).getExecutionContext();
+    AbstractClassMetaData cmd = ec.getMetaDataManager().getMetaDataForClass(modelClass, ec.getClassLoaderResolver());
+    if (cmd.getIdentityType() == IdentityType.DATASTORE) {
+      return (Long) ec.getStoreManager().getValueGenerationStrategyValue(ec, cmd, -1);
+    } else {
+      throw new MetaException("Identity type is not datastore.");
+    }
+  }
+
+  private void insertInBatch(String tableName, String columns, int columnCount, int rowCount,
+      BatchExecutionContext bec) throws MetaException {
+    if (rowCount == 0 || columnCount == 0) {
+      return;
+    }
+    int maxRowsInBatch = (batchSize == NO_BATCHING) ? rowCount : batchSize;
+    int maxBatches = rowCount / maxRowsInBatch;
+    int last = rowCount % maxRowsInBatch;
+    String rowFormat = "(" + repeat(",?", columnCount).substring(1) + ")";
+    String query = "";
+    if (maxBatches > 0) {
+      query = dbType.getBatchInsertQuery(tableName, columns, rowFormat, maxRowsInBatch);
+    }
+    int batchParamCount = maxRowsInBatch * columnCount;
+    for (int batch = 0; batch < maxBatches; batch++) {
+      bec.execute(query, maxRowsInBatch, batchParamCount);
+    }
+    if (last != 0) {
+      query = dbType.getBatchInsertQuery(tableName, columns, rowFormat, last);
+      bec.execute(query, last, last * columnCount);
+    }
+  }
+
+  private void insertSerdeInBatch(Map<Long, MSerDeInfo> serdeIdToSerDeInfo) throws MetaException {
+    int r

[jira] [Work logged] (HIVE-26035) Explore moving to directsql for ObjectStore::addPartitions

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26035?focusedWorklogId=842182&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842182
 ]

ASF GitHub Bot logged work on HIVE-26035:
-

Author: ASF GitHub Bot
Created on: 30/Jan/23 07:33
Start Date: 30/Jan/23 07:33
Worklog Time Spent: 10m 
  Work Description: VenuReddy2103 commented on code in PR #3905:
URL: https://github.com/apache/hive/pull/3905#discussion_r1090257405


##
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java:
##
@@ -692,6 +692,8 @@ public enum ConfVars {
         "Default transaction isolation level for identity generation."),
     DATANUCLEUS_USE_LEGACY_VALUE_STRATEGY("datanucleus.rdbms.useLegacyNativeValueStrategy",
         "datanucleus.rdbms.useLegacyNativeValueStrategy", true, ""),
+    DATANUCLEUS_QUERY_SQL_ALLOWALL("datanucleus.query.sql.allowAll", "datanucleus.query.sql.allowAll",
+        true, "Allow insert, update and delete operations from JDO SQL"),

Review Comment:
   Done. This property is a DataNucleus extension that allows insert/update/delete operations from JDO SQL. I have added a detailed description like this now -
   `In strict JDO all SQL queries must begin with "SELECT ...", and consequently it is not possible to execute queries that change data. This DataNucleus property, when set to true, allows insert, update and delete operations from JDO SQL. Default value is true.`





Issue Time Tracking
---

Worklog Id: (was: 842182)
Time Spent: 4h 40m  (was: 4.5h)

> Explore moving to directsql for ObjectStore::addPartitions
> --
>
> Key: HIVE-26035
> URL: https://issues.apache.org/jira/browse/HIVE-26035
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajesh Balamohan
>Assignee: Venugopal Reddy K
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Currently {{addPartitions}} uses DataNucleus and is super slow for a large
> number of partitions. It would be good to move to direct SQL. Lots of repeated
> SQLs can be avoided as well (e.g. SDS, SERDE, TABLE_PARAMS)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26035) Explore moving to directsql for ObjectStore::addPartitions

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26035?focusedWorklogId=842177&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842177
 ]

ASF GitHub Bot logged work on HIVE-26035:
-

Author: ASF GitHub Bot
Created on: 30/Jan/23 05:47
Start Date: 30/Jan/23 05:47
Worklog Time Spent: 10m 
  Work Description: VenuReddy2103 commented on code in PR #3905:
URL: https://github.com/apache/hive/pull/3905#discussion_r1090195330


##
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/DirectSqlInsertPart.java:
##
@@ -0,0 +1,835 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.metastore;
+
+import static org.apache.commons.lang3.StringUtils.repeat;
+import static org.apache.hadoop.hive.metastore.Batchable.NO_BATCHING;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+import javax.jdo.PersistenceManager;
+
+import org.apache.hadoop.hive.metastore.api.MetaException;
+import org.apache.hadoop.hive.metastore.model.MColumnDescriptor;
+import org.apache.hadoop.hive.metastore.model.MFieldSchema;
+import org.apache.hadoop.hive.metastore.model.MOrder;
+import org.apache.hadoop.hive.metastore.model.MPartition;
+import org.apache.hadoop.hive.metastore.model.MPartitionColumnPrivilege;
+import org.apache.hadoop.hive.metastore.model.MPartitionPrivilege;
+import org.apache.hadoop.hive.metastore.model.MSerDeInfo;
+import org.apache.hadoop.hive.metastore.model.MStorageDescriptor;
+import org.apache.hadoop.hive.metastore.model.MStringList;
+import org.datanucleus.ExecutionContext;
+import org.datanucleus.api.jdo.JDOPersistenceManager;
+import org.datanucleus.identity.DatastoreId;
+import org.datanucleus.metadata.AbstractClassMetaData;
+import org.datanucleus.metadata.IdentityType;
+
+/**
+ * This class contains the methods to insert into tables on the underlying database using direct SQL
+ */
+class DirectSqlInsertPart {
+  private final PersistenceManager pm;
+  private final DatabaseProduct dbType;
+  private final int batchSize;
+
+  public DirectSqlInsertPart(PersistenceManager pm, DatabaseProduct dbType, int batchSize) {
+    this.pm = pm;
+    this.dbType = dbType;
+    this.batchSize = batchSize;
+  }
+
+  /**
+   * Interface to execute multiple row insert query in batch for direct SQL
+   */
+  interface BatchExecutionContext {
+    void execute(String batchQueryText, int batchRowCount, int batchParamCount) throws MetaException;
+  }
+
+  private Long getDataStoreId(Class<?> modelClass) throws MetaException {
+    ExecutionContext ec = ((JDOPersistenceManager) pm).getExecutionContext();
+    AbstractClassMetaData cmd = ec.getMetaDataManager().getMetaDataForClass(modelClass, ec.getClassLoaderResolver());
+    if (cmd.getIdentityType() == IdentityType.DATASTORE) {
+      return (Long) ec.getStoreManager().getValueGenerationStrategyValue(ec, cmd, -1);
+    } else {
+      throw new MetaException("Identity type is not datastore.");
+    }
+  }
+
+  private void insertInBatch(String tableName, String columns, int columnCount, int rowCount,
+      BatchExecutionContext bec) throws MetaException {
+    if (rowCount == 0 || columnCount == 0) {
+      return;
+    }
+    int maxRowsInBatch = (batchSize == NO_BATCHING) ? rowCount : batchSize;
+    int maxBatches = rowCount / maxRowsInBatch;
+    int last = rowCount % maxRowsInBatch;
+    String rowFormat = "(" + repeat(",?", columnCount).substring(1) + ")";
+    String query = "";
+    if (maxBatches > 0) {
+      query = dbType.getBatchInsertQuery(tableName, columns, rowFormat, maxRowsInBatch);
+    }
+    int batchParamCount = maxRowsInBatch * columnCount;
+    for (int batch = 0; batch < maxBatches; batch++) {
+      bec.execute(query, maxRowsInBatch, batchParamCount);
Review Comment:
   It's actually loop-invariant. That's why it is initialized before entering the loop instead of relying on compiler optimization. What is your opinion?
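   The remainder-based split that `insertInBatch` performs can be sketched standalone. This is a minimal illustration, not Hive code; the helper name `split` and the use of a non-positive sentinel for `NO_BATCHING` are assumptions:

```java
// Sketch of the batch arithmetic in insertInBatch: rowCount rows become
// `maxBatches` full batches of `maxRowsInBatch` rows each, plus one
// trailing batch carrying the remainder, if any.
public class BatchSplitSketch {

  /** Returns {fullBatches, rowsPerFullBatch, remainderRows}. */
  static int[] split(int rowCount, int batchSize) {
    // A non-positive batchSize stands in for NO_BATCHING: everything in one batch.
    int maxRowsInBatch = (batchSize <= 0) ? rowCount : batchSize;
    int maxBatches = rowCount / maxRowsInBatch;
    int last = rowCount % maxRowsInBatch;
    return new int[] { maxBatches, maxRowsInBatch, last };
  }

  public static void main(String[] args) {
    int[] r = split(1000, 256);
    // 3 full batches of 256 rows cover 768 rows; a final batch inserts the last 232.
    if (r[0] != 3 || r[2] != 232) throw new AssertionError();

    r = split(512, 256);      // exact multiple: no trailing batch
    if (r[0] != 2 || r[2] != 0) throw new AssertionError();

    r = split(100, 0);        // NO_BATCHING: a single batch of all 100 rows
    if (r[0] != 1 || r[1] != 100 || r[2] != 0) throw new AssertionError();
  }
}
```

   Precomputing `batchParamCount` outside the loop is consistent with this arithmetic: every full batch binds exactly `maxRowsInBatch * columnCount` parameters, so the value never changes between iterations.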





Issue Time Tracking
---

Worklog Id: (was: 842177)
Time Spent: 4.5h  (was: 4h 20m)

[jira] [Work logged] (HIVE-26601) Fix NPE encountered in second load cycle of optimised bootstrap

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26601?focusedWorklogId=842176&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842176
 ]

ASF GitHub Bot logged work on HIVE-26601:
-

Author: ASF GitHub Bot
Created on: 30/Jan/23 05:30
Start Date: 30/Jan/23 05:30
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #3992:
URL: https://github.com/apache/hive/pull/3992#issuecomment-1408024179

   Kudos, SonarCloud Quality Gate passed! [Quality Gate passed](https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3992)

   [0 Bugs](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3992&resolved=false&types=BUG)
   [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3992&resolved=false&types=VULNERABILITY)
   [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3992&resolved=false&types=SECURITY_HOTSPOT)
   [1 Code Smell](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3992&resolved=false&types=CODE_SMELL)

   No Coverage information, No Duplication information




Issue Time Tracking
---

Worklog Id: (was: 842176)
Time Spent: 0.5h  (was: 20m)

> Fix NPE encountered in second load cycle of optimised bootstrap 
> 
>
> Key: HIVE-26601
> URL: https://issues.apache.org/jira/browse/HIVE-26601
> Project: Hive
>  Issue Type: Bug
>Reporter: Teddy Choi
>Assignee: Vinit Patni
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> After a reverse replication policy is created (once failover from the Primary
> to the DR cluster has completed and DR has taken over), the first dump and load
> cycle of optimised bootstrap completes successfully. The second dump cycle on DR
> also completes, selectively bootstrapping the tables it read from the
> table_diff directory. However, we observed an issue with the second load cycle
> on the Primary cluster side, which fails with the following exception logs and
> needs to be fixed.
> {code:java}
> [Scheduled Query Executor(schedule:repl_vinreverse, execution_id:421)]: 
> Exception while logging metric

[jira] [Work logged] (HIVE-26035) Explore moving to directsql for ObjectStore::addPartitions

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26035?focusedWorklogId=842175&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842175
 ]

ASF GitHub Bot logged work on HIVE-26035:
-

Author: ASF GitHub Bot
Created on: 30/Jan/23 05:16
Start Date: 30/Jan/23 05:16
Worklog Time Spent: 10m 
  Work Description: VenuReddy2103 commented on code in PR #3905:
URL: https://github.com/apache/hive/pull/3905#discussion_r1090181697


##
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/DirectSqlInsertPart.java:
##
@@ -0,0 +1,835 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.metastore;
+
+import static org.apache.commons.lang3.StringUtils.repeat;
+import static org.apache.hadoop.hive.metastore.Batchable.NO_BATCHING;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+import javax.jdo.PersistenceManager;
+
+import org.apache.hadoop.hive.metastore.api.MetaException;
+import org.apache.hadoop.hive.metastore.model.MColumnDescriptor;
+import org.apache.hadoop.hive.metastore.model.MFieldSchema;
+import org.apache.hadoop.hive.metastore.model.MOrder;
+import org.apache.hadoop.hive.metastore.model.MPartition;
+import org.apache.hadoop.hive.metastore.model.MPartitionColumnPrivilege;
+import org.apache.hadoop.hive.metastore.model.MPartitionPrivilege;
+import org.apache.hadoop.hive.metastore.model.MSerDeInfo;
+import org.apache.hadoop.hive.metastore.model.MStorageDescriptor;
+import org.apache.hadoop.hive.metastore.model.MStringList;
+import org.datanucleus.ExecutionContext;
+import org.datanucleus.api.jdo.JDOPersistenceManager;
+import org.datanucleus.identity.DatastoreId;
+import org.datanucleus.metadata.AbstractClassMetaData;
+import org.datanucleus.metadata.IdentityType;
+
+/**
+ * This class contains the methods to insert into tables on the underlying 
database using direct SQL
+ */
+class DirectSqlInsertPart {
+  private final PersistenceManager pm;
+  private final DatabaseProduct dbType;
+  private final int batchSize;
+
+  public DirectSqlInsertPart(PersistenceManager pm, DatabaseProduct dbType, 
int batchSize) {
+this.pm = pm;
+this.dbType = dbType;
+this.batchSize = batchSize;
+  }
+
+  /**
+   * Interface to execute multiple row insert query in batch for direct SQL
+   */
+  interface BatchExecutionContext {
+void execute(String batchQueryText, int batchRowCount, int 
batchParamCount) throws MetaException;
+  }
+
+  private Long getDataStoreId(Class modelClass) throws MetaException {

Review Comment:
   I have already added Javadocs for the public methods; didn't add them for this private method.





Issue Time Tracking
---

Worklog Id: (was: 842175)
Time Spent: 4h 20m  (was: 4h 10m)

> Explore moving to directsql for ObjectStore::addPartitions
> --
>
> Key: HIVE-26035
> URL: https://issues.apache.org/jira/browse/HIVE-26035
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajesh Balamohan
>Assignee: Venugopal Reddy K
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Currently {{addPartitions}} uses DataNucleus and is super slow for a large
> number of partitions. It would be good to move to direct SQL. Lots of repeated
> SQLs can be avoided as well (e.g. SDS, SERDE, TABLE_PARAMS)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26035) Explore moving to directsql for ObjectStore::addPartitions

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26035?focusedWorklogId=842173&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842173
 ]

ASF GitHub Bot logged work on HIVE-26035:
-

Author: ASF GitHub Bot
Created on: 30/Jan/23 05:13
Start Date: 30/Jan/23 05:13
Worklog Time Spent: 10m 
  Work Description: VenuReddy2103 commented on code in PR #3905:
URL: https://github.com/apache/hive/pull/3905#discussion_r1090180809


##
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java:
##
@@ -2606,39 +2606,69 @@ public boolean addPartitions(String catName, String dbName, String tblName, List<Partition> parts)
         tabGrants = this.listAllTableGrants(catName, dbName, tblName);
         tabColumnGrants = this.listTableAllColumnGrants(catName, dbName, tblName);
       }
-      List<Object> toPersist = new ArrayList<>();
+      List<MPartition> mParts = new ArrayList<>();
+      List<List<MPartitionPrivilege>> mPartPrivilegesList = new ArrayList<>();
+      List<List<MPartitionColumnPrivilege>> mPartColPrivilegesList = new ArrayList<>();
       for (Partition part : parts) {
         if (!part.getTableName().equals(tblName) || !part.getDbName().equals(dbName)) {
           throw new MetaException("Partition does not belong to target table "
               + dbName + "." + tblName + ": " + part);
         }
         MPartition mpart = convertToMPart(part, table, true);
-
-        toPersist.add(mpart);
+        mParts.add(mpart);
         int now = (int) (System.currentTimeMillis() / 1000);
+        List<MPartitionPrivilege> mPartPrivileges = new ArrayList<>();
         if (tabGrants != null) {
           for (MTablePrivilege tab: tabGrants) {
-            toPersist.add(new MPartitionPrivilege(tab.getPrincipalName(),
-                tab.getPrincipalType(), mpart, tab.getPrivilege(), now,
-                tab.getGrantor(), tab.getGrantorType(), tab.getGrantOption(),
-                tab.getAuthorizer()));
+            MPartitionPrivilege mPartPrivilege = new MPartitionPrivilege(tab.getPrincipalName(), tab.getPrincipalType(),
+                mpart, tab.getPrivilege(), now, tab.getGrantor(), tab.getGrantorType(), tab.getGrantOption(),
+                tab.getAuthorizer());
+            mPartPrivileges.add(mPartPrivilege);
           }
         }
 
+        List<MPartitionColumnPrivilege> mPartColumnPrivileges = new ArrayList<>();
         if (tabColumnGrants != null) {
           for (MTableColumnPrivilege col : tabColumnGrants) {
-            toPersist.add(new MPartitionColumnPrivilege(col.getPrincipalName(),
-                col.getPrincipalType(), mpart, col.getColumnName(), col.getPrivilege(),
-                now, col.getGrantor(), col.getGrantorType(), col.getGrantOption(),
-                col.getAuthorizer()));
+            MPartitionColumnPrivilege mPartColumnPrivilege = new MPartitionColumnPrivilege(col.getPrincipalName(),
+                col.getPrincipalType(), mpart, col.getColumnName(), col.getPrivilege(), now, col.getGrantor(),
+                col.getGrantorType(), col.getGrantOption(), col.getAuthorizer());
+            mPartColumnPrivileges.add(mPartColumnPrivilege);
           }
         }
+        mPartPrivilegesList.add(mPartPrivileges);
+        mPartColPrivilegesList.add(mPartColumnPrivileges);
       }
-      if (CollectionUtils.isNotEmpty(toPersist)) {
-        pm.makePersistentAll(toPersist);
-        pm.flush();
-      }
+      if (CollectionUtils.isNotEmpty(mParts)) {
+        GetHelper<Void> helper = new GetHelper<Void>(null, null, null, true, true) {
+          @Override
+          protected Void getSqlResult(GetHelper<Void> ctx) throws MetaException {
+            directSql.addPartitions(mParts, mPartPrivilegesList, mPartColPrivilegesList);
+            return null;
+          }
+
+          @Override
+          protected Void getJdoResult(GetHelper<Void> ctx) {
+            List<Object> toPersist = new ArrayList<>(mParts);
+            mPartPrivilegesList.forEach(toPersist::addAll);
+            mPartColPrivilegesList.forEach(toPersist::addAll);
+            pm.makePersistentAll(toPersist);
+            pm.flush();
+            return null;
Review Comment:
   We don't have anything to return in this case. Since the generic `GetHelper` class takes the return class type, we had to use the `Void` object type instead of the primitive `void`, and so had to explicitly return `null`.
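   The `Void` vs. `void` point generalizes beyond `GetHelper`: a generic result type must be a reference type, and `null` is the only value of the boxed `Void` class. A minimal standalone sketch using `java.util.concurrent.Callable` (chosen for illustration; `GetHelper` itself is Hive-internal):

```java
import java.util.concurrent.Callable;

public class VoidResultSketch {
  // A side-effect-only task: primitive `void` is not a valid type argument,
  // so the result type is the boxed Void class, and `return null;` is the
  // only way to satisfy the signature.
  static Void doWork() {
    // ... side-effecting work would go here (e.g. persisting objects) ...
    return null;
  }

  public static void main(String[] args) throws Exception {
    Callable<Void> task = VoidResultSketch::doWork;
    if (task.call() != null) throw new AssertionError("a Void result can only be null");
  }
}
```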





Issue Time Tracking
---

Worklog Id: (was: 842173)
Time Spent: 4h 10m  (was: 4h)

> Explore moving to directsql for ObjectStore::addPartitions
> --
>
> Key: HIVE-26035
> URL: https://issues.apache.org/jira/browse/HIVE-26035
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajesh Balamohan
>Assignee: Venugopal Reddy K
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h

[jira] [Work logged] (HIVE-26960) Optimized bootstrap does not drop newly added tables at source.

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26960?focusedWorklogId=842167&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842167
 ]

ASF GitHub Bot logged work on HIVE-26960:
-

Author: ASF GitHub Bot
Created on: 30/Jan/23 03:56
Start Date: 30/Jan/23 03:56
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #3995:
URL: https://github.com/apache/hive/pull/3995#issuecomment-1407956514

   Kudos, SonarCloud Quality Gate passed! [Quality Gate passed](https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3995)

   [0 Bugs](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3995&resolved=false&types=BUG)
   [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3995&resolved=false&types=VULNERABILITY)
   [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3995&resolved=false&types=SECURITY_HOTSPOT)
   [1 Code Smell](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3995&resolved=false&types=CODE_SMELL)

   No Coverage information, No Duplication information




Issue Time Tracking
---

Worklog Id: (was: 842167)
Time Spent: 20m  (was: 10m)

> Optimized bootstrap does not drop newly added tables at source.
> ---
>
> Key: HIVE-26960
> URL: https://issues.apache.org/jira/browse/HIVE-26960
> Project: Hive
>  Issue Type: Bug
>Reporter: Rakshith C
>Assignee: Rakshith C
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Scenario:
> Replication is set up from DR to PROD after failover from PROD to DR; no 
> existing tables are modified at PROD, but a new table is added at PROD.
> Observations:
>  * The _bootstrap directory won't be created during the second cycle of 
> optimized bootstrap because no existing tables were modified.
>  * As a result, the list of tables to drop at PROD is never initialized.
>  * This leads to the new table created at PROD not being dropped.
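The missing-drop behavior above amounts to a set difference: tables present at the source but absent from the replicated target snapshot are the ones optimized bootstrap should drop. A minimal sketch of that logic (hypothetical names; not the actual Hive implementation):

```java
import java.util.Set;
import java.util.TreeSet;

public class DropListSketch {
    // Tables that exist at the source (PROD) but are unknown to the
    // replicated target (DR) snapshot must be dropped before reverse
    // replication can proceed.
    public static Set<String> tablesToDrop(Set<String> tablesAtSource,
                                           Set<String> tablesAtTarget) {
        Set<String> toDrop = new TreeSet<>(tablesAtSource);
        toDrop.removeAll(tablesAtTarget);
        return toDrop;
    }

    public static void main(String[] args) {
        Set<String> source = Set.of("t1", "t2", "t_new");  // t_new added after failover
        Set<String> target = Set.of("t1", "t2");
        System.out.println(tablesToDrop(source, target));  // prints [t_new]
    }
}
```

The bug described above is that this drop list is never computed when the `_bootstrap` directory is skipped, so `t_new` survives.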



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HIVE-26933) Cleanup dump directory for eventId which was failed in previous dump cycle

2023-01-29 Thread Harshal Patel (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harshal Patel resolved HIVE-26933.
--
Resolution: Fixed

> Cleanup dump directory for eventId which was failed in previous dump cycle
> --
>
> Key: HIVE-26933
> URL: https://issues.apache.org/jira/browse/HIVE-26933
> Project: Hive
>  Issue Type: Improvement
>Reporter: Harshal Patel
>Assignee: Harshal Patel
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> # If an incremental dump operation fails while dumping an event id in the 
> staging directory, the dump directory for that event id, along with the 
> _dumpmetadata file, still exists in the dump location (the event id is 
> recorded in the _events_dump file).
>  # When the user triggers the dump operation for this policy again, it resumes 
> dumping from the failed event id and tries to dump it again, but since the 
> directory for that event id was already created in the previous cycle, it 
> fails with the exception
> {noformat}
> [Scheduled Query Executor(schedule:repl_policytest7, execution_id:7181)]: 
> FAILED: Execution Error, return code 4 from 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask. 
> org.apache.hadoop.fs.FileAlreadyExistsException: 
> /warehouse/tablespace/staging/policytest7/dGVzdDc=/14bcf976-662b-4237-b5bb-e7d63a1d089f/hive/137961/_dumpmetadata
>  for client 172.27.182.5 already exists
>     at 
> org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.startFile(FSDirWriteFileOp.java:388)
>     at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2576)
>     at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2473)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:773)
>     at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:490)
>     at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:533)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>     at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:989)
>     at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:917)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2894){noformat}
>  
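A natural remedy, as the issue title suggests, is to delete any leftover dump directory for the failed event id before re-dumping it, so the retry never hits FileAlreadyExistsException. A minimal sketch using java.nio.file (hypothetical helper; not the actual ReplDumpTask code, which works against HDFS):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class EventDumpCleanupSketch {
    // Recursively delete a partial event-dump directory left by a failed
    // cycle, so the retry can recreate _dumpmetadata from scratch.
    public static void cleanupPartialDump(Path eventDumpDir) throws IOException {
        if (!Files.exists(eventDumpDir)) {
            return;  // nothing left over: first attempt, or already cleaned
        }
        try (Stream<Path> paths = Files.walk(eventDumpDir)) {
            // Delete children before parents (deepest paths first).
            paths.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.delete(p);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }
}
```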





[jira] [Updated] (HIVE-26960) Optimized bootstrap does not drop newly added tables at source.

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-26960:
--
Labels: pull-request-available  (was: )

> Optimized bootstrap does not drop newly added tables at source.
> ---
>
> Key: HIVE-26960
> URL: https://issues.apache.org/jira/browse/HIVE-26960
> Project: Hive
>  Issue Type: Bug
>Reporter: Rakshith C
>Assignee: Rakshith C
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Scenario:
> Replication is set up from DR to PROD after failover from PROD to DR; no 
> existing tables are modified at PROD, but a new table is added at PROD.
> Observations:
>  * The _bootstrap directory won't be created during the second cycle of 
> optimized bootstrap because no existing tables were modified.
>  * As a result, the list of tables to drop at PROD is never initialized.
>  * This leads to the new table created at PROD not being dropped.





[jira] [Work logged] (HIVE-26960) Optimized bootstrap does not drop newly added tables at source.

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26960?focusedWorklogId=842166&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842166
 ]

ASF GitHub Bot logged work on HIVE-26960:
-

Author: ASF GitHub Bot
Created on: 30/Jan/23 02:55
Start Date: 30/Jan/23 02:55
Worklog Time Spent: 10m 
  Work Description: Rakshith606 opened a new pull request, #3995:
URL: https://github.com/apache/hive/pull/3995

   
   
   ### What changes were proposed in this pull request?
   
   Fixed a bug in optimized bootstrap.
   
   ### Why are the changes needed?
   
   
   To ensure optimized bootstrap works correctly when the only changes on the 
primary are newly added tables.
   ### Does this PR introduce _any_ user-facing change?
   
   No
   
   ### How was this patch tested?
   
   




Issue Time Tracking
---

Worklog Id: (was: 842166)
Remaining Estimate: 0h
Time Spent: 10m

> Optimized bootstrap does not drop newly added tables at source.
> ---
>
> Key: HIVE-26960
> URL: https://issues.apache.org/jira/browse/HIVE-26960
> Project: Hive
>  Issue Type: Bug
>Reporter: Rakshith C
>Assignee: Rakshith C
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Scenario:
> Replication is set up from DR to PROD after failover from PROD to DR; no 
> existing tables are modified at PROD, but a new table is added at PROD.
> Observations:
>  * The _bootstrap directory won't be created during the second cycle of 
> optimized bootstrap because no existing tables were modified.
>  * As a result, the list of tables to drop at PROD is never initialized.
>  * This leads to the new table created at PROD not being dropped.





[jira] [Resolved] (HIVE-26600) Handle failover during optimized bootstrap

2023-01-29 Thread Rakshith C (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakshith C resolved HIVE-26600.
---
Resolution: Fixed

> Handle failover during optimized bootstrap
> --
>
> Key: HIVE-26600
> URL: https://issues.apache.org/jira/browse/HIVE-26600
> Project: Hive
>  Issue Type: Bug
>Reporter: Teddy Choi
>Assignee: Rakshith C
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When the reverse policy is enabled from DR to PROD, the user may initiate a 
> failover from DR to PROD before the optimized bootstrap has ever run.
> Current observations:
>  * Repl Dump will place a failover-ready marker, but failover metadata won't 
> be generated.
>  * Repl Load will throw an error since failover will be set to true but the 
> failover metadata is missing.
> Replication fails and we reach an undefined state.
> Fix:
>  * Create the failover-ready marker only during the second cycle of optimized 
> bootstrap, if possible.
>  * Since some tables may need to be bootstrapped, it may take up to 3 cycles 
> before failover from DR to PROD is complete.
>  * If no tables are modified, the second dump from DR to PROD will be marked 
> as failover ready.
> Result:
>  * Users can initiate a failover immediately after enabling the reverse 
> policy, without any hassle.
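The fix's marker condition can be summarized as a simple predicate: advertise failover readiness only once the second optimized-bootstrap cycle has completed and no tables still need to be bootstrapped. A sketch of the logic described above (hypothetical names; not the Hive code):

```java
public class FailoverMarkerSketch {
    // The dump is failover-ready only after the second optimized-bootstrap
    // cycle has run AND no tables still need bootstrapping; otherwise
    // Repl Load would see the marker without the matching failover metadata.
    public static boolean shouldMarkFailoverReady(boolean secondCycleComplete,
                                                  int tablesPendingBootstrap) {
        return secondCycleComplete && tablesPendingBootstrap == 0;
    }

    public static void main(String[] args) {
        System.out.println(shouldMarkFailoverReady(true, 0));   // true: safe to mark
        System.out.println(shouldMarkFailoverReady(true, 3));   // false: another cycle is needed
        System.out.println(shouldMarkFailoverReady(false, 0));  // false: too early to mark
    }
}
```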





[jira] [Work logged] (HIVE-26600) Handle failover during optimized bootstrap

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26600?focusedWorklogId=842164&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842164
 ]

ASF GitHub Bot logged work on HIVE-26600:
-

Author: ASF GitHub Bot
Created on: 30/Jan/23 01:47
Start Date: 30/Jan/23 01:47
Worklog Time Spent: 10m 
  Work Description: pudidic merged PR #3991:
URL: https://github.com/apache/hive/pull/3991




Issue Time Tracking
---

Worklog Id: (was: 842164)
Time Spent: 40m  (was: 0.5h)

> Handle failover during optimized bootstrap
> --
>
> Key: HIVE-26600
> URL: https://issues.apache.org/jira/browse/HIVE-26600
> Project: Hive
>  Issue Type: Bug
>Reporter: Teddy Choi
>Assignee: Rakshith C
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When the reverse policy is enabled from DR to PROD, the user may initiate a 
> failover from DR to PROD before the optimized bootstrap has ever run.
> Current observations:
>  * Repl Dump will place a failover-ready marker, but failover metadata won't 
> be generated.
>  * Repl Load will throw an error since failover will be set to true but the 
> failover metadata is missing.
> Replication fails and we reach an undefined state.
> Fix:
>  * Create the failover-ready marker only during the second cycle of optimized 
> bootstrap, if possible.
>  * Since some tables may need to be bootstrapped, it may take up to 3 cycles 
> before failover from DR to PROD is complete.
>  * If no tables are modified, the second dump from DR to PROD will be marked 
> as failover ready.
> Result:
>  * Users can initiate a failover immediately after enabling the reverse 
> policy, without any hassle.





[jira] [Work logged] (HIVE-26933) Cleanup dump directory for eventId which was failed in previous dump cycle

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26933?focusedWorklogId=842162&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842162
 ]

ASF GitHub Bot logged work on HIVE-26933:
-

Author: ASF GitHub Bot
Created on: 30/Jan/23 01:46
Start Date: 30/Jan/23 01:46
Worklog Time Spent: 10m 
  Work Description: pudidic merged PR #3984:
URL: https://github.com/apache/hive/pull/3984




Issue Time Tracking
---

Worklog Id: (was: 842162)
Time Spent: 3h 20m  (was: 3h 10m)

> Cleanup dump directory for eventId which was failed in previous dump cycle
> --
>
> Key: HIVE-26933
> URL: https://issues.apache.org/jira/browse/HIVE-26933
> Project: Hive
>  Issue Type: Improvement
>Reporter: Harshal Patel
>Assignee: Harshal Patel
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> # If an incremental dump operation fails while dumping an event id in the 
> staging directory, the dump directory for that event id, along with the 
> _dumpmetadata file, still exists in the dump location (the event id is 
> recorded in the _events_dump file).
>  # When the user triggers the dump operation for this policy again, it resumes 
> dumping from the failed event id and tries to dump it again, but since the 
> directory for that event id was already created in the previous cycle, it 
> fails with the exception
> {noformat}
> [Scheduled Query Executor(schedule:repl_policytest7, execution_id:7181)]: 
> FAILED: Execution Error, return code 4 from 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask. 
> org.apache.hadoop.fs.FileAlreadyExistsException: 
> /warehouse/tablespace/staging/policytest7/dGVzdDc=/14bcf976-662b-4237-b5bb-e7d63a1d089f/hive/137961/_dumpmetadata
>  for client 172.27.182.5 already exists
>     at 
> org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.startFile(FSDirWriteFileOp.java:388)
>     at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2576)
>     at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2473)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:773)
>     at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:490)
>     at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:533)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>     at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:989)
>     at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:917)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2894){noformat}
>  





[jira] [Work logged] (HIVE-26600) Handle failover during optimized bootstrap

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26600?focusedWorklogId=842163&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842163
 ]

ASF GitHub Bot logged work on HIVE-26600:
-

Author: ASF GitHub Bot
Created on: 30/Jan/23 01:46
Start Date: 30/Jan/23 01:46
Worklog Time Spent: 10m 
  Work Description: pudidic commented on PR #3991:
URL: https://github.com/apache/hive/pull/3991#issuecomment-1407865648

   +1. LGTM.




Issue Time Tracking
---

Worklog Id: (was: 842163)
Time Spent: 0.5h  (was: 20m)

> Handle failover during optimized bootstrap
> --
>
> Key: HIVE-26600
> URL: https://issues.apache.org/jira/browse/HIVE-26600
> Project: Hive
>  Issue Type: Bug
>Reporter: Teddy Choi
>Assignee: Rakshith C
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When the reverse policy is enabled from DR to PROD, the user may initiate a 
> failover from DR to PROD before the optimized bootstrap has ever run.
> Current observations:
>  * Repl Dump will place a failover-ready marker, but failover metadata won't 
> be generated.
>  * Repl Load will throw an error since failover will be set to true but the 
> failover metadata is missing.
> Replication fails and we reach an undefined state.
> Fix:
>  * Create the failover-ready marker only during the second cycle of optimized 
> bootstrap, if possible.
>  * Since some tables may need to be bootstrapped, it may take up to 3 cycles 
> before failover from DR to PROD is complete.
>  * If no tables are modified, the second dump from DR to PROD will be marked 
> as failover ready.
> Result:
>  * Users can initiate a failover immediately after enabling the reverse 
> policy, without any hassle.





[jira] [Work logged] (HIVE-26933) Cleanup dump directory for eventId which was failed in previous dump cycle

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26933?focusedWorklogId=842161&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842161
 ]

ASF GitHub Bot logged work on HIVE-26933:
-

Author: ASF GitHub Bot
Created on: 30/Jan/23 01:45
Start Date: 30/Jan/23 01:45
Worklog Time Spent: 10m 
  Work Description: pudidic commented on PR #3984:
URL: https://github.com/apache/hive/pull/3984#issuecomment-1407864826

   +1. LGTM.




Issue Time Tracking
---

Worklog Id: (was: 842161)
Time Spent: 3h 10m  (was: 3h)

> Cleanup dump directory for eventId which was failed in previous dump cycle
> --
>
> Key: HIVE-26933
> URL: https://issues.apache.org/jira/browse/HIVE-26933
> Project: Hive
>  Issue Type: Improvement
>Reporter: Harshal Patel
>Assignee: Harshal Patel
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> # If an incremental dump operation fails while dumping an event id in the 
> staging directory, the dump directory for that event id, along with the 
> _dumpmetadata file, still exists in the dump location (the event id is 
> recorded in the _events_dump file).
>  # When the user triggers the dump operation for this policy again, it resumes 
> dumping from the failed event id and tries to dump it again, but since the 
> directory for that event id was already created in the previous cycle, it 
> fails with the exception
> {noformat}
> [Scheduled Query Executor(schedule:repl_policytest7, execution_id:7181)]: 
> FAILED: Execution Error, return code 4 from 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask. 
> org.apache.hadoop.fs.FileAlreadyExistsException: 
> /warehouse/tablespace/staging/policytest7/dGVzdDc=/14bcf976-662b-4237-b5bb-e7d63a1d089f/hive/137961/_dumpmetadata
>  for client 172.27.182.5 already exists
>     at 
> org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.startFile(FSDirWriteFileOp.java:388)
>     at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2576)
>     at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2473)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:773)
>     at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:490)
>     at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:533)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>     at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:989)
>     at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:917)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2894){noformat}
>  





[jira] [Work logged] (HIVE-26738) Use spotless-maven-plugin to check and constrain unused imports

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26738?focusedWorklogId=842159&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842159
 ]

ASF GitHub Bot logged work on HIVE-26738:
-

Author: ASF GitHub Bot
Created on: 30/Jan/23 00:18
Start Date: 30/Jan/23 00:18
Worklog Time Spent: 10m 
  Work Description: github-actions[bot] commented on PR #3762:
URL: https://github.com/apache/hive/pull/3762#issuecomment-1407816522

   This pull request has been automatically marked as stale because it has not 
had recent activity. It will be closed if no further activity occurs.
   Feel free to reach out on the d...@hive.apache.org list if the patch is in 
need of reviews.




Issue Time Tracking
---

Worklog Id: (was: 842159)
Time Spent: 50m  (was: 40m)

> Use spotless-maven-plugin to check and constrain unused imports
> ---
>
> Key: HIVE-26738
> URL: https://issues.apache.org/jira/browse/HIVE-26738
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0-alpha-1
>Reporter: weiliang hao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> * There are many unused imports in the current project, and 
> maven-checkstyle-plugin does not enforce this constraint.
>  * Introduce spotless-maven-plugin to check for unused imports during the 
> project build phase.
>  
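A typical spotless-maven-plugin setup for this looks roughly like the following pom.xml fragment (the version number and goal binding here are illustrative assumptions, not the configuration from this PR; consult the plugin documentation for the project's actual setup):

```xml
<plugin>
  <groupId>com.diffplug.spotless</groupId>
  <artifactId>spotless-maven-plugin</artifactId>
  <version>2.30.0</version> <!-- illustrative version -->
  <configuration>
    <java>
      <!-- flag (or strip, via spotless:apply) unused imports -->
      <removeUnusedImports/>
    </java>
  </configuration>
  <executions>
    <execution>
      <goals>
        <!-- bind spotless:check into the build so violations fail it -->
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```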





[jira] [Updated] (HIVE-26996) typos in hive-exec

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-26996:
--
Labels: pull-request-available  (was: )

> typos in hive-exec
> --
>
> Key: HIVE-26996
> URL: https://issues.apache.org/jira/browse/HIVE-26996
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning, Query Processor
>Affects Versions: All Versions
>Reporter: Michal Lorek
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> typos and grammar errors in hive-exec module





[jira] [Work logged] (HIVE-26996) typos in hive-exec

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26996?focusedWorklogId=842152&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842152
 ]

ASF GitHub Bot logged work on HIVE-26996:
-

Author: ASF GitHub Bot
Created on: 29/Jan/23 21:15
Start Date: 29/Jan/23 21:15
Worklog Time Spent: 10m 
  Work Description: mlorek opened a new pull request, #3994:
URL: https://github.com/apache/hive/pull/3994

   ### What changes were proposed in this pull request?
   
   fixed typos and grammar errors
   
   ### Why are the changes needed?
   
   to improve code quality
   
   ### Does this PR introduce _any_ user-facing change?
   
   only in debug level messages due to grammar errors
   
   ### How was this patch tested?
   
   build only




Issue Time Tracking
---

Worklog Id: (was: 842152)
Remaining Estimate: 0h
Time Spent: 10m

> typos in hive-exec
> --
>
> Key: HIVE-26996
> URL: https://issues.apache.org/jira/browse/HIVE-26996
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning, Query Processor
>Affects Versions: All Versions
>Reporter: Michal Lorek
>Priority: Minor
> Fix For: 4.0.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> typos and grammar errors in hive-exec module





[jira] [Resolved] (HIVE-26916) Disable TestJdbcGenericUDTFGetSplits.testGenericUDTFOrderBySplitCount1 (Done as part of HIVE-22942)

2023-01-29 Thread Aman Raj (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Raj resolved HIVE-26916.
-
Resolution: Not A Problem

> Disable TestJdbcGenericUDTFGetSplits.testGenericUDTFOrderBySplitCount1 (Done 
> as part of HIVE-22942)
> ---
>
> Key: HIVE-26916
> URL: https://issues.apache.org/jira/browse/HIVE-26916
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HIVE-26757) Add sfs+ofs support

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26757?focusedWorklogId=842138&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842138
 ]

ASF GitHub Bot logged work on HIVE-26757:
-

Author: ASF GitHub Bot
Created on: 29/Jan/23 16:45
Start Date: 29/Jan/23 16:45
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #3779:
URL: https://github.com/apache/hive/pull/3779#issuecomment-1407712707

   Kudos, SonarCloud Quality Gate passed! 
   (https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3779)
   0 Bugs (rated A), 0 Vulnerabilities (rated A), 0 Security Hotspots (rated A), 
   0 Code Smells (rated A).
   No Coverage information. No Duplication information.




Issue Time Tracking
---

Worklog Id: (was: 842138)
Time Spent: 1h 40m  (was: 1.5h)

> Add sfs+ofs support
> ---
>
> Key: HIVE-26757
> URL: https://issues.apache.org/jira/browse/HIVE-26757
> Project: Hive
>  Issue Type: Improvement
>Reporter: Michael Smith
>Assignee: Michael Smith
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/hive/blob/ebb1e2fa9914bcccecad261d53338933b699ccb1/ql/src/java/org/apache/hadoop/hive/ql/io/SingleFileSystem.java#L80]
>  shows SFS support for Ozone's o3fs protocol, but not the newer ofs protocol. 
> Please add support for {{{}sfs+ofs{}}}.





[jira] [Work logged] (HIVE-26972) orc_map_key_repeating.q failing due to wrong directory structure being commited in HIVE-26819

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26972?focusedWorklogId=842137&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842137
 ]

ASF GitHub Bot logged work on HIVE-26972:
-

Author: ASF GitHub Bot
Created on: 29/Jan/23 15:58
Start Date: 29/Jan/23 15:58
Worklog Time Spent: 10m 
  Work Description: abstractdog merged PR #3974:
URL: https://github.com/apache/hive/pull/3974




Issue Time Tracking
---

Worklog Id: (was: 842137)
Time Spent: 50m  (was: 40m)

> orc_map_key_repeating.q failing due to wrong directory structure being 
> commited in HIVE-26819
> -
>
> Key: HIVE-26972
> URL: https://issues.apache.org/jira/browse/HIVE-26972
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> orc_map_key_repeating.q tries to diff the test output against 
> ql/src/test/results/clientpositive/orc_map_key_repeating.q.out, but this q.out 
> file does not exist. The file was committed in HIVE-26819, but the directory 
> structure differs between oss/master and oss/branch-3, so it must be moved to 
> the right directory.
>  
> Tests failing due to this :
> diff: 
> /home/jenkins/agent/workspace/hive-precommit_PR-3929/ql/src/test/results/clientpositive/orc_map_key_repeating.q.out:
>  No such file or directory





[jira] [Updated] (HIVE-26972) orc_map_key_repeating.q failing due to wrong directory structure being commited in HIVE-26819

2023-01-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-26972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor updated HIVE-26972:

Fix Version/s: 3.2.0

> orc_map_key_repeating.q failing due to wrong directory structure being 
> commited in HIVE-26819
> -
>
> Key: HIVE-26972
> URL: https://issues.apache.org/jira/browse/HIVE-26972
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>


[jira] [Resolved] (HIVE-26972) orc_map_key_repeating.q failing due to wrong directory structure being commited in HIVE-26819

2023-01-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-26972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor resolved HIVE-26972.
-
Resolution: Fixed

> orc_map_key_repeating.q failing due to wrong directory structure being 
> commited in HIVE-26819
> -
>
> Key: HIVE-26972
> URL: https://issues.apache.org/jira/browse/HIVE-26972
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>


[jira] [Commented] (HIVE-26972) orc_map_key_repeating.q failing due to wrong directory structure being commited in HIVE-26819

2023-01-29 Thread Jira


[ 
https://issues.apache.org/jira/browse/HIVE-26972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17681757#comment-17681757
 ] 

László Bodor commented on HIVE-26972:
-

merged to branch-3, thanks [~amanraj2520] for the patch!

> orc_map_key_repeating.q failing due to wrong directory structure being 
> commited in HIVE-26819
> -
>
> Key: HIVE-26972
> URL: https://issues.apache.org/jira/browse/HIVE-26972
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>


[jira] [Updated] (HIVE-26972) orc_map_key_repeating.q failing due to wrong directory structure being commited in HIVE-26819

2023-01-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-26972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor updated HIVE-26972:

Summary: orc_map_key_repeating.q failing due to wrong directory structure 
being commited in HIVE-26819  (was: Test failing due to wrong directory 
structure being commited in HIVE-26819)

> orc_map_key_repeating.q failing due to wrong directory structure being 
> commited in HIVE-26819
> -
>
> Key: HIVE-26972
> URL: https://issues.apache.org/jira/browse/HIVE-26972
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>


[jira] [Work logged] (HIVE-26972) Test failing due to wrong directory structure being commited in HIVE-26819

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26972?focusedWorklogId=842136&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842136
 ]

ASF GitHub Bot logged work on HIVE-26972:
-

Author: ASF GitHub Bot
Created on: 29/Jan/23 15:56
Start Date: 29/Jan/23 15:56
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on PR #3974:
URL: https://github.com/apache/hive/pull/3974#issuecomment-1407700381

   yes, this can happen since we use llap driver as default on master, makes 
sense




Issue Time Tracking
---

Worklog Id: (was: 842136)
Time Spent: 40m  (was: 0.5h)

> Test failing due to wrong directory structure being commited in HIVE-26819
> --
>
> Key: HIVE-26972
> URL: https://issues.apache.org/jira/browse/HIVE-26972
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>


[jira] [Work logged] (HIVE-26757) Add sfs+ofs support

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26757?focusedWorklogId=842135&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842135
 ]

ASF GitHub Bot logged work on HIVE-26757:
-

Author: ASF GitHub Bot
Created on: 29/Jan/23 15:50
Start Date: 29/Jan/23 15:50
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on PR #3779:
URL: https://github.com/apache/hive/pull/3779#issuecomment-1407699051

   I don't know the implementation details of the original SingleFileSystem, 
but the manner of addition looks convenient, so this patch also looks good to me
   we haven't added tests for specific underlying filesystems so far, if manual 
tests worked for ofs, this is good to go




Issue Time Tracking
---

Worklog Id: (was: 842135)
Time Spent: 1.5h  (was: 1h 20m)

> Add sfs+ofs support
> ---
>
> Key: HIVE-26757
> URL: https://issues.apache.org/jira/browse/HIVE-26757
> Project: Hive
>  Issue Type: Improvement
>Reporter: Michael Smith
>Assignee: Michael Smith
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> [https://github.com/apache/hive/blob/ebb1e2fa9914bcccecad261d53338933b699ccb1/ql/src/java/org/apache/hadoop/hive/ql/io/SingleFileSystem.java#L80]
>  shows SFS support for Ozone's o3fs protocol, but not the newer ofs protocol. 
> Please add support for {{{}sfs+ofs{}}}.
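
For context, Hive's SingleFileSystem wraps one underlying filesystem per `sfs+<scheme>` URI scheme, with each wrapped scheme added as another small subclass. The sketch below is a simplified stand-in (the real base class and its registration live in org.apache.hadoop.hive.ql.io and Hadoop's FileSystem machinery, which are not reproduced here); it only illustrates that ofs support mirrors the existing o3fs case:

```java
import java.util.Locale;

public class SfsSchemeDemo {
    // Simplified stand-in for org.apache.hadoop.hive.ql.io.SingleFileSystem:
    // each subclass's lower-cased simple name serves as the wrapped raw scheme.
    abstract static class SingleFileSystem {
        String getRawScheme() {
            return getClass().getSimpleName().toLowerCase(Locale.ROOT);
        }
        String getScheme() {
            return "sfs+" + getRawScheme();
        }
    }

    static class O3fs extends SingleFileSystem { }  // existing Ozone o3fs support
    static class Ofs extends SingleFileSystem { }   // the newer ofs counterpart

    public static void main(String[] args) {
        System.out.println(new O3fs().getScheme()); // sfs+o3fs
        System.out.println(new Ofs().getScheme());  // sfs+ofs
    }
}
```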





[jira] [Work logged] (HIVE-26757) Add sfs+ofs support

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26757?focusedWorklogId=842134&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842134
 ]

ASF GitHub Bot logged work on HIVE-26757:
-

Author: ASF GitHub Bot
Created on: 29/Jan/23 15:44
Start Date: 29/Jan/23 15:44
Worklog Time Spent: 10m 
  Work Description: MikaelSmith opened a new pull request, #3779:
URL: https://github.com/apache/hive/pull/3779

   Adds sfs+ofs support to mirror sfs+o3fs. ofs is a newer protocol by Ozone 
that's meant to replace o3fs.




Issue Time Tracking
---

Worklog Id: (was: 842134)
Time Spent: 1h 20m  (was: 1h 10m)

> Add sfs+ofs support
> ---
>
> Key: HIVE-26757
> URL: https://issues.apache.org/jira/browse/HIVE-26757
> Project: Hive
>  Issue Type: Improvement
>Reporter: Michael Smith
>Assignee: Michael Smith
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>


[jira] [Work logged] (HIVE-26400) Provide docker images for Hive

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26400?focusedWorklogId=842133&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842133
 ]

ASF GitHub Bot logged work on HIVE-26400:
-

Author: ASF GitHub Bot
Created on: 29/Jan/23 15:41
Start Date: 29/Jan/23 15:41
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on code in PR #3448:
URL: https://github.com/apache/hive/pull/3448#discussion_r1089995000


##
dev-support/docker/Dockerfile:
##
@@ -0,0 +1,53 @@
+#

Review Comment:
   I agree, this looks better in the packaging project





Issue Time Tracking
---

Worklog Id: (was: 842133)
Time Spent: 7h 10m  (was: 7h)

> Provide docker images for Hive
> --
>
> Key: HIVE-26400
> URL: https://issues.apache.org/jira/browse/HIVE-26400
> Project: Hive
>  Issue Type: Sub-task
>  Components: Build Infrastructure
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Blocker
>  Labels: hive-4.0.0-must, pull-request-available
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> Make Apache Hive able to run inside a docker container in pseudo-distributed 
> mode, with MySQL/Derby as its backing database, providing the following:
>  * Quick-start/Debugging/Prepare a test env for Hive;
>  * Tools to build target image with specified version of Hive and its 
> dependencies;
>  * Images can be used as the basis for the Kubernetes operator.





[jira] [Resolved] (HIVE-26973) Test fix for subquery_subquery_chain.q

2023-01-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-26973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor resolved HIVE-26973.
-
Resolution: Fixed

> Test fix for subquery_subquery_chain.q
> --
>
> Key: HIVE-26973
> URL: https://issues.apache.org/jira/browse/HIVE-26973
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> There are test failures in subquery_subquery_chain.q. Link to the test 
> failures: 
> [http://ci.hive.apache.org/blue/organizations/jenkins/hive-precommit/detail/PR-3929/4/tests/]





[jira] [Updated] (HIVE-26973) Test fix for subquery_subquery_chain.q

2023-01-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-26973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor updated HIVE-26973:

Fix Version/s: 3.2.0

> Test fix for subquery_subquery_chain.q
> --
>
> Key: HIVE-26973
> URL: https://issues.apache.org/jira/browse/HIVE-26973
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>


[jira] [Commented] (HIVE-26973) Test fix for subquery_subquery_chain.q

2023-01-29 Thread Jira


[ 
https://issues.apache.org/jira/browse/HIVE-26973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17681755#comment-17681755
 ] 

László Bodor commented on HIVE-26973:
-

merged to branch-3, thanks [~amanraj2520] for taking care of this!


> Test fix for subquery_subquery_chain.q
> --
>
> Key: HIVE-26973
> URL: https://issues.apache.org/jira/browse/HIVE-26973
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>


[jira] [Work logged] (HIVE-26973) Test fix for subquery_subquery_chain.q

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26973?focusedWorklogId=842132&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842132
 ]

ASF GitHub Bot logged work on HIVE-26973:
-

Author: ASF GitHub Bot
Created on: 29/Jan/23 15:35
Start Date: 29/Jan/23 15:35
Worklog Time Spent: 10m 
  Work Description: abstractdog merged PR #3975:
URL: https://github.com/apache/hive/pull/3975




Issue Time Tracking
---

Worklog Id: (was: 842132)
Time Spent: 0.5h  (was: 20m)

> Test fix for subquery_subquery_chain.q
> --
>
> Key: HIVE-26973
> URL: https://issues.apache.org/jira/browse/HIVE-26973
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>


[jira] [Resolved] (HIVE-26939) Hive LLAP Application Master fails to come up with Hadoop 3.3.4

2023-01-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-26939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor resolved HIVE-26939.
-
Resolution: Fixed

merged to master, thanks [~amanraj2520] for the patch!

> Hive LLAP Application Master fails to come up with Hadoop 3.3.4
> ---
>
> Key: HIVE-26939
> URL: https://issues.apache.org/jira/browse/HIVE-26939
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-2
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> When the current oss master Hive tries to bring up the LLAP Application 
> Master, it fails with this issue:
> {code:java}
> Executing the launch command\nINFO client.ServiceClient: Loading service 
> definition from local FS: 
> /var/lib/ambari-agent/tmp/llap-yarn-service_2023-01-10_07-56-46/Yarnfile\nERROR
>  utils.JsonSerDeser: Exception while parsing json input 
> stream\ncom.fasterxml.jackson.databind.exc.InvalidFormatException: Cannot 
> deserialize value of type 
> `org.apache.hadoop.yarn.service.api.records.PlacementScope` from String 
> \"NODE\": not one of the values accepted for Enum class: [node, rack]\n at 
> [Source: (org.apache.hadoop.fs.ChecksumFileSystem$FSDataBoundedInputStream); 
> line: 31, column: 22] (through reference chain: 
> org.apache.hadoop.yarn.service.api.records.Service[\"components\"]->java.util.ArrayList[0]->org.apache.hadoop.yarn.service.api.records.Component[\"placement_policy\"]->org.apache.hadoop.yarn.service.api.records.PlacementPolicy[\"constraints\"]->java.util.ArrayList[0]->org.apache.hadoop.yarn.service.api.records.PlacementConstraint[\"scope\"])\n\tat
>  
> com.fasterxml.jackson.databind.exc.InvalidFormatException.from(InvalidFormatException.java:67)
>  ~[jackson-databind-2.12.7.jar:2.12.7]\n\tat 
> com.fasterxml.jackson.databind.DeserializationContext.weirdStringException(DeserializationContext.java:1851)
>  ~[jackson-databind-2.12.7.jar:2.12.7]\n\tat 
> com.fasterxml.jackson.databind.DeserializationContext.handleWeirdStringValue(DeserializationContext.java:1079)
>  ~[jackson-databind-2.12.7.jar:2.12.7]\n\tat 
> com.fasterxml.jackson.databind.deser.std.EnumDeserializer._deserializeAltString(EnumDeserializer.java:339)
>  ~[jackson-databind-2.12.7.jar:2.12.7]\n\tat 
> com.fasterxml.jackson.databind.deser.std.EnumDeserializer._fromString(EnumDeserializer.java:214)
>  ~[jackson-databind-2.12.7.jar:2.12.7]\n\tat 
> com.fasterxml.jackson.databind.deser.std.EnumDeserializer.deserialize(EnumDeserializer.java:188)
>  ~[jackson-databind-2.12.7.jar:2.12.7]\n\tat 
> com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
>  ~[jackson-databind-2.12.7.jar:2.12.7]\n\tat 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:324)
>  ~[jackson-databind-2.12.7.jar:2.12.7]\n\tat 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:187)
>  ~[jackson-databind-2.12.7.jar:2.12.7]\n\tat 
> com.fasterxml.jackson.databind.deser.std.CollectionDeserializer._deserializeFromArray(CollectionDeserializer.java:355)
>  ~[jackson-databind-2.12.7.jar:2.12.7]\n\tat 
> com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:244)
>  ~[jackson-databind-2.12.7.jar:2.12.7]\n\tat 
> com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:28)
>  ~[jackson-databind-2.12.7.jar:2.12.7]\n\tat 
> com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
>  ~[jackson-databind-2.12.7.jar:2.12.7]\n\tat 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:324)
>  ~[jackson-databind-2.12.7.jar:2.12.7]\n\tat 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:187)
>  ~[jackson-databind-2.12.7.jar:2.12.7]\n\tat 
> com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
>  ~[jackson-databind-2.12.7.jar:2.12.7]\n\tat 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:324)
>  ~[jackson-databind-2.12.7.jar:2.12.7]\n\tat 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:187)
>  ~[jackson-databind-2.12.7.jar:2.12.7]\n\tat 
> com.fasterxml.jackson.databind.deser.std.CollectionDeserializer._deserializeFromArray(CollectionDeserializer.java:355)
>  ~[jackson-databind-2.12.7.jar:2.12.7]\n\tat 
> com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:244)
>  ~[jackson-databind-2.12.7.jar:2.12.7
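
The root cause of the InvalidFormatException above is enum case sensitivity: PlacementScope declares its constants in lower case (node, rack), while the generated Yarnfile carries the upper-case string "NODE", and Jackson's default enum handling is case-sensitive. A stdlib-only sketch with a stand-in enum (the real PlacementScope is a YARN class, and a Jackson-side workaround would be enabling the ACCEPT_CASE_INSENSITIVE_ENUMS mapper feature):

```java
import java.util.Locale;

public class EnumCaseDemo {
    // Stand-in mirroring the constant names of
    // org.apache.hadoop.yarn.service.api.records.PlacementScope.
    enum PlacementScope { node, rack }

    // Tolerant parse, analogous to Jackson's ACCEPT_CASE_INSENSITIVE_ENUMS.
    static PlacementScope parseLenient(String s) {
        return PlacementScope.valueOf(s.toLowerCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        try {
            PlacementScope.valueOf("NODE"); // strict, case-sensitive parse
        } catch (IllegalArgumentException e) {
            // same mismatch that makes Jackson reject "NODE"
            System.out.println("strict parse failed: " + e.getMessage());
        }
        System.out.println("lenient parse: " + parseLenient("NODE"));
    }
}
```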

[jira] [Updated] (HIVE-26939) Hive LLAP Application Master fails to come up with Hadoop 3.3.4

2023-01-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-26939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor updated HIVE-26939:

Fix Version/s: 4.0.0-alpha-2

> Hive LLAP Application Master fails to come up with Hadoop 3.3.4
> ---
>
> Key: HIVE-26939
> URL: https://issues.apache.org/jira/browse/HIVE-26939
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-2
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>

[jira] [Work logged] (HIVE-26939) Hive LLAP Application Master fails to come up with Hadoop 3.3.4

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26939?focusedWorklogId=842131&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842131
 ]

ASF GitHub Bot logged work on HIVE-26939:
-

Author: ASF GitHub Bot
Created on: 29/Jan/23 15:31
Start Date: 29/Jan/23 15:31
Worklog Time Spent: 10m 
  Work Description: abstractdog merged PR #3941:
URL: https://github.com/apache/hive/pull/3941




Issue Time Tracking
---

Worklog Id: (was: 842131)
Time Spent: 2h 20m  (was: 2h 10m)

> Hive LLAP Application Master fails to come up with Hadoop 3.3.4
> ---
>
> Key: HIVE-26939
> URL: https://issues.apache.org/jira/browse/HIVE-26939
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>

[jira] [Resolved] (HIVE-26991) typos in method and field names

2023-01-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-26991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor resolved HIVE-26991.
-
Resolution: Fixed

> typos in method and field names
> ---
>
> Key: HIVE-26991
> URL: https://issues.apache.org/jira/browse/HIVE-26991
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning, Query Processor
>Reporter: Michal Lorek
>Assignee: Michal Lorek
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> typos in
> ql/src/java/org/apache/hadoop/hive/ql/ddl/DDLTask.java
> [ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java|https://github.com/apache/hive/pull/3945/files#diff-ecf0bd4d24a899907e8d368d37d3f4945fd1a323a9da9b607b8444ab9793140d]





[jira] [Updated] (HIVE-26991) typos in method and field names

2023-01-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-26991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor updated HIVE-26991:

Fix Version/s: 4.0.0-alpha-2

> typos in method and field names
> ---
>
> Key: HIVE-26991
> URL: https://issues.apache.org/jira/browse/HIVE-26991
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning, Query Processor
>Reporter: Michal Lorek
>Assignee: Michal Lorek
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>


[jira] [Commented] (HIVE-26991) typos in method and field names

2023-01-29 Thread Jira


[ 
https://issues.apache.org/jira/browse/HIVE-26991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17681753#comment-17681753
 ] 

László Bodor commented on HIVE-26991:
-

merged to master, thanks [~mlorek] for taking care of this!



[jira] [Work logged] (HIVE-26991) typos in method and field names

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26991?focusedWorklogId=842130&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842130
 ]

ASF GitHub Bot logged work on HIVE-26991:
-

Author: ASF GitHub Bot
Created on: 29/Jan/23 15:29
Start Date: 29/Jan/23 15:29
Worklog Time Spent: 10m 
  Work Description: abstractdog merged PR #3945:
URL: https://github.com/apache/hive/pull/3945




Issue Time Tracking
---

Worklog Id: (was: 842130)
Time Spent: 0.5h  (was: 20m)



[jira] [Work logged] (HIVE-26991) typos in method and field names

2023-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26991?focusedWorklogId=842129&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-842129
 ]

ASF GitHub Bot logged work on HIVE-26991:
-

Author: ASF GitHub Bot
Created on: 29/Jan/23 15:28
Start Date: 29/Jan/23 15:28
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on PR #3945:
URL: https://github.com/apache/hive/pull/3945#issuecomment-1407693507

   +1
   code quality changes are welcome




Issue Time Tracking
---

Worklog Id: (was: 842129)
Time Spent: 20m  (was: 10m)
