[hive] branch master updated: HIVE-26601: Registering table metric during second load cycle of optimized bootstrap (#3992) (Vinit Patni, reviewed by Teddy Choi)

2023-02-02 Thread tchoi
This is an automated email from the ASF dual-hosted git repository.

tchoi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 870713ce031 HIVE-26601: Registering table metric during second load cycle of optimized bootstrap (#3992) (Vinit Patni, reviewed by Teddy Choi)
870713ce031 is described below

commit 870713ce031b346cdd9008a3217d8cc806ea9f7a
Author: vinitpatni 
AuthorDate: Fri Feb 3 13:13:30 2023 +0530

HIVE-26601: Registering table metric during second load cycle of optimized bootstrap (#3992) (Vinit Patni, reviewed by Teddy Choi)
---
 .../parse/TestReplicationOptimisedBootstrap.java   | 81 ++
 .../hadoop/hive/ql/exec/repl/ReplLoadWork.java |  6 +-
 .../incremental/IncrementalLoadTasksBuilder.java   |  7 +-
 3 files changed, 90 insertions(+), 4 deletions(-)

diff --git a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
index 4959bacf5ad..a55b7c8a5b4 100644
--- a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
+++ b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
@@ -992,6 +992,87 @@ public class TestReplicationOptimisedBootstrap extends BaseReplicationScenariosA
 assertEquals(tableMetric.getTotalCount(), tableDiffEntries.size());
   }
 
+  @Test
+  public void testTblMetricRegisterDuringSecondLoadCycleOfOptimizedBootstrap() throws Throwable {
+List<String> withClause = ReplicationTestUtils.includeExternalTableClause(false);
+withClause.add("'" + HiveConf.ConfVars.REPLDIR.varname + "'='" + primary.repldDir + "'");
+WarehouseInstance.Tuple tuple = primary.run("use " + primaryDbName)
+.run("create table t1_managed (id int) clustered by(id) into 3 buckets stored as orc " +
+"tblproperties (\"transactional\"=\"true\")")
+.run("insert into table t1_managed values (10)")
+.run("insert into table t1_managed values (20),(31),(42)")
+.dump(primaryDbName, withClause);
+
+// Do the bootstrap load and check all the external & managed tables are present.
+replica.load(replicatedDbName, primaryDbName, withClause)
+.run("repl status " + replicatedDbName)
+.verifyResult(tuple.lastReplicationId)
+.run("use " + replicatedDbName)
+.run("show tables")
+.verifyResults(new String[]{"t1_managed"})
+.verifyReplTargetProperty(replicatedDbName);
+
+// Do an incremental dump & load, Add one table which we can drop & an empty table as well.
+tuple = primary.run("use " + primaryDbName)
+.run("create table t2_managed (id int) clustered by(id) into 3 buckets stored as orc " +
+"tblproperties (\"transactional\"=\"true\")")
+.run("insert into table t2_managed values (10)")
+.run("insert into table t2_managed values (20),(31),(42)")
+.dump(primaryDbName, withClause);
+
+replica.load(replicatedDbName, primaryDbName, withClause)
+.run("use " + replicatedDbName)
+.run("show tables")
+.verifyResults(new String[]{"t1_managed", "t2_managed"})
+.verifyReplTargetProperty(replicatedDbName);
+
+primary.run("use " + primaryDbName)
+.run("insert into table t1_managed values (30)")
+.run("insert into table t1_managed values (50),(51),(52)");
+
+// Prepare for reverse replication.
+DistributedFileSystem replicaFs = replica.miniDFSCluster.getFileSystem();
+Path newReplDir = new Path(replica.repldDir + "1");
+replicaFs.mkdirs(newReplDir);
+withClause = ReplicationTestUtils.includeExternalTableClause(false);
+withClause.add("'" + HiveConf.ConfVars.REPLDIR.varname + "'='" + newReplDir + "'");
+
+
+// Do a reverse dump
+tuple = replica.dump(replicatedDbName, withClause);
+
+// Check the event ack file got created.
+assertTrue(new Path(tuple.dumpLocation, EVENT_ACK_FILE).toString() + " doesn't exist",
+replicaFs.exists(new Path(tuple.dumpLocation, EVENT_ACK_FILE)));
+
+
+// Do a load, this should create a table_diff_complete directory
+primary.load(primaryDbName,replicatedDbName, withClause);
+
+// Check the table diff directory exist.
+assertTrue(new Path(tuple.dumpLocation, TABLE_DIFF_COMPLETE_DIRECTORY).toString() + " doesn't exist",
+replicaFs.exists(new Path(tuple.dumpLocation, TABLE_DIFF_COMPLETE_DIRECTORY)));
+
+Path dumpPath = new Path(tuple.dumpLocation);
+// Check the table diff has all the modified table, including the dropped and empty ones
+HashSet<String> tableDiffEntries = getTablesFromTableDiffFile(dumpPath, conf);
+

[hive] branch master updated: HIVE-27004 : DateTimeFormatterBuilder#appendZoneText cannot parse 'UTC+' in Java versions higher than 8. (#4008) (Anmol Sundaram, reviewed by Sai Hemanth G)

2023-02-02 Thread gsaihemanth
This is an automated email from the ASF dual-hosted git repository.

gsaihemanth pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 21607c78d31 HIVE-27004 : DateTimeFormatterBuilder#appendZoneText cannot parse 'UTC+' in Java versions higher than 8. (#4008) (Anmol Sundaram, reviewed by Sai Hemanth G)
21607c78d31 is described below

commit 21607c78d316a0f4caff29ed209e23a50df45c05
Author: AnmolSun <124231245+anmol...@users.noreply.github.com>
AuthorDate: Fri Feb 3 10:15:25 2023 +0530

HIVE-27004 : DateTimeFormatterBuilder#appendZoneText cannot parse 'UTC+' in Java versions higher than 8. (#4008) (Anmol Sundaram, reviewed by Sai Hemanth G)
---
 common/src/java/org/apache/hadoop/hive/common/type/TimestampTZUtil.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/common/src/java/org/apache/hadoop/hive/common/type/TimestampTZUtil.java b/common/src/java/org/apache/hadoop/hive/common/type/TimestampTZUtil.java
index e71e0e85228..690a1ea9b3e 100644
--- a/common/src/java/org/apache/hadoop/hive/common/type/TimestampTZUtil.java
+++ b/common/src/java/org/apache/hadoop/hive/common/type/TimestampTZUtil.java
@@ -79,7 +79,7 @@ public class TimestampTZUtil {
 optionalEnd().optionalEnd();
 // Zone part
 builder.optionalStart().appendLiteral(" ").optionalEnd();
-builder.optionalStart().appendZoneText(TextStyle.NARROW).optionalEnd();
+builder.optionalStart().appendZoneOrOffsetId().optionalEnd();
 
 FORMATTER = builder.toFormatter();
   }
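
For context, the behavioural difference the one-line fix addresses can be sketched standalone: on Java 9 and later the text-style zone parser no longer accepts prefixed-offset IDs such as "UTC+05:30", while appendZoneOrOffsetId() parses any valid ZoneId form. A minimal sketch, assuming a made-up class name and sample string (neither is from the patch):

import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;

public class ZoneParseSketch {
  public static void main(String[] args) {
    // Equivalent to the fixed builder line above: accepts region IDs,
    // bare offsets, and prefixed offsets like "UTC+05:30".
    DateTimeFormatter f = new DateTimeFormatterBuilder()
        .appendZoneOrOffsetId()
        .toFormatter();
    System.out.println(f.parse("UTC+05:30").query(ZoneId::from)); // prints UTC+05:30
  }
}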



[hive] branch master updated: HIVE-26963: Unset repl.failover.endpoint during second cycle of optimized bootstrap (#4006) (Rakshith Chandraiah, reviewed by Teddy Choi)

2023-02-02 Thread tchoi
This is an automated email from the ASF dual-hosted git repository.

tchoi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new a8151681965 HIVE-26963: Unset repl.failover.endpoint during second cycle of optimized bootstrap (#4006) (Rakshith Chandraiah, reviewed by Teddy Choi)
a8151681965 is described below

commit a8151681965ceab430b3d778ad996dd0af560934
Author: Rakshith C <56068841+rakshith...@users.noreply.github.com>
AuthorDate: Fri Feb 3 10:04:56 2023 +0530

HIVE-26963: Unset repl.failover.endpoint during second cycle of optimized bootstrap (#4006) (Rakshith Chandraiah, reviewed by Teddy Choi)
---
 .../parse/TestReplicationOptimisedBootstrap.java   | 63 ++
 .../hadoop/hive/ql/exec/repl/ReplDumpTask.java |  5 +-
 .../hadoop/hive/ql/exec/repl/ReplLoadTask.java |  4 ++
 3 files changed, 71 insertions(+), 1 deletion(-)

diff --git a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
index 182cb966dfc..4959bacf5ad 100644
--- a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
+++ b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
@@ -66,6 +66,7 @@ import static org.apache.hadoop.hdfs.protocol.HdfsConstants.QUOTA_DONT_SET;
 import static org.apache.hadoop.hdfs.protocol.HdfsConstants.QUOTA_RESET;
 import static org.apache.hadoop.hive.common.repl.ReplConst.REPL_ENABLE_BACKGROUND_THREAD;
 import static org.apache.hadoop.hive.common.repl.ReplConst.REPL_TARGET_DB_PROPERTY;
+import static org.apache.hadoop.hive.common.repl.ReplConst.REPL_FAILOVER_ENDPOINT;
 import static org.apache.hadoop.hive.common.repl.ReplConst.TARGET_OF_REPLICATION;
 import static org.apache.hadoop.hive.metastore.ReplChangeManager.SOURCE_OF_REPLICATION;
 import static org.apache.hadoop.hive.ql.exec.repl.OptimisedBootstrapUtils.EVENT_ACK_FILE;
@@ -1330,4 +1331,66 @@ public class TestReplicationOptimisedBootstrap extends BaseReplicationScenariosA
 
assertTrue(MetaStoreUtils.isDbBeingFailedOverAtEndpoint(replica.getDatabase(replicatedDbName),
 MetaStoreUtils.FailoverEndpoint.TARGET));
   }
+  @Test
+  public void testOptimizedBootstrapWithControlledFailover() throws Throwable {
+primary.run("use " + primaryDbName)
+.run("create  table t1 (id string)")
+.run("insert into table t1 values ('A')")
+.dump(primaryDbName);
+replica.load(replicatedDbName, primaryDbName);
+
+primary.dump(primaryDbName);
+replica.load(replicatedDbName, primaryDbName);
+//initiate a controlled failover from primary to replica.
+List<String> failoverConfigs = Arrays.asList("'" + HiveConf.ConfVars.HIVE_REPL_FAILOVER_START + "'='true'");
+primary.dump(primaryDbName, failoverConfigs);
+replica.load(replicatedDbName, primaryDbName, failoverConfigs);
+
+primary.run("use " + primaryDbName)
+.run("create  table t3 (id int)")
+.run("insert into t3 values(1),(2),(3)")
+.run("insert into t1 values('B')"); //modify primary after failover.
+
+// initiate first cycle of optimized bootstrap
+WarehouseInstance.Tuple reverseDump = replica.run("use " + replicatedDbName)
+.run("create table t2 (col int)")
+.run("insert into t2 values(1),(2)")
+.dump(replicatedDbName);
+
+FileSystem fs = new Path(reverseDump.dumpLocation).getFileSystem(conf);
+assertTrue(fs.exists(new Path(reverseDump.dumpLocation, EVENT_ACK_FILE)));
+
+primary.load(primaryDbName, replicatedDbName);
+
+assertEquals(MetaStoreUtils.FailoverEndpoint.SOURCE.toString(),
+primary.getDatabase(primaryDbName).getParameters().get(REPL_FAILOVER_ENDPOINT));
+
+assertEquals(MetaStoreUtils.FailoverEndpoint.TARGET.toString(),
+replica.getDatabase(replicatedDbName).getParameters().get(REPL_FAILOVER_ENDPOINT));
+
+assertTrue(fs.exists(new Path(reverseDump.dumpLocation, TABLE_DIFF_COMPLETE_DIRECTORY)));
+HashSet<String> tableDiffEntries = getTablesFromTableDiffFile(new Path(reverseDump.dumpLocation), conf);
+assertTrue(!tableDiffEntries.isEmpty());
+
+assertTrue(MetaStoreUtils.isDbBeingFailedOverAtEndpoint(primary.getDatabase(primaryDbName),
+MetaStoreUtils.FailoverEndpoint.SOURCE));
+assertTrue(MetaStoreUtils.isDbBeingFailedOverAtEndpoint(replica.getDatabase(replicatedDbName),
+MetaStoreUtils.FailoverEndpoint.TARGET));
+
+// second cycle of optimized bootstrap
+reverseDump = replica.dump(replicatedDbName);
+assertTrue(fs.exists(new Path(reverseDump.dumpLocation, OptimisedBootstrapUtils.BOOTSTRAP_TABLES_LIST)));
+
+primary.load(primaryDbName, replicatedDbName);

[hive] branch master updated: HIVE-26035: Implement direct SQL for add partitions to improve performance at HMS (#3905)

2023-02-02 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 7bca1b312d1 HIVE-26035: Implement direct SQL for add partitions to improve performance at HMS (#3905)
7bca1b312d1 is described below

commit 7bca1b312d135edac8dd5e8f8ca6a1adfdeb5829
Author: Venu Reddy <35334869+venureddy2...@users.noreply.github.com>
AuthorDate: Fri Feb 3 05:32:30 2023 +0530

HIVE-26035: Implement direct SQL for add partitions to improve performance at HMS (#3905)

* HIVE-26035: Implement direct SQL for add partitions to improve performance at HMS (Venu Reddy reviewed by Zhihua Deng and Saihemanth Gantasala)
---
 .../hadoop/hive/metastore/conf/MetastoreConf.java  |   5 +
 .../hadoop/hive/metastore/DatabaseProduct.java |  42 ++
 .../hadoop/hive/metastore/DirectSqlInsertPart.java | 827 +
 .../hadoop/hive/metastore/MetaStoreDirectSql.java  |  19 +
 .../apache/hadoop/hive/metastore/ObjectStore.java  |  60 +-
 5 files changed, 938 insertions(+), 15 deletions(-)

diff --git a/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java b/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java
index 6f9932dd3fd..bb65e8d1dad 100644
--- a/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java
+++ b/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java
@@ -706,6 +706,10 @@ public class MetastoreConf {
 "Default transaction isolation level for identity generation."),
 
DATANUCLEUS_USE_LEGACY_VALUE_STRATEGY("datanucleus.rdbms.useLegacyNativeValueStrategy",
 "datanucleus.rdbms.useLegacyNativeValueStrategy", true, ""),
+DATANUCLEUS_QUERY_SQL_ALLOWALL("datanucleus.query.sql.allowAll", "datanucleus.query.sql.allowAll",
+true, "In strict JDO all SQL queries must begin with \"SELECT ...\", and consequently it "
++ "is not possible to execute queries that change data. This DataNucleus property when set to true allows "
++ "insert, update and delete operations from JDO SQL. Default value is true."),
 
 // Parameters for configuring SSL encryption to the database store
 // If DBACCESS_USE_SSL is false, then all other DBACCESS_SSL_* properties will be ignored
@@ -1924,6 +1928,7 @@ public class MetastoreConf {
   ConfVars.DATANUCLEUS_PLUGIN_REGISTRY_BUNDLE_CHECK,
   ConfVars.DATANUCLEUS_TRANSACTION_ISOLATION,
   ConfVars.DATANUCLEUS_USE_LEGACY_VALUE_STRATEGY,
+  ConfVars.DATANUCLEUS_QUERY_SQL_ALLOWALL,
   ConfVars.DETACH_ALL_ON_COMMIT,
   ConfVars.IDENTIFIER_FACTORY,
   ConfVars.MANAGER_FACTORY_CLASS,
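
As a side note, a minimal sketch of flipping the new property in code, assuming the standard MetastoreConf boolean accessors apply to it (the class name below is hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
import org.apache.hadoop.hive.metastore.conf.MetastoreConf.ConfVars;

public class AllowAllSketch {
  public static void main(String[] args) {
    Configuration conf = MetastoreConf.newMetastoreConf();
    // The property defaults to true; a deployment that wants strict JDO
    // (SELECT-only JDO SQL) could switch it back off.
    MetastoreConf.setBoolVar(conf, ConfVars.DATANUCLEUS_QUERY_SQL_ALLOWALL, false);
    System.out.println(MetastoreConf.getBoolVar(conf, ConfVars.DATANUCLEUS_QUERY_SQL_ALLOWALL)); // false
  }
}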
diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/DatabaseProduct.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/DatabaseProduct.java
index 301949c40f8..3f3d361b9a0 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/DatabaseProduct.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/DatabaseProduct.java
@@ -691,6 +691,48 @@ public class DatabaseProduct implements Configurable {
 return map;
   }
 
+  /**
+   * Gets the multiple row insert query for the given table with specified columns and row format
+   * @param tableName table name to be used in query
+   * @param columns comma separated column names string
+   * @param rowFormat values format string used in the insert query. Format is like (?,?...?) and the number of
+   *  question marks in the format is equal to number of column names in the columns argument
+   * @param batchCount number of rows in the query
+   * @return database specific multiple row insert query
+   */
+  public String getBatchInsertQuery(String tableName, String columns, String rowFormat, int batchCount) {
+StringBuilder sb = new StringBuilder();
+String fixedPart = tableName + " " + columns + " values ";
+String row;
+if (isORACLE()) {
+  sb.append("insert all ");
+  row = "into " + fixedPart + rowFormat + " ";
+} else {
+  sb.append("insert into " + fixedPart);
+  row = rowFormat + ',';
+}
+for (int i = 0; i < batchCount; i++) {
+  sb.append(row);
+}
+if (isORACLE()) {
+  sb.append("select * from dual ");
+}
+sb.setLength(sb.length() - 1);
+return sb.toString();
+  }
+
+  /**
+   * Gets the boolean value specific to database for the given input
+   * @param val boolean value
+   * @return database specific value
+   */
+  public Object getBoolean(boolean val) {
+if (isDERBY()) {
+  return val ? "Y" : "N";
+}
+   
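
To make the generated statements concrete, here is a self-contained sketch (not the Hive class itself) mirroring the non-Oracle branch of getBatchInsertQuery above; for Oracle the method instead emits "insert all into ... select * from dual". The table and column names are invented for illustration:

public class BatchInsertSketch {
  // Mirrors the non-Oracle branch: a single "insert into", then
  // batchCount copies of the row format separated by commas.
  static String batchInsertQuery(String tableName, String columns, String rowFormat, int batchCount) {
    StringBuilder sb = new StringBuilder("insert into " + tableName + " " + columns + " values ");
    for (int i = 0; i < batchCount; i++) {
      sb.append(rowFormat).append(',');
    }
    sb.setLength(sb.length() - 1); // drop the trailing comma, as the method above does
    return sb.toString();
  }

  public static void main(String[] args) {
    // prints: insert into PARTITIONS (PART_ID,PART_NAME) values (?,?),(?,?),(?,?)
    System.out.println(batchInsertQuery("PARTITIONS", "(PART_ID,PART_NAME)", "(?,?)", 3));
  }
}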

[hive] branch master updated: HIVE-26935: Expose root cause of MetaException in RetryingHMSHandler (Wechar Yu reviewed by Stamatis Zampetakis)

2023-02-02 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new ed6f5a88ab3 HIVE-26935: Expose root cause of MetaException in RetryingHMSHandler (Wechar Yu reviewed by Stamatis Zampetakis)
ed6f5a88ab3 is described below

commit ed6f5a88ab32f3933ab01b0d8e47378b3218b57e
Author: wecharyu 
AuthorDate: Thu Jan 12 01:38:12 2023 +0800

HIVE-26935: Expose root cause of MetaException in RetryingHMSHandler (Wechar Yu reviewed by Stamatis Zampetakis)

Closes #3938
---
 .../hadoop/hive/metastore/RetryingHMSHandler.java  |  7 +++--
 .../metastore/TestRetriesInRetryingHMSHandler.java | 33 ++
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/RetryingHMSHandler.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/RetryingHMSHandler.java
index d0cc5b39081..5aac50e8e30 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/RetryingHMSHandler.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/RetryingHMSHandler.java
@@ -203,11 +203,12 @@ public class RetryingHMSHandler implements InvocationHandler {
 }
   }
 
+  Throwable rootCause = ExceptionUtils.getRootCause(caughtException);
+  String errorMessage = ExceptionUtils.getMessage(caughtException) +
+  (rootCause == null ? "" : ("\nRoot cause: " + rootCause));
   if (retryCount >= retryLimit) {
 LOG.error("HMSHandler Fatal error: " + ExceptionUtils.getStackTrace(caughtException));
-MetaException me = new MetaException(caughtException.toString());
-me.initCause(caughtException);
-throw me;
+throw new MetaException(errorMessage);
   }
 
   assert (retryInterval >= 0);
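
What the new message carries can be sketched standalone with the same commons-lang3 ExceptionUtils calls; the exception chain below is a made-up stand-in, not the patch's code:

import org.apache.commons.lang3.exception.ExceptionUtils;

public class RootCauseSketch {
  public static void main(String[] args) {
    Exception root = new java.sql.SQLException("Cannot delete or update a parent row");
    Exception wrapped = new RuntimeException("Clear request failed", root);
    Throwable rootCause = ExceptionUtils.getRootCause(wrapped);
    // Same composition as the patched code above.
    String errorMessage = ExceptionUtils.getMessage(wrapped)
        + (rootCause == null ? "" : ("\nRoot cause: " + rootCause));
    System.out.println(errorMessage);
    // RuntimeException: Clear request failed
    // Root cause: java.sql.SQLException: Cannot delete or update a parent row
  }
}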
diff --git a/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestRetriesInRetryingHMSHandler.java b/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestRetriesInRetryingHMSHandler.java
index f81ce882369..771af9dfd6d 100644
--- a/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestRetriesInRetryingHMSHandler.java
+++ b/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestRetriesInRetryingHMSHandler.java
@@ -20,15 +20,21 @@ package org.apache.hadoop.hive.metastore;
 
 import java.io.IOException;
 import java.lang.reflect.InvocationTargetException;
+import java.sql.BatchUpdateException;
+import java.sql.SQLException;
+import java.sql.SQLIntegrityConstraintViolationException;
 import java.util.concurrent.TimeUnit;
 
 import javax.jdo.JDOException;
+import javax.jdo.JDOUserException;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hive.metastore.annotation.MetastoreCheckinTest;
 import org.apache.hadoop.hive.metastore.api.MetaException;
 import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
 import org.apache.hadoop.hive.metastore.conf.MetastoreConf.ConfVars;
+import org.datanucleus.exceptions.NucleusDataStoreException;
+import org.junit.Assert;
 import org.junit.BeforeClass;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -108,4 +114,31 @@ public class TestRetriesInRetryingHMSHandler {
 RetryingHMSHandler.getProxy(conf, mockBaseHandler, false);
 Mockito.verify(mockBaseHandler, Mockito.times(2)).init();
   }
+
+  @Test
+  public void testGetRootCauseInMetaException() throws MetaException {
+IHMSHandler mockBaseHandler = Mockito.mock(IHMSHandler.class);
+Mockito.when(mockBaseHandler.getConf()).thenReturn(conf);
+SQLIntegrityConstraintViolationException sqlException =
+new SQLIntegrityConstraintViolationException("Cannot delete or update a parent row");
+BatchUpdateException updateException = new BatchUpdateException(sqlException);
+NucleusDataStoreException nucleusException = new NucleusDataStoreException(
+"Clear request failed: DELETE FROM `PARTITION_PARAMS` WHERE `PART_ID`=?", updateException);
+JDOUserException jdoException = new JDOUserException(
+"One or more instances could not be deleted", nucleusException);
+// SQLIntegrityConstraintViolationException wrapped in BatchUpdateException wrapped in
+// NucleusDataStoreException wrapped in JDOUserException wrapped in MetaException wrapped in InvocationException
+MetaException me = new MetaException("Dummy exception");
+me.initCause(jdoException);
+InvocationTargetException ex = new InvocationTargetException(me);
+Mockito.doThrow(me).when(mockBaseHandler).getMS();
+
+IHMSHandler retryingHandler = RetryingHMSHandler.getProxy(conf, 

[hive] branch master updated: HIVE-26889 - Implement array_join udf to concatenate the elements of an array with a specified delimiter (#3896)(Taraka Rama Rao Lethavadla, reviewed by Sourabh Badhya, Sai Hemanth)

2023-02-02 Thread gsaihemanth
This is an automated email from the ASF dual-hosted git repository.

gsaihemanth pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new de7b63fc975 HIVE-26889 - Implement array_join udf to concatenate the elements of an array with a specified delimiter (#3896)(Taraka Rama Rao Lethavadla, reviewed by Sourabh Badhya, Sai Hemanth)
de7b63fc975 is described below

commit de7b63fc975ddaeab077bb5dc30a83528a210137
Author: tarak271 
AuthorDate: Thu Feb 2 22:43:17 2023 +0530

HIVE-26889 - Implement array_join udf to concatenate the elements of an array with a specified delimiter (#3896)(Taraka Rama Rao Lethavadla, reviewed by Sourabh Badhya, Sai Hemanth)
---
 .../hadoop/hive/ql/exec/FunctionRegistry.java  |   1 +
 .../hive/ql/udf/generic/GenericUDFArrayJoin.java   |  68 
 .../ql/udf/generic/TestGenericUDFArrayJoin.java|  74 +
 .../test/queries/clientpositive/udf_array_join.q   |  40 +++
 .../clientpositive/llap/show_functions.q.out   |   2 +
 .../clientpositive/llap/udf_array_join.q.out   | 123 +
 6 files changed, 308 insertions(+)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
index 3e633595fc5..cb5aa5b9678 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
@@ -603,6 +603,7 @@ public final class FunctionRegistry {
 system.registerGenericUDF("array_min", GenericUDFArrayMin.class);
 system.registerGenericUDF("array_max", GenericUDFArrayMax.class);
 system.registerGenericUDF("array_distinct", GenericUDFArrayDistinct.class);
+system.registerGenericUDF("array_join", GenericUDFArrayJoin.class);
 system.registerGenericUDF("array_slice", GenericUDFArraySlice.class);
 system.registerGenericUDF("deserialize", GenericUDFDeserialize.class);
 system.registerGenericUDF("sentences", GenericUDFSentences.class);
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFArrayJoin.java b/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFArrayJoin.java
new file mode 100644
index 000..a5ffef0519d
--- /dev/null
+++ b/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFArrayJoin.java
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.ql.udf.generic;
+
+import com.google.common.base.Joiner;
+import org.apache.hadoop.hive.ql.exec.Description;
+import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.objectinspector.ListObjectInspector;
+import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
+import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
+import org.apache.hadoop.io.Text;
+
+import java.util.List;
+
+/**
+ * GenericUDFArrayjoin.
+ */
+@Description(name = "array_join", value = "_FUNC_(array, delimiter, replaceNull) - concatenate the elements of an array with a specified delimiter", extended =
+"Example:\n" + "  > SELECT _FUNC_(array(1, 2, 3,4), ',') FROM src LIMIT 1;\n" + "  1,2,3,4\n"
++ "  > SELECT _FUNC_(array(1, 2, NULL, 4), ',',':') FROM src LIMIT 1;\n"
++ "  1,2,:,4") public class GenericUDFArrayJoin extends AbstractGenericUDFArrayBase {
+  private static final int SEPARATOR_IDX = 1;
+  private static final int REPLACE_NULL_IDX = 2;
+  private final Text result = new Text();
+
+  public GenericUDFArrayJoin() {
+super("ARRAY_JOIN", 2, 3, ObjectInspector.Category.PRIMITIVE);
+  }
+
+  @Override public ObjectInspector initialize(ObjectInspector[] arguments) throws UDFArgumentException {
+super.initialize(arguments);
+return PrimitiveObjectInspectorFactory.writableStringObjectInspector;
+  }
+
+  @Override public Object evaluate(DeferredObject[] arguments) throws HiveException {
+
+Object array = arguments[ARRAY_IDX].get();
+
+if (arrayOI.getListLength(array) <= 0) {
+  
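
The joining behaviour described in the @Description can be pictured with the Guava Joiner the class imports; a standalone sketch of the two documented examples (plain Java, not the UDF code path):

import com.google.common.base.Joiner;
import java.util.Arrays;

public class ArrayJoinSketch {
  public static void main(String[] args) {
    // SELECT array_join(array(1, 2, 3, 4), ',')         ->  1,2,3,4
    System.out.println(Joiner.on(",").skipNulls().join(Arrays.asList(1, 2, 3, 4)));
    // SELECT array_join(array(1, 2, NULL, 4), ',', ':') ->  1,2,:,4
    System.out.println(Joiner.on(",").useForNull(":").join(Arrays.asList(1, 2, null, 4)));
  }
}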

[hive] branch master updated (57d15cb42f2 -> 8b94142d713)

2023-02-02 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


from 57d15cb42f2 HIVE-27010: Reduce compilation time (#4005)
 add 8b94142d713 HIVE-26757: Add sfs+ofs support (#3779) (Michael Smith reviewed by Laszlo Bodor and Zoltan Haindrich)

No new revisions were added by this update.

Summary of changes:
 ql/src/java/org/apache/hadoop/hive/ql/io/SingleFileSystem.java | 3 +++
 .../main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem   | 1 +
 2 files changed, 4 insertions(+)