[hive] branch master updated: Disable flaky test

2022-06-09 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new ac3e8bae3a6 Disable flaky test
ac3e8bae3a6 is described below

commit ac3e8bae3a62e9ad08471aa13df47c9e8667e8c2
Author: Peter Vary 
AuthorDate: Thu Jun 9 18:10:19 2022 +0200

Disable flaky test
---
 .../hadoop/hive/ql/exec/tez/TestHostAffinitySplitLocationProvider.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestHostAffinitySplitLocationProvider.java b/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestHostAffinitySplitLocationProvider.java
index e40a0a6bf9a..727ff3e1a39 100644
--- a/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestHostAffinitySplitLocationProvider.java
+++ b/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestHostAffinitySplitLocationProvider.java
@@ -168,7 +168,7 @@ public class TestHostAffinitySplitLocationProvider {
 return locations;
   }
 
-
+  @org.junit.Ignore("HIVE-26308")
   @Test (timeout = 2)
   public void testConsistentHashingFallback() throws IOException {
 final int LOC_COUNT_TO = 20, SPLIT_COUNT = 500, MAX_MISS_COUNT = 4,



[hive] branch master updated: HIVE-26300: Upgraded Jackson bom version to 2.12.6.1+ to avoid CVE-2020-36518 (#3351) (Sai Hemanth Gantasala reviewed by Zoltan Haindrich and Ayush Saxena)

2022-06-09 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 568ded4b22a HIVE-26300: Upgraded Jackson bom version to 2.12.6.1+ to 
avoid CVE-2020-36518 (#3351) (Sai Hemanth Gantasala reviewed by Zoltan 
Haindrich and Ayush Saxena)
568ded4b22a is described below

commit 568ded4b22a020f4d2d3567f15b287b25a3f2b71
Author: Sai Hemanth Gantasala <68923650+saihemanth-cloud...@users.noreply.github.com>
AuthorDate: Thu Jun 9 15:46:56 2022 +0530

HIVE-26300: Upgraded Jackson bom version to 2.12.6.1+ to avoid 
CVE-2020-36518 (#3351) (Sai Hemanth Gantasala reviewed by Zoltan Haindrich and 
Ayush Saxena)
---
 pom.xml  | 2 +-
 standalone-metastore/pom.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/pom.xml b/pom.xml
index fb004de50cf..1c25c659b2a 100644
--- a/pom.xml
+++ b/pom.xml
@@ -142,7 +142,7 @@
 4.5.13
 4.4.13
 2.4.0
-2.12.0
+2.12.7
 2.3.4
 2.3.1
 0.3.2
diff --git a/standalone-metastore/pom.xml b/standalone-metastore/pom.xml
index 4935d0ef3e3..394763327a4 100644
--- a/standalone-metastore/pom.xml
+++ b/standalone-metastore/pom.xml
@@ -77,7 +77,7 @@
 19.0
 3.1.0
 2.6.1
-2.12.0
+2.12.7
 5.5.1
 4.13
 5.6.2
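The two property bumps above move Jackson to the patched 2.12.7 line. For illustration only, here is a hedged sketch of the standard Maven way to pin all jackson-* artifacts consistently by importing the Jackson BOM (`com.fasterxml.jackson:jackson-bom`); the surrounding project structure and whether Hive uses a BOM import or plain version properties are assumptions, not taken from this diff:

```xml
<!-- Sketch: importing the Jackson BOM in dependencyManagement pins every
     jackson-* module (databind, core, annotations, ...) to one consistent,
     patched version, so no transitive dependency drags in a vulnerable one. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson</groupId>
      <artifactId>jackson-bom</artifactId>
      <version>2.12.7</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

With a BOM import in place, individual jackson dependencies can omit their `<version>` elements entirely.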



[hive] 03/03: HIVE-26296: RuntimeException when executing EXPLAIN CBO JOINCOST on query with JDBC tables (Stamatis Zampetakis, reviewed by Alessandro Solimando, Krisztian Kasa)

2022-06-09 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit efae863fe010ed5c4b7de1874a336ed93b3c60b8
Author: Stamatis Zampetakis 
AuthorDate: Tue Jun 7 17:02:12 2022 +0200

HIVE-26296: RuntimeException when executing EXPLAIN CBO JOINCOST on query 
with JDBC tables (Stamatis Zampetakis, reviewed by Alessandro Solimando, 
Krisztian Kasa)

Compute selectivity for all types of joins in the same way. There is no
particular reason to throw an exception when the Join operator is not
an instance of HiveJoin.

Closes #3349
---
 data/scripts/q_test_author_book_tables.sql | 19 +
 .../calcite/stats/HiveRelMdSelectivity.java|  5 +-
 .../queries/clientpositive/cbo_jdbc_joincost.q | 34 
 .../clientpositive/llap/cbo_jdbc_joincost.q.out| 93 ++
 4 files changed, 147 insertions(+), 4 deletions(-)

diff --git a/data/scripts/q_test_author_book_tables.sql b/data/scripts/q_test_author_book_tables.sql
new file mode 100644
index 000..9b5ff99266b
--- /dev/null
+++ b/data/scripts/q_test_author_book_tables.sql
@@ -0,0 +1,19 @@
+create table author
+(
+id int,
+fname   varchar(20),
+lname   varchar(20)
+);
+insert into author values (1, 'Victor', 'Hugo');
+insert into author values (2, 'Alexandre', 'Dumas');
+
+create table book
+(
+id int,
+title  varchar(100),
+author int
+);
+insert into book
+values (1, 'Les Miserables', 1);
+insert into book
+values (2, 'The Count Of Monte Cristo', 2);
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/HiveRelMdSelectivity.java b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/HiveRelMdSelectivity.java
index 2c36d8f14e6..19bd13de9a1 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/HiveRelMdSelectivity.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/HiveRelMdSelectivity.java
@@ -149,11 +149,8 @@ public class HiveRelMdSelectivity extends RelMdSelectivity {
   if (j.isSemiJoin() || (j.getJoinType().equals(JoinRelType.ANTI))) {
 ndvEstimate = Math.min(mq.getRowCount(j.getLeft()),
 ndvEstimate);
-  } else if (j instanceof HiveJoin) {
-ndvEstimate = Math.min(mq.getRowCount(j.getLeft())
-* mq.getRowCount(j.getRight()), ndvEstimate);
   } else {
-throw new RuntimeException("Unexpected Join type: " + j.getClass().getName());
+ndvEstimate = Math.min(mq.getRowCount(j.getLeft()) * mq.getRowCount(j.getRight()), ndvEstimate);
   }
 }
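The fix keeps one formula for every join type: instead of throwing for operators that are not a HiveJoin, the fallback caps the NDV estimate by the product of the two inputs' row counts. A minimal standalone sketch of that arithmetic (class and method names here are hypothetical, not Hive API):

```java
public final class JoinNdvEstimate {

    // Cap an NDV (number-of-distinct-values) estimate by the size of the
    // Cartesian product of the join inputs, as the patched fallback does
    // for all join types.
    static double capNdv(double leftRowCount, double rightRowCount, double ndvEstimate) {
        return Math.min(leftRowCount * rightRowCount, ndvEstimate);
    }

    public static void main(String[] args) {
        // 2 x 2 input rows can produce at most 4 combinations, so an
        // estimate of 10 is capped to 4.
        System.out.println(capNdv(2.0, 2.0, 10.0));
    }
}
```

The cap is a safe upper bound: a join can never emit more distinct combinations than the Cartesian product of its inputs.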
 
diff --git a/ql/src/test/queries/clientpositive/cbo_jdbc_joincost.q b/ql/src/test/queries/clientpositive/cbo_jdbc_joincost.q
new file mode 100644
index 000..7255f3b87b0
--- /dev/null
+++ b/ql/src/test/queries/clientpositive/cbo_jdbc_joincost.q
@@ -0,0 +1,34 @@
+--!qt:database:mysql:q_test_author_book_tables.sql
+CREATE EXTERNAL TABLE author
+(
+id int,
+fname varchar(20),
+lname varchar(20)
+)
+STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
+TBLPROPERTIES (
+"hive.sql.database.type" = "MYSQL",
+"hive.sql.jdbc.driver" = "com.mysql.jdbc.Driver",
+"hive.sql.jdbc.url" = "jdbc:mysql://localhost:3306/qtestDB",
+"hive.sql.dbcp.username" = "root",
+"hive.sql.dbcp.password" = "qtestpassword",
+"hive.sql.table" = "author"
+);
+
+CREATE EXTERNAL TABLE book
+(
+id int,
+title varchar(100),
+author int
+)
+STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
+TBLPROPERTIES (
+"hive.sql.database.type" = "MYSQL",
+"hive.sql.jdbc.driver" = "com.mysql.jdbc.Driver",
+"hive.sql.jdbc.url" = "jdbc:mysql://localhost:3306/qtestDB",
+"hive.sql.dbcp.username" = "root",
+"hive.sql.dbcp.password" = "qtestpassword",
+"hive.sql.table" = "book"
+);
+
+EXPLAIN CBO JOINCOST SELECT a.lname, b.title FROM author a JOIN book b ON a.id=b.author;
diff --git a/ql/src/test/results/clientpositive/llap/cbo_jdbc_joincost.q.out b/ql/src/test/results/clientpositive/llap/cbo_jdbc_joincost.q.out
new file mode 100644
index 000..0dc3effcef3
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/cbo_jdbc_joincost.q.out
@@ -0,0 +1,93 @@
+PREHOOK: query: CREATE EXTERNAL TABLE author
+(
+id int,
+fname varchar(20),
+lname varchar(20)
+)
+STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
+TBLPROPERTIES (
+"hive.sql.database.type" = "MYSQL",
+"hive.sql.jdbc.driver" = "com.mysql.jdbc.Driver",
+"hive.sql.jdbc.url" = "jdbc:mysql://localhost:3306/qtestDB",
+"hive.sql.dbcp.username" = "root",
+"hive.sql.dbcp.password" = "qtestpassword",
+"hive.sql.table" = "author"
+)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@author
+POSTHOOK: query: CREATE EXTERNAL TABLE author
+(
+id int,
+fname varchar(20),
+

[hive] branch master updated (c55318eb586 -> efae863fe01)

2022-06-09 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


from c55318eb586 HIVE-26293: Migrate remaining exclusive DDL operations to 
EXCL_WRITE lock & bug fixes (Denys Kuzmenko, reviewed by Peter Vary)
 new d781701d268 HIVE-26278: Add unit tests for Hive#getPartitionsByNames 
using batching (Stamatis Zampetakis, reviewed by Zoltan Haindrich, Krisztian 
Kasa, Ayush Saxena)
 new 798d25c6126 HIVE-26290: Remove useless calls to 
DateTimeFormatter#withZone without assignment (Stamatis Zampetakis, reviewed by 
Ayush Saxena)
 new efae863fe01 HIVE-26296: RuntimeException when executing EXPLAIN CBO 
JOINCOST on query with JDBC tables (Stamatis Zampetakis, reviewed by Alessandro 
Solimando, Krisztian Kasa)

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 ...hor_table.sql => q_test_author_book_tables.sql} | 11 +++
 .../calcite/stats/HiveRelMdSelectivity.java|  5 +-
 .../ql/udf/generic/GenericUDFFromUnixTime.java |  2 -
 ...TestHiveMetaStoreClientApiArgumentsChecker.java | 25 ++
 .../queries/clientpositive/cbo_jdbc_joincost.q | 34 
 .../clientpositive/llap/cbo_jdbc_joincost.q.out| 93 ++
 6 files changed, 164 insertions(+), 6 deletions(-)
 copy data/scripts/{q_test_author_table.sql => q_test_author_book_tables.sql} (50%)
 create mode 100644 ql/src/test/queries/clientpositive/cbo_jdbc_joincost.q
 create mode 100644 ql/src/test/results/clientpositive/llap/cbo_jdbc_joincost.q.out



[hive] 02/03: HIVE-26290: Remove useless calls to DateTimeFormatter#withZone without assignment (Stamatis Zampetakis, reviewed by Ayush Saxena)

2022-06-09 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 798d25c61262d872d756b5c73d38172fe1293207
Author: Stamatis Zampetakis 
AuthorDate: Fri Jun 3 18:46:47 2022 +0200

HIVE-26290: Remove useless calls to DateTimeFormatter#withZone without 
assignment (Stamatis Zampetakis, reviewed by Ayush Saxena)

Closes #3342
---
 .../org/apache/hadoop/hive/ql/udf/generic/GenericUDFFromUnixTime.java   | 2 --
 1 file changed, 2 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFFromUnixTime.java b/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFFromUnixTime.java
index fb634bc7c97..21081cf7c11 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFFromUnixTime.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFFromUnixTime.java
@@ -88,7 +88,6 @@ public class GenericUDFFromUnixTime extends GenericUDF {
 if (timeZone == null) {
   timeZone = SessionState.get() == null ? new HiveConf().getLocalTimeZone() : SessionState.get().getConf().getLocalTimeZone();
-  FORMATTER.withZone(timeZone);
 }
 
 return PrimitiveObjectInspectorFactory.writableStringObjectInspector;
@@ -99,7 +98,6 @@ public class GenericUDFFromUnixTime extends GenericUDF {
 if (context != null) {
   String timeZoneStr = HiveConf.getVar(context.getJobConf(), HiveConf.ConfVars.HIVE_LOCAL_TIME_ZONE);
   timeZone = TimestampTZUtil.parseTimeZone(timeZoneStr);
-  FORMATTER.withZone(timeZone);
 }
   }
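The removed calls were dead code because `java.time.format.DateTimeFormatter` is immutable: `withZone` returns a new formatter and leaves the receiver untouched, so calling it without assigning the result has no effect. A small self-contained illustration (class and method names are hypothetical, not part of the patch):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class WithZoneDemo {

    static final DateTimeFormatter FORMATTER = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm");

    public static String format(Instant instant) {
        // Mirrors the removed lines: withZone does NOT mutate FORMATTER,
        // so this statement is a no-op and its result is silently dropped.
        FORMATTER.withZone(ZoneOffset.UTC);

        // The effective pattern is to keep the returned instance.
        DateTimeFormatter zoned = FORMATTER.withZone(ZoneOffset.UTC);
        return zoned.format(instant);
    }

    public static void main(String[] args) {
        System.out.println(format(Instant.EPOCH)); // 1970-01-01 00:00
    }
}
```

Formatting an `Instant` through the original, zone-less `FORMATTER` would throw, which is why only the assigned `zoned` instance is usable here.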
 



[hive] 01/03: HIVE-26278: Add unit tests for Hive#getPartitionsByNames using batching (Stamatis Zampetakis, reviewed by Zoltan Haindrich, Krisztian Kasa, Ayush Saxena)

2022-06-09 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit d781701d26859b78161514ac237119243f9bd1e3
Author: Stamatis Zampetakis 
AuthorDate: Tue Feb 8 16:56:56 2022 +0100

HIVE-26278: Add unit tests for Hive#getPartitionsByNames using batching 
(Stamatis Zampetakis, reviewed by Zoltan Haindrich, Krisztian Kasa, Ayush 
Saxena)

Ensure that ValidWriteIdList is set when batching is involved in
getPartitionByNames.

Closes #3335
---
 ...TestHiveMetaStoreClientApiArgumentsChecker.java | 25 ++
 1 file changed, 25 insertions(+)

diff --git a/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreClientApiArgumentsChecker.java b/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreClientApiArgumentsChecker.java
index 6aefc44c563..175b47c47d8 100644
--- a/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreClientApiArgumentsChecker.java
+++ b/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreClientApiArgumentsChecker.java
@@ -38,10 +38,13 @@ import org.apache.hadoop.hive.ql.session.SessionState;
 import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPEqualOrGreaterThan;
 import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory;
 import org.apache.thrift.TException;
+
+import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
 
 import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
@@ -80,6 +83,8 @@ public class TestHiveMetaStoreClientApiArgumentsChecker {
 hive.getConf().set(ValidTxnList.VALID_TXNS_KEY, "1:");
 hive.getConf().set(ValidWriteIdList.VALID_WRITEIDS_KEY, TABLE_NAME + ":1:");
 hive.getConf().setVar(HiveConf.ConfVars.HIVE_TXN_MANAGER, "org.apache.hadoop.hive.ql.lockmgr.TestTxnManager");
+// Pick a small number for the batch size to easily test code with multiple batches.
+hive.getConf().setIntVar(HiveConf.ConfVars.METASTORE_BATCH_RETRIEVE_MAX, 2);
 SessionState.start(hive.getConf());
 SessionState.get().initTxnMgr(hive.getConf());
 Context ctx = new Context(hive.getConf());
@@ -140,6 +145,26 @@ public class TestHiveMetaStoreClientApiArgumentsChecker {
 hive.getPartitionsByNames(t, new ArrayList<>(), true);
   }
 
+  @Test
+  public void testGetPartitionsByNamesWithSingleBatch() throws HiveException {
+hive.getPartitionsByNames(t, Arrays.asList("Greece", "Italy"), true);
+  }
+
+  @Test
+  public void testGetPartitionsByNamesWithMultipleEqualSizeBatches()
+  throws HiveException {
+List<String> names = Arrays.asList("Greece", "Italy", "France", "Spain");
+hive.getPartitionsByNames(t, names, true);
+  }
+
+  @Test
+  public void testGetPartitionsByNamesWithMultipleUnequalSizeBatches()
+  throws HiveException {
+List<String> names = Arrays.asList("Greece", "Italy", "France", "Spain", "Hungary");
+hive.getPartitionsByNames(t, names, true);
+  }
+
   @Test
   public void testGetPartitionsByExpr() throws HiveException, TException {
 List<Partition> partitions = new ArrayList<>();
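The three new tests drive `Hive#getPartitionsByNames` through one, two, and three metastore round trips by fixing `METASTORE_BATCH_RETRIEVE_MAX` at 2 in the setup. The batching itself amounts to plain list slicing, sketched standalone below (`toBatches` is a hypothetical helper for illustration, not the Hive implementation):

```java
import java.util.ArrayList;
import java.util.List;

public final class BatchingDemo {

    // Split names into consecutive batches of at most batchSize elements,
    // mirroring the effect of METASTORE_BATCH_RETRIEVE_MAX = 2 above.
    static List<List<String>> toBatches(List<String> names, int batchSize) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < names.size(); i += batchSize) {
            batches.add(names.subList(i, Math.min(i + batchSize, names.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        // Five names with batch size 2 -> three batches (2, 2, 1): the
        // "multiple unequal size batches" case exercised by the tests.
        System.out.println(toBatches(
            java.util.Arrays.asList("Greece", "Italy", "France", "Spain", "Hungary"), 2));
    }
}
```

Two names produce one batch, four names two equal batches, and five names an unequal trailing batch, matching the three test methods.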



[hive] branch master updated: HIVE-26293: Migrate remaining exclusive DDL operations to EXCL_WRITE lock & bug fixes (Denys Kuzmenko, reviewed by Peter Vary)

2022-06-09 Thread dkuzmenko
This is an automated email from the ASF dual-hosted git repository.

dkuzmenko pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new c55318eb586 HIVE-26293: Migrate remaining exclusive DDL operations to 
EXCL_WRITE lock & bug fixes (Denys Kuzmenko, reviewed by Peter Vary)
c55318eb586 is described below

commit c55318eb586e81d1589d56b2349333c4a1359459
Author: Denys Kuzmenko 
AuthorDate: Thu Jun 9 10:52:13 2022 +0200

HIVE-26293: Migrate remaining exclusive DDL operations to EXCL_WRITE lock & 
bug fixes (Denys Kuzmenko, reviewed by Peter Vary)

Closes #3103
---
 .../hadoop/hive/ql/ddl/misc/msck/MsckAnalyzer.java |   3 +-
 .../hive/ql/ddl/table/drop/DropTableAnalyzer.java  |   2 -
 .../storage/skewed/AlterTableSkewedByAnalyzer.java |   4 +
 .../drop/DropMaterializedViewAnalyzer.java |   2 -
 .../apache/hadoop/hive/ql/hooks/WriteEntity.java   | 118 +++
 .../org/apache/hadoop/hive/ql/io/AcidUtils.java|  39 ++--
 .../org/apache/hadoop/hive/ql/metadata/Hive.java   |   9 +-
 .../hadoop/hive/ql/txn/compactor/Cleaner.java  |  13 +-
 .../org/apache/hadoop/hive/ql/TestTxnCommands.java |  53 +++--
 .../apache/hadoop/hive/ql/io/TestAcidUtils.java|  22 ++
 .../ql/lockmgr/DbTxnManagerEndToEndTestBase.java   |   5 +-
 .../hadoop/hive/ql/lockmgr/TestDbTxnManager2.java  | 227 -
 .../hadoop/hive/ql/parse/TestParseUtils.java   |  32 ++-
 .../hadoop/hive/metastore/HiveMetaStoreClient.java |   4 +-
 .../hadoop/hive/metastore/AcidEventListener.java   | 103 +-
 .../apache/hadoop/hive/metastore/HMSHandler.java   |  31 ++-
 .../hadoop/hive/metastore/txn/TxnHandler.java  |  18 +-
 .../apache/hadoop/hive/metastore/txn/TxnStore.java |   4 +
 .../hive/metastore/txn/ThrowingTxnHandler.java |   9 +
 19 files changed, 531 insertions(+), 167 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/msck/MsckAnalyzer.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/msck/MsckAnalyzer.java
index e070cec99bf..1ed631321cf 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/msck/MsckAnalyzer.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/msck/MsckAnalyzer.java
@@ -87,7 +87,8 @@ public class MsckAnalyzer extends AbstractFunctionAnalyzer {
 }
 
 if (repair && AcidUtils.isTransactionalTable(table)) {
-  outputs.add(new WriteEntity(table, WriteType.DDL_EXCLUSIVE));
+  outputs.add(new WriteEntity(table, AcidUtils.isLocklessReadsEnabled(table, conf) ?
+      WriteType.DDL_EXCL_WRITE : WriteType.DDL_EXCLUSIVE));
 } else {
   outputs.add(new WriteEntity(table, WriteEntity.WriteType.DDL_SHARED));
 }
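The MSCK change above downgrades the exclusive lock to EXCL_WRITE when lockless reads are enabled for a transactional table. The selection reduces to a small decision function, sketched here with a simplified stand-in enum (the real types are Hive's `WriteEntity.WriteType` and `AcidUtils`; this helper is hypothetical):

```java
public final class LockChoiceDemo {

    enum WriteType { DDL_EXCLUSIVE, DDL_EXCL_WRITE, DDL_SHARED }

    // Mirrors the pattern in the patch: transactional tables take the weaker
    // EXCL_WRITE lock when lockless reads are enabled, the full exclusive
    // lock otherwise; non-transactional tables only need a shared DDL lock.
    static WriteType lockFor(boolean transactional, boolean locklessReads) {
        if (!transactional) {
            return WriteType.DDL_SHARED;
        }
        return locklessReads ? WriteType.DDL_EXCL_WRITE : WriteType.DDL_EXCLUSIVE;
    }

    public static void main(String[] args) {
        System.out.println(lockFor(true, true)); // DDL_EXCL_WRITE
    }
}
```

EXCL_WRITE blocks concurrent writers but, unlike DDL_EXCLUSIVE, lets readers proceed, which is the point of the migration.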
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/drop/DropTableAnalyzer.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/drop/DropTableAnalyzer.java
index 1a3a77f436f..b36ad17234f 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/drop/DropTableAnalyzer.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/drop/DropTableAnalyzer.java
@@ -35,8 +35,6 @@ import org.apache.hadoop.hive.ql.parse.HiveParser;
 import org.apache.hadoop.hive.ql.parse.ReplicationSpec;
 import org.apache.hadoop.hive.ql.parse.SemanticException;
 
-import static org.apache.hadoop.hive.common.AcidConstants.SOFT_DELETE_TABLE;
-
 /**
  * Analyzer for table dropping commands.
  */
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/storage/skewed/AlterTableSkewedByAnalyzer.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/storage/skewed/AlterTableSkewedByAnalyzer.java
index e02d65d1e33..369e44117f8 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/storage/skewed/AlterTableSkewedByAnalyzer.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/storage/skewed/AlterTableSkewedByAnalyzer.java
@@ -34,6 +34,7 @@ import org.apache.hadoop.hive.ql.ddl.table.AlterTableType;
 import org.apache.hadoop.hive.ql.exec.TaskFactory;
 import org.apache.hadoop.hive.ql.hooks.ReadEntity;
 import org.apache.hadoop.hive.ql.hooks.WriteEntity;
+import org.apache.hadoop.hive.ql.io.AcidUtils;
 import org.apache.hadoop.hive.ql.metadata.Table;
 import org.apache.hadoop.hive.ql.parse.ASTNode;
 import org.apache.hadoop.hive.ql.parse.HiveParser;
@@ -57,6 +58,9 @@ public class AlterTableSkewedByAnalyzer extends AbstractAlterTableAnalyzer {
 Table table = getTable(tableName);
 validateAlterTableType(table, AlterTableType.SKEWED_BY, false);
 
+if (AcidUtils.isLocklessReadsEnabled(table, conf)) {
+  throw new UnsupportedOperationException(command.getText());
+}
 inputs.add(new ReadEntity(table));
 outputs.add(new WriteEntity(table, WriteEntity.WriteType.DDL_EXCLUSIVE));
 
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/ddl/view/materialized/drop/DropMaterializedViewAnalyzer.java
 

[hive] branch master updated: HIVE-26285: Overwrite database metadata on original source in optimised failover. (Haymant Mangla reviewed by Denys Kuzmenko and Peter Vary) (#3346)

2022-06-09 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new bd8e4052066 HIVE-26285: Overwrite database metadata on original source 
in optimised failover. (Haymant Mangla reviewed by Denys Kuzmenko and Peter 
Vary) (#3346)
bd8e4052066 is described below

commit bd8e4052066e0ea9294defd6d4e87094c667b846
Author: Haymant Mangla <79496857+hmangl...@users.noreply.github.com>
AuthorDate: Thu Jun 9 13:11:05 2022 +0530

HIVE-26285: Overwrite database metadata on original source in optimised 
failover. (Haymant Mangla reviewed by Denys Kuzmenko and Peter Vary) (#3346)
---
 .../parse/TestReplicationOptimisedBootstrap.java   | 13 +-
 .../hadoop/hive/ql/exec/repl/ReplDumpTask.java |  5 +--
 .../hadoop/hive/ql/exec/repl/ReplLoadTask.java | 50 +++---
 3 files changed, 57 insertions(+), 11 deletions(-)

diff --git a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
index 5ccd74f3708..5bd6ac3d362 100644
--- a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
+++ b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
@@ -753,11 +753,13 @@ public class TestReplicationOptimisedBootstrap extends BaseReplicationAcrossInst
 
 // Do a reverse second dump, this should do a bootstrap dump for the tables in the table_diff and incremental for rest.
+
+assertTrue("value1".equals(primary.getDatabase(primaryDbName).getParameters().get("key1")));
 WarehouseInstance.Tuple tuple = replica.dump(replicatedDbName, withClause);
 
 String hiveDumpDir = tuple.dumpLocation + File.separator + ReplUtils.REPL_HIVE_BASE_DIR;
 // _bootstrap directory should be created as bootstrap enabled on external tables.
-Path dumpPath1 = new Path(hiveDumpDir, INC_BOOTSTRAP_ROOT_DIR_NAME +"/metadata/" + replicatedDbName);
+Path dumpPath1 = new Path(hiveDumpDir, INC_BOOTSTRAP_ROOT_DIR_NAME +"/" + EximUtil.METADATA_PATH_NAME +"/" + replicatedDbName);
 FileStatus[] listStatus = dumpPath1.getFileSystem(conf).listStatus(dumpPath1);
 ArrayList<String> tablesBootstrapped = new ArrayList<>();
 for (FileStatus file : listStatus) {
@@ -769,6 +771,8 @@ public class TestReplicationOptimisedBootstrap extends BaseReplicationAcrossInst
 // Do a reverse load, this should do a bootstrap load for the tables in table_diff and incremental for the rest.
 primary.load(primaryDbName, replicatedDbName, withClause);
 
+assertFalse("value1".equals(primary.getDatabase(primaryDbName).getParameters().get("key1")));
+
 primary.run("use " + primaryDbName)
 .run("select id from t1")
 .verifyResults(new String[] { "1", "2", "3", "4", "101", "210", "321" })
@@ -898,6 +902,8 @@ public class TestReplicationOptimisedBootstrap extends BaseReplicationAcrossInst
 
 // Check the properties on the new target database.
 assertTrue(targetParams.containsKey(TARGET_OF_REPLICATION));
+assertTrue(targetParams.containsKey(CURR_STATE_ID_TARGET.toString()));
+assertTrue(targetParams.containsKey(CURR_STATE_ID_SOURCE.toString()));
 assertFalse(targetParams.containsKey(SOURCE_OF_REPLICATION));
 
 // Check the properties on the new source database.
@@ -1096,7 +1102,10 @@ public class TestReplicationOptimisedBootstrap extends BaseReplicationAcrossInst
 // Do some modifications on original source cluster. The diff becomes(tnew_managed, t1, t2, t3)
 primary.run("use " + primaryDbName).run("create table tnew_managed (id int)")
 .run("insert into table t1 values (25)").run("insert into table tnew_managed values (110)")
-.run("insert into table t2 partition(country='france') values ('lyon')").run("drop table t3");
+.run("insert into table t2 partition(country='france') values ('lyon')").run("drop table t3")
+.run("alter database "+ primaryDbName + " set DBPROPERTIES ('key1'='value1')");
+
+assertTrue("value1".equals(primary.getDatabase(primaryDbName).getParameters().get("key1")));
 
 // Do some modifications on the target cluster. (t1, t2, t3: bootstrap & t4, t5: incremental)
 replica.run("use " + replicatedDbName).run("insert into table t1 values (101)")
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplDumpTask.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplDumpTask.java
index bc141943131..b76354eb459 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplDumpTask.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplDumpTask.java
@@ -245,9 +245,6 @@ public class ReplDumpTask extends Task implements Serializable {
   String 

[hive] branch HIVE-21160-master-rewrite-update-multiinsert-refac created (now 6b1a876c3df)

2022-06-09 Thread krisztiankasa
This is an automated email from the ASF dual-hosted git repository.

krisztiankasa pushed a change to branch 
HIVE-21160-master-rewrite-update-multiinsert-refac
in repository https://gitbox.apache.org/repos/asf/hive.git


  at 6b1a876c3df HIVE-26268: Upgrade Snappy to 1.1.8.4 (#3326) (Sylwester 
Lachiewicz reviewed by Zoltan Haindrich)

No new revisions were added by this update.