[hive] branch branch-2.3 updated: HIVE-26882: Allow transactional check of Table parameter before altering the Table (#3888) (Peter Vary reviewed by Prasanth Jayachandran and Szehon Ho) (#3947)

2023-01-15 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/branch-2.3 by this push:
 new eaea1f0e1e8 HIVE-26882: Allow transactional check of Table parameter 
before altering the Table (#3888) (Peter Vary reviewed by Prasanth Jayachandran 
and Szehon Ho) (#3947)
eaea1f0e1e8 is described below

commit eaea1f0e1e84f9f66ba9f08cbe5d3c491d5f14c7
Author: pvary 
AuthorDate: Sun Jan 15 10:32:06 2023 +0100

HIVE-26882: Allow transactional check of Table parameter before altering 
the Table (#3888) (Peter Vary reviewed by Prasanth Jayachandran and Szehon Ho) 
(#3947)

* HIVE-17981 Create a set of builders for Thrift classes.  This closes 
#274.  (Alan Gates, reviewed by Peter Vary)

* HIVE-18355: Add builder for metastore Thrift classes missed in the first 
pass - FunctionBuilder (Peter Vary, reviewed by Alan Gates)

* HIVE-18372: Create testing infra to test different HMS instances (Peter 
Vary, reviewed by Marta Kuczora, Vihang Karajgaonkar and Adam Szita)

* HIVE-26882: Allow transactional check of Table parameter before altering 
the Table (#3888) (Peter Vary reviewed by Prasanth Jayachandran and Szehon Ho)

Co-authored-by: Alan Gates 
Co-authored-by: Peter Vary 
Co-authored-by: Peter Vary 
---
 .../hcatalog/listener/DummyRawStoreFailEvent.java  |   5 +
 metastore/if/hive_metastore.thrift |   3 +
 .../thrift/gen-cpp/hive_metastore_constants.cpp|   4 +
 .../gen/thrift/gen-cpp/hive_metastore_constants.h  |   2 +
 .../metastore/api/hive_metastoreConstants.java |   4 +
 .../src/gen/thrift/gen-php/metastore/Types.php |  10 +
 .../gen/thrift/gen-py/hive_metastore/constants.py  |   2 +
 .../gen/thrift/gen-rb/hive_metastore_constants.rb  |   4 +
 .../metastore/DefaultPartitionExpressionProxy.java |  56 ++
 .../hadoop/hive/metastore/HiveAlterHandler.java|  20 +-
 .../apache/hadoop/hive/metastore/ObjectStore.java  |  28 +-
 .../org/apache/hadoop/hive/metastore/RawStore.java |  14 +-
 .../client/builder/ConstraintBuilder.java  |  98 
 .../metastore/client/builder/DatabaseBuilder.java  |  88 +++
 .../metastore/client/builder/FunctionBuilder.java  | 115 
 .../GrantRevokePrivilegeRequestBuilder.java|  63 +++
 .../client/builder/HiveObjectPrivilegeBuilder.java |  63 +++
 .../client/builder/HiveObjectRefBuilder.java   |  64 +++
 .../metastore/client/builder/IndexBuilder.java | 104 
 .../metastore/client/builder/PartitionBuilder.java | 102 
 .../client/builder/PrivilegeGrantInfoBuilder.java  |  83 +++
 .../hive/metastore/client/builder/RoleBuilder.java |  55 ++
 .../client/builder/SQLForeignKeyBuilder.java   |  83 +++
 .../client/builder/SQLPrimaryKeyBuilder.java   |  42 ++
 .../client/builder/StorageDescriptorBuilder.java   | 210 +++
 .../metastore/client/builder/TableBuilder.java | 155 +
 .../hadoop/hive/metastore/hbase/HBaseStore.java|   6 +
 .../hadoop/hive/metastore/utils/SecurityUtils.java | 313 +++
 .../metastore/DummyRawStoreControlledCommit.java   |   5 +
 .../metastore/DummyRawStoreForJdoConnection.java   |   6 +
 .../metastore/client/MetaStoreFactoryForTests.java | 107 
 .../hive/metastore/client/TestDatabases.java   | 622 +
 .../client/TestTablesCreateDropAlterTruncate.java  | 232 
 .../minihms/AbstractMetaStoreService.java  | 153 +
 .../minihms/ClusterMetaStoreForTests.java  |  33 ++
 .../minihms/EmbeddedMetaStoreForTests.java |  34 ++
 .../hadoop/hive/metastore/minihms/MiniHMS.java |  69 +++
 .../metastore/minihms/RemoteMetaStoreForTests.java |  41 ++
 38 files changed, 3090 insertions(+), 8 deletions(-)
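The `expected_parameter_key` / `expected_parameter_value` constants added here let a client ask the metastore to verify, inside the alter transaction, that a named table parameter still holds an expected value before the ALTER is applied — a compare-and-swap style guard (useful, for example, when several writers race to update a metadata-pointer parameter). A minimal standalone model of that check, under the assumption that the expectation travels as two entries of the request's environment context; the class and method names below are illustrative, not Hive's API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the HIVE-26882 check: before applying an ALTER TABLE,
// verify that the table parameter named by EXPECTED_PARAMETER_KEY still holds
// EXPECTED_PARAMETER_VALUE. Names are illustrative, not Hive's real classes.
public class ExpectedParameterCheck {
  static final String EXPECTED_PARAMETER_KEY = "expected_parameter_key";
  static final String EXPECTED_PARAMETER_VALUE = "expected_parameter_value";

  /** Returns true if the alter may proceed under the expected-parameter contract. */
  static boolean mayAlter(Map<String, String> tableParams, Map<String, String> envContext) {
    String key = envContext.get(EXPECTED_PARAMETER_KEY);
    if (key == null) {
      return true; // no expectation supplied; behave as before
    }
    String expected = envContext.get(EXPECTED_PARAMETER_VALUE);
    return expected != null && expected.equals(tableParams.get(key));
  }

  public static void main(String[] args) {
    Map<String, String> params = new HashMap<>();
    params.put("metadata_location", "s3://bucket/metadata/v2.json");

    Map<String, String> ctx = new HashMap<>();
    ctx.put(EXPECTED_PARAMETER_KEY, "metadata_location");
    ctx.put(EXPECTED_PARAMETER_VALUE, "s3://bucket/metadata/v2.json");
    System.out.println(mayAlter(params, ctx)); // true: pointer unchanged, alter proceeds

    ctx.put(EXPECTED_PARAMETER_VALUE, "s3://bucket/metadata/v1.json");
    System.out.println(mayAlter(params, ctx)); // false: stale expectation, alter must fail
  }
}
```

The value of doing this inside the metastore transaction (note the new `openTransaction(String isolationLevel)` override in the diff below) is that the check and the update are atomic; a client-side read-then-alter would leave a race window.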

diff --git 
a/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java
 
b/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java
index 9871083ac74..a5cc94e846c 100644
--- 
a/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java
+++ 
b/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java
@@ -119,6 +119,11 @@ public class DummyRawStoreFailEvent implements RawStore, Configurable {
     return objectStore.openTransaction();
   }
 
+  @Override
+  public boolean openTransaction(String isolationLevel) {
+    return objectStore.openTransaction(isolationLevel);
+  }
+
   @Override
   public void rollbackTransaction() {
     objectStore.rollbackTransaction();
diff --git a/metastore/if/hive_metastore.thrift 
b/metastore/if/hive_metastore.thrift
index 9df27319ffa..161fd8c5261 100755
--- a/metastore/if/hive_metastore.thrift
+++ b/metastore/if/hive_metastore.thrift
@@ -1548,3 +1548,6 @@ const string TABLE_NO_AUTO_COMPACT = "no_auto_compaction",
 co

[hive] branch branch-2 updated: HIVE-26882: Allow transactional check of Table parameter before altering the Table (#3888) (Peter Vary reviewed by Prasanth Jayachandran and Szehon Ho) (#3946)

2023-01-15 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 659aa5d94ff HIVE-26882: Allow transactional check of Table parameter 
before altering the Table (#3888) (Peter Vary reviewed by Prasanth Jayachandran 
and Szehon Ho) (#3946)
659aa5d94ff is described below

commit 659aa5d94ffc15696c4729694ba68a77a58c2074
Author: pvary 
AuthorDate: Sun Jan 15 10:31:37 2023 +0100

HIVE-26882: Allow transactional check of Table parameter before altering 
the Table (#3888) (Peter Vary reviewed by Prasanth Jayachandran and Szehon Ho) 
(#3946)

* HIVE-17981 Create a set of builders for Thrift classes.  This closes 
#274.  (Alan Gates, reviewed by Peter Vary)

* HIVE-18355: Add builder for metastore Thrift classes missed in the first 
pass - FunctionBuilder (Peter Vary, reviewed by Alan Gates)

* HIVE-18372: Create testing infra to test different HMS instances (Peter 
Vary, reviewed by Marta Kuczora, Vihang Karajgaonkar and Adam Szita)

* HIVE-26882: Allow transactional check of Table parameter before altering 
the Table (#3888) (Peter Vary reviewed by Prasanth Jayachandran and Szehon Ho)

Co-authored-by: Alan Gates 
Co-authored-by: Peter Vary 
Co-authored-by: Peter Vary 
---
 metastore/if/hive_metastore.thrift |   3 +
 .../thrift/gen-cpp/hive_metastore_constants.cpp|   4 +
 .../gen/thrift/gen-cpp/hive_metastore_constants.h  |   2 +
 .../metastore/api/hive_metastoreConstants.java |   4 +
 .../src/gen/thrift/gen-php/metastore/Types.php |  10 +
 .../gen/thrift/gen-py/hive_metastore/constants.py  |   2 +
 .../gen/thrift/gen-rb/hive_metastore_constants.rb  |   4 +
 .../metastore/DefaultPartitionExpressionProxy.java |  56 ++
 .../hadoop/hive/metastore/HiveAlterHandler.java|  19 +-
 .../apache/hadoop/hive/metastore/ObjectStore.java  |  28 +-
 .../org/apache/hadoop/hive/metastore/RawStore.java |  16 +-
 .../client/builder/ConstraintBuilder.java  |  98 
 .../metastore/client/builder/DatabaseBuilder.java  |  88 +++
 .../metastore/client/builder/FunctionBuilder.java  | 115 
 .../GrantRevokePrivilegeRequestBuilder.java|  63 +++
 .../client/builder/HiveObjectPrivilegeBuilder.java |  63 +++
 .../client/builder/HiveObjectRefBuilder.java   |  63 +++
 .../metastore/client/builder/IndexBuilder.java | 104 
 .../metastore/client/builder/PartitionBuilder.java | 102 
 .../client/builder/PrivilegeGrantInfoBuilder.java  |  83 +++
 .../hive/metastore/client/builder/RoleBuilder.java |  55 ++
 .../client/builder/SQLForeignKeyBuilder.java   |  83 +++
 .../client/builder/SQLPrimaryKeyBuilder.java   |  42 ++
 .../client/builder/StorageDescriptorBuilder.java   | 210 +++
 .../metastore/client/builder/TableBuilder.java | 155 ++
 .../hadoop/hive/metastore/utils/SecurityUtils.java | 313 +++
 .../metastore/client/MetaStoreFactoryForTests.java | 107 
 .../hive/metastore/client/TestDatabases.java   | 617 +
 .../client/TestTablesCreateDropAlterTruncate.java  | 228 
 .../minihms/AbstractMetaStoreService.java  | 153 +
 .../minihms/ClusterMetaStoreForTests.java  |  33 ++
 .../minihms/EmbeddedMetaStoreForTests.java |  34 ++
 .../hadoop/hive/metastore/minihms/MiniHMS.java |  69 +++
 .../metastore/minihms/RemoteMetaStoreForTests.java |  41 ++
 34 files changed, 3059 insertions(+), 8 deletions(-)

diff --git a/metastore/if/hive_metastore.thrift 
b/metastore/if/hive_metastore.thrift
index ba117f5ccd5..b4f3166db29 100755
--- a/metastore/if/hive_metastore.thrift
+++ b/metastore/if/hive_metastore.thrift
@@ -1550,3 +1550,6 @@ const string TABLE_NO_AUTO_COMPACT = "no_auto_compaction",
 const string TABLE_TRANSACTIONAL_PROPERTIES = "transactional_properties",
 
 
+// Keys for alter table environment context parameters
+const string EXPECTED_PARAMETER_KEY = "expected_parameter_key",
+const string EXPECTED_PARAMETER_VALUE = "expected_parameter_value",
\ No newline at end of file
diff --git a/metastore/src/gen/thrift/gen-cpp/hive_metastore_constants.cpp 
b/metastore/src/gen/thrift/gen-cpp/hive_metastore_constants.cpp
index 1cbd176597b..a24bfd86f8d 100644
--- a/metastore/src/gen/thrift/gen-cpp/hive_metastore_constants.cpp
+++ b/metastore/src/gen/thrift/gen-cpp/hive_metastore_constants.cpp
@@ -59,6 +59,10 @@ hive_metastoreConstants::hive_metastoreConstants() {
 
   TABLE_TRANSACTIONAL_PROPERTIES = "transactional_properties";
 
+  EXPECTED_PARAMETER_KEY = "expected_parameter_key";
+
+  EXPECTED_PARAMETER_VALUE = "expected_parameter_value";
+
 }
 
 }}} // namespace
diff --git a/metastore/src/gen/thrift/gen-cpp/hive_metastore_constants.h 
b/metastore/src/gen/thrift/gen-cpp/hive_metastore_constants.h
index 

[hive] branch branch-3.1 updated: HIVE-26882: Allow transactional check of Table parameter before altering the Table (#3888) (Peter Vary reviewed by Prasanth Jayachandran and Szehon Ho) (#3944)

2023-01-15 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new eed1f99ac71 HIVE-26882: Allow transactional check of Table parameter 
before altering the Table (#3888) (Peter Vary reviewed by Prasanth Jayachandran 
and Szehon Ho) (#3944)
eed1f99ac71 is described below

commit eed1f99ac71d89b01c67f81c3a002996c44dddc1
Author: pvary 
AuthorDate: Sun Jan 15 09:14:04 2023 +0100

HIVE-26882: Allow transactional check of Table parameter before altering 
the Table (#3888) (Peter Vary reviewed by Prasanth Jayachandran and Szehon Ho) 
(#3944)
---
 .../thrift/gen-cpp/hive_metastore_constants.cpp|   4 +
 .../gen/thrift/gen-cpp/hive_metastore_constants.h  |   2 +
 .../metastore/api/hive_metastoreConstants.java |   4 +
 .../src/gen/thrift/gen-php/metastore/Types.php |  10 ++
 .../gen/thrift/gen-py/hive_metastore/constants.py  |   2 +
 .../gen/thrift/gen-rb/hive_metastore_constants.rb  |   4 +
 .../hadoop/hive/metastore/HiveAlterHandler.java|  19 +++-
 .../apache/hadoop/hive/metastore/ObjectStore.java  |  28 +-
 .../org/apache/hadoop/hive/metastore/RawStore.java |  16 +++-
 .../src/main/thrift/hive_metastore.thrift  |   3 +
 .../client/TestTablesCreateDropAlterTruncate.java  | 102 +
 11 files changed, 186 insertions(+), 8 deletions(-)

diff --git 
a/standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_constants.cpp 
b/standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_constants.cpp
index 1c1b3ce5eeb..875e272eb4a 100644
--- a/standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_constants.cpp
+++ b/standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_constants.cpp
@@ -61,6 +61,10 @@ hive_metastoreConstants::hive_metastoreConstants() {
 
   TABLE_BUCKETING_VERSION = "bucketing_version";
 
+  EXPECTED_PARAMETER_KEY = "expected_parameter_key";
+
+  EXPECTED_PARAMETER_VALUE = "expected_parameter_value";
+
 }
 
 }}} // namespace
diff --git 
a/standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_constants.h 
b/standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_constants.h
index 1f062530e4d..4cffc9491fb 100644
--- a/standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_constants.h
+++ b/standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_constants.h
@@ -40,6 +40,8 @@ class hive_metastoreConstants {
   std::string TABLE_NO_AUTO_COMPACT;
   std::string TABLE_TRANSACTIONAL_PROPERTIES;
   std::string TABLE_BUCKETING_VERSION;
+  std::string EXPECTED_PARAMETER_KEY;
+  std::string EXPECTED_PARAMETER_VALUE;
 };
 
 extern const hive_metastoreConstants g_hive_metastore_constants;
diff --git 
a/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/hive_metastoreConstants.java
 
b/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/hive_metastoreConstants.java
index 2ee81df1dc1..76be19a7d0e 100644
--- 
a/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/hive_metastoreConstants.java
+++ 
b/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/hive_metastoreConstants.java
@@ -86,4 +86,8 @@ import org.slf4j.LoggerFactory;
 
   public static final String TABLE_BUCKETING_VERSION = "bucketing_version";
 
+  public static final String EXPECTED_PARAMETER_KEY = "expected_parameter_key";
+
+  public static final String EXPECTED_PARAMETER_VALUE = "expected_parameter_value";
+
 }
diff --git a/standalone-metastore/src/gen/thrift/gen-php/metastore/Types.php 
b/standalone-metastore/src/gen/thrift/gen-php/metastore/Types.php
index 84f7e3320c8..8c65e346a27 100644
--- a/standalone-metastore/src/gen/thrift/gen-php/metastore/Types.php
+++ b/standalone-metastore/src/gen/thrift/gen-php/metastore/Types.php
@@ -31377,6 +31377,8 @@ final class Constant extends \Thrift\Type\TConstant {
   static protected $TABLE_NO_AUTO_COMPACT;
   static protected $TABLE_TRANSACTIONAL_PROPERTIES;
   static protected $TABLE_BUCKETING_VERSION;
+  static protected $EXPECTED_PARAMETER_KEY;
+  static protected $EXPECTED_PARAMETER_VALUE;
 
   static protected function init_DDL_TIME() {
 return "transient_lastDdlTime";
@@ -31477,6 +31479,14 @@ final class Constant extends \Thrift\Type\TConstant {
   static protected function init_TABLE_BUCKETING_VERSION() {
 return "bucketing_version";
   }
+
+  static protected function init_EXPECTED_PARAMETER_KEY() {
+    return "expected_parameter_key";
+  }
+
+  static protected function init_EXPECTED_PARAMETER_VALUE() {
+    return "expected_parameter_value";
+  }
 }
 
 
diff --git 
a/standalone-metastore/src/gen/thrift/gen-py/hive_metastore/constants.py 
b/standalone-metastore/src/gen/thrift/gen-py/hive_metastore

[hive] branch branch-3 updated (a25392b2857 -> 333e51e99ab)

2023-01-15 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a change to branch branch-3
in repository https://gitbox.apache.org/repos/asf/hive.git


from a25392b2857 HIVE-26840 : Branch 3 netty upgrade to 4.1.69.Final. 
(#3859) - branch-3 netty exclusions, 4.1.69.Final upgrade was already present 
(Aman Raj reviewed by Laszlo Bodor, Chris Nauroth)
 add 333e51e99ab HIVE-26882: Allow transactional check of Table parameter 
before altering the Table (#3888) (Peter Vary reviewed by Prasanth Jayachandran 
and Szehon Ho) (#3943)

No new revisions were added by this update.

Summary of changes:
 .../thrift/gen-cpp/hive_metastore_constants.cpp|   4 +
 .../gen/thrift/gen-cpp/hive_metastore_constants.h  |   2 +
 .../metastore/api/hive_metastoreConstants.java |   4 +
 .../src/gen/thrift/gen-php/metastore/Types.php |  10 ++
 .../gen/thrift/gen-py/hive_metastore/constants.py  |   2 +
 .../gen/thrift/gen-rb/hive_metastore_constants.rb  |   4 +
 .../hadoop/hive/metastore/HiveAlterHandler.java|  19 +++-
 .../apache/hadoop/hive/metastore/ObjectStore.java  |  28 +-
 .../org/apache/hadoop/hive/metastore/RawStore.java |  16 +++-
 .../src/main/thrift/hive_metastore.thrift  |   3 +
 .../client/TestTablesCreateDropAlterTruncate.java  | 102 +
 11 files changed, 186 insertions(+), 8 deletions(-)



[hive] branch master updated: HIVE-26882: Allow transactional check of Table parameter before altering the Table (#3888) (Peter Vary reviewed by Prasanth Jayachandran and Szehon Ho)

2023-01-08 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 7a5bc54b614 HIVE-26882: Allow transactional check of Table parameter 
before altering the Table (#3888) (Peter Vary reviewed by Prasanth Jayachandran 
and Szehon Ho)
7a5bc54b614 is described below

commit 7a5bc54b614f5a14bcf99e63d544db84d2463253
Author: pvary 
AuthorDate: Sun Jan 8 23:02:27 2023 +0100

HIVE-26882: Allow transactional check of Table parameter before altering 
the Table (#3888) (Peter Vary reviewed by Prasanth Jayachandran and Szehon Ho)
---
 .../thrift/gen-cpp/hive_metastore_constants.cpp|   4 +
 .../gen/thrift/gen-cpp/hive_metastore_constants.h  |   2 +
 .../gen/thrift/gen-cpp/hive_metastore_types.cpp|  44 +
 .../src/gen/thrift/gen-cpp/hive_metastore_types.h  |  22 ++-
 .../hive/metastore/api/AlterTableRequest.java  | 220 -
 .../metastore/api/hive_metastoreConstants.java |   7 +-
 .../thrift/gen-php/metastore/AlterTableRequest.php |  48 +
 .../src/gen/thrift/gen-php/metastore/Constant.php  |  12 ++
 .../gen/thrift/gen-py/hive_metastore/constants.py  |   2 +
 .../src/gen/thrift/gen-py/hive_metastore/ttypes.py |  26 ++-
 .../gen/thrift/gen-rb/hive_metastore_constants.rb  |   4 +
 .../src/gen/thrift/gen-rb/hive_metastore_types.rb  |   6 +-
 .../src/main/thrift/hive_metastore.thrift  |   8 +-
 .../apache/hadoop/hive/metastore/HMSHandler.java   |  20 +-
 .../hadoop/hive/metastore/HiveAlterHandler.java|  19 +-
 .../apache/hadoop/hive/metastore/ObjectStore.java  |  28 ++-
 .../org/apache/hadoop/hive/metastore/RawStore.java |  16 +-
 .../client/TestTablesCreateDropAlterTruncate.java  | 103 ++
 18 files changed, 566 insertions(+), 25 deletions(-)

diff --git 
a/standalone-metastore/metastore-common/src/gen/thrift/gen-cpp/hive_metastore_constants.cpp
 
b/standalone-metastore/metastore-common/src/gen/thrift/gen-cpp/hive_metastore_constants.cpp
index 16a9bf7f3c6..4dbdcc51047 100644
--- 
a/standalone-metastore/metastore-common/src/gen/thrift/gen-cpp/hive_metastore_constants.cpp
+++ 
b/standalone-metastore/metastore-common/src/gen/thrift/gen-cpp/hive_metastore_constants.cpp
@@ -91,6 +91,10 @@ hive_metastoreConstants::hive_metastoreConstants() {
 
   WRITE_ID = "writeId";
 
+  EXPECTED_PARAMETER_KEY = "expected_parameter_key";
+
+  EXPECTED_PARAMETER_VALUE = "expected_parameter_value";
+
 }
 
 }}} // namespace
diff --git 
a/standalone-metastore/metastore-common/src/gen/thrift/gen-cpp/hive_metastore_constants.h
 
b/standalone-metastore/metastore-common/src/gen/thrift/gen-cpp/hive_metastore_constants.h
index 2097a8e7f00..df7dd187fcf 100644
--- 
a/standalone-metastore/metastore-common/src/gen/thrift/gen-cpp/hive_metastore_constants.h
+++ 
b/standalone-metastore/metastore-common/src/gen/thrift/gen-cpp/hive_metastore_constants.h
@@ -55,6 +55,8 @@ class hive_metastoreConstants {
   std::string DEFAULT_TABLE_TYPE;
   std::string TXN_ID;
   std::string WRITE_ID;
+  std::string EXPECTED_PARAMETER_KEY;
+  std::string EXPECTED_PARAMETER_VALUE;
 };
 
 extern const hive_metastoreConstants g_hive_metastore_constants;
diff --git 
a/standalone-metastore/metastore-common/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
 
b/standalone-metastore/metastore-common/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
index 77fc3ec076d..a482833ebec 100644
--- 
a/standalone-metastore/metastore-common/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
+++ 
b/standalone-metastore/metastore-common/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
@@ -45654,6 +45654,16 @@ void AlterTableRequest::__set_processorIdentifier(const std::string& val) {
   this->processorIdentifier = val;
 __isset.processorIdentifier = true;
 }
+
+void AlterTableRequest::__set_expectedParameterKey(const std::string& val) {
+  this->expectedParameterKey = val;
+__isset.expectedParameterKey = true;
+}
+
+void AlterTableRequest::__set_expectedParameterValue(const std::string& val) {
+  this->expectedParameterValue = val;
+__isset.expectedParameterValue = true;
+}
 std::ostream& operator<<(std::ostream& out, const AlterTableRequest& obj)
 {
   obj.printTo(out);
@@ -45769,6 +45779,22 @@ uint32_t AlterTableRequest::read(::apache::thrift::protocol::TProtocol* iprot) {
           xfer += iprot->skip(ftype);
         }
         break;
+      case 10:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->expectedParameterKey);
+          this->__isset.expectedParameterKey = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 11:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->expectedParameterValue);
+          thi

[hive] branch master updated: HIVE-26355: Column compare should be case insensitive for name (Wechar Yu reviewed by Peter Vary) (#3406)

2022-06-30 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new c4dc616a1c HIVE-26355: Column compare should be case insensitive for 
name (Wechar Yu reviewed by Peter Vary) (#3406)
c4dc616a1c is described below

commit c4dc616a1c86d4564a385d8c988138942b7853a9
Author: Wechar Yu 
AuthorDate: Fri Jul 1 13:28:02 2022 +0800

HIVE-26355: Column compare should be case insensitive for name (Wechar Yu 
reviewed by Peter Vary) (#3406)
---
 .../hive/metastore/utils/MetaStoreServerUtils.java | 31 +++---
 .../metastore/utils/TestMetaStoreServerUtils.java  | 28 +++
 2 files changed, 56 insertions(+), 3 deletions(-)

diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/utils/MetaStoreServerUtils.java
 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/utils/MetaStoreServerUtils.java
index 0d5aaad455..8793c8c7c6 100644
--- 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/utils/MetaStoreServerUtils.java
+++ 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/utils/MetaStoreServerUtils.java
@@ -505,21 +505,46 @@ public class MetaStoreServerUtils {
 params.remove(StatsSetupConst.NUM_ERASURE_CODED_FILES);
   }
 
+  /**
+   * Compare the names, types and comments of two lists of {@link FieldSchema}.
+   * <p>
+   * The name of {@link FieldSchema} is compared in the case-insensitive mode
+   * because all names in Hive are case-insensitive.
+   *
+   * @param oldCols old columns
+   * @param newCols new columns
+   * @return true if the two columns are the same, false otherwise
+   */
   public static boolean areSameColumns(List<FieldSchema> oldCols, List<FieldSchema> newCols) {
-    return ListUtils.isEqualList(oldCols, newCols);
+    if (oldCols == newCols) {
+      return true;
+    }
+    if (oldCols == null || newCols == null || oldCols.size() != newCols.size()) {
+      return false;
+    }
+    // We should ignore the case of field names, because some computing engines are case-sensitive, such as Spark.
+    List<FieldSchema> transformedOldCols = oldCols.stream()
+        .map(col -> new FieldSchema(col.getName().toLowerCase(), col.getType(), col.getComment()))
+        .collect(Collectors.toList());
+    List<FieldSchema> transformedNewCols = newCols.stream()
+        .map(col -> new FieldSchema(col.getName().toLowerCase(), col.getType(), col.getComment()))
+        .collect(Collectors.toList());
+    return ListUtils.isEqualList(transformedOldCols, transformedNewCols);
   }
 
   /**
    * Returns true if p is a prefix of s.
+   * <p>
+   * The compare of {@link FieldSchema} is the same as {@link #areSameColumns(List, List)}.
    */
   public static boolean arePrefixColumns(List<FieldSchema> p, List<FieldSchema> s) {
     if (p == s) {
       return true;
     }
-    if (p.size() > s.size()) {
+    if (p == null || s == null || p.size() > s.size()) {
       return false;
     }
-    return ListUtils.isEqualList(p, s.subList(0, p.size()));
+    return areSameColumns(p, s.subList(0, p.size()));
   }
 
   public static void updateBasicState(EnvironmentContext environmentContext, 
Map
diff --git 
a/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/utils/TestMetaStoreServerUtils.java
 
b/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/utils/TestMetaStoreServerUtils.java
index c6597eb94b..7a90e1571d 100644
--- 
a/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/utils/TestMetaStoreServerUtils.java
+++ 
b/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/utils/TestMetaStoreServerUtils.java
@@ -884,5 +884,33 @@ public class TestMetaStoreServerUtils {
 sd.setLocation("s3a://bucket/other_path");
 Assert.assertTrue(MetaStoreUtils.validateTblStorage(sd));
   }
+
+  @Test
+  public void testSameColumns() {
+    FieldSchema col1 = new FieldSchema("col1", "string", "col1 comment");
+    FieldSchema Col1 = new FieldSchema("Col1", "string", "col1 comment");
+    FieldSchema col2 = new FieldSchema("col2", "string", "col2 comment");
+    Assert.assertTrue(MetaStoreServerUtils.areSameColumns(null, null));
+    Assert.assertFalse(MetaStoreServerUtils.areSameColumns(Arrays.asList(col1), null));
+    Assert.assertFalse(MetaStoreServerUtils.areSameColumns(null, Arrays.asList(col1)));
+    Assert.assertTrue(MetaStoreServerUtils.areSameColumns(Arrays.asList(col1), Arrays.asList(col1)));
+    Assert.assertTrue(MetaStoreServerUtils.areSameColumns(Arrays.asList(col1, col2), Arrays.asList(col1, col2)));
+    Assert.assertTrue(MetaStoreServerUtils.areSameColumns(Arrays.asList(Col1, col2
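The case-insensitive column comparison this commit introduces can be modeled standalone as follows. The `Col` record below stands in for Hive's `FieldSchema` (it is not the real class); as in the commit, only the name is compared case-insensitively, while type and comment remain exact matches:

```java
import java.util.List;
import java.util.Objects;

// Standalone model of HIVE-26355: column names compare case-insensitively
// because Hive names are case-insensitive, while some engines writing the
// schema (e.g. Spark) preserve case. Col is illustrative, not Hive's class.
public class ColumnCompare {
  record Col(String name, String type, String comment) {}

  static boolean areSameColumns(List<Col> oldCols, List<Col> newCols) {
    if (oldCols == newCols) {
      return true;
    }
    if (oldCols == null || newCols == null || oldCols.size() != newCols.size()) {
      return false;
    }
    for (int i = 0; i < oldCols.size(); i++) {
      Col a = oldCols.get(i);
      Col b = newCols.get(i);
      if (!a.name().equalsIgnoreCase(b.name())
          || !Objects.equals(a.type(), b.type())
          || !Objects.equals(a.comment(), b.comment())) {
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    List<Col> lower = List.of(new Col("col1", "string", "c"));
    List<Col> mixed = List.of(new Col("Col1", "string", "c"));
    System.out.println(areSameColumns(lower, mixed)); // true: only the name's case differs
  }
}
```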

[hive] branch master updated: Disable flaky test

2022-06-29 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 56c336268e Disable flaky test
56c336268e is described below

commit 56c336268ea8c281d23c22d89271af37cb7e2572
Author: Peter Vary 
AuthorDate: Wed Jun 29 09:32:45 2022 +0200

Disable flaky test
---
 ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestWorkloadManager.java | 1 +
 1 file changed, 1 insertion(+)

diff --git 
a/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestWorkloadManager.java 
b/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestWorkloadManager.java
index 5d5f68700c..8ce58bb45c 100644
--- a/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestWorkloadManager.java
+++ b/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestWorkloadManager.java
@@ -1260,6 +1260,7 @@ public class TestWorkloadManager {
 assertEquals("B", sessionA4.getPoolName());
   }
 
+  @org.junit.Ignore("HIVE-26364")
   @Test(timeout=1)
   public void testAsyncSessionInitFailures() throws Exception {
 final HiveConf conf = createConf();



[hive] branch master updated: HIVE-26358: Querying metadata tables does not work for Iceberg tables using HADOOP_TABLE (Peter Vary reviewed by Laszlo Pinter) (#3408)

2022-06-29 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new ba87754a94 HIVE-26358: Querying metadata tables does not work for 
Iceberg tables using HADOOP_TABLE (Peter Vary reviewed by Laszlo Pinter) (#3408)
ba87754a94 is described below

commit ba87754a942912928d9d59fb94db307ef85808b2
Author: pvary 
AuthorDate: Wed Jun 29 09:16:52 2022 +0200

HIVE-26358: Querying metadata tables does not work for Iceberg tables using 
HADOOP_TABLE (Peter Vary reviewed by Laszlo Pinter) (#3408)
---
 .../java/org/apache/iceberg/mr/hive/IcebergTableUtil.java  |  4 
 .../org/apache/iceberg/mr/hive/TestHiveIcebergSelects.java | 14 ++
 .../java/org/apache/hadoop/hive/ql/stats/StatsUtils.java   |  3 ++-
 3 files changed, 20 insertions(+), 1 deletion(-)

diff --git 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/IcebergTableUtil.java
 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/IcebergTableUtil.java
index 6e471f7be3..3fe2eee39d 100644
--- 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/IcebergTableUtil.java
+++ 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/IcebergTableUtil.java
@@ -77,8 +77,12 @@ public class IcebergTableUtil {
   static Table getTable(Configuration configuration, Properties properties) {
     String metaTable = properties.getProperty("metaTable");
     String tableName = properties.getProperty(Catalogs.NAME);
+    String location = properties.getProperty(Catalogs.LOCATION);
     if (metaTable != null) {
+      // HiveCatalog, HadoopCatalog uses NAME to identify the metadata table
       properties.setProperty(Catalogs.NAME, tableName + "." + metaTable);
+      // HadoopTable uses LOCATION to identify the metadata table
+      properties.setProperty(Catalogs.LOCATION, location + "#" + metaTable);
     }
 
     String tableIdentifier = properties.getProperty(Catalogs.NAME);
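The fix above boils down to two addressing conventions for Iceberg metadata tables: catalog-managed tables (HiveCatalog, HadoopCatalog) append `.<metaTable>` to the table name, while HadoopTables append `#<metaTable>` to the table location. A tiny standalone sketch of that convention; the helper names are hypothetical, not Hive's `IcebergTableUtil` methods:

```java
// Illustrative model of metadata-table addressing after HIVE-26358.
// catalogIdentifier/hadoopTableLocation are hypothetical helper names.
public class MetaTableAddress {
  /** Identifier form used by catalog-based lookups (NAME property). */
  static String catalogIdentifier(String tableName, String metaTable) {
    return metaTable == null ? tableName : tableName + "." + metaTable;
  }

  /** Location form used by HadoopTables lookups (LOCATION property). */
  static String hadoopTableLocation(String location, String metaTable) {
    return metaTable == null ? location : location + "#" + metaTable;
  }

  public static void main(String[] args) {
    // e.g. SELECT snapshot_id FROM default.source.history
    System.out.println(catalogIdentifier("default.source", "history"));
    System.out.println(hadoopTableLocation("hdfs:///warehouse/source", "history"));
  }
}
```

Before this commit only the NAME-based form was rewritten, which is why metadata-table queries failed for HADOOP_TABLE-backed Iceberg tables.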
diff --git 
a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergSelects.java
 
b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergSelects.java
index a9c692d12e..6ab6e3e4ff 100644
--- 
a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergSelects.java
+++ 
b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergSelects.java
@@ -24,6 +24,7 @@ import java.util.List;
 import java.util.stream.Collectors;
 import org.apache.iceberg.FileFormat;
 import org.apache.iceberg.Schema;
+import org.apache.iceberg.Table;
 import org.apache.iceberg.catalog.TableIdentifier;
 import org.apache.iceberg.data.Record;
 import org.apache.iceberg.mr.InputFormatConfig;
@@ -263,4 +264,17 @@ public class TestHiveIcebergSelects extends 
HiveIcebergStorageHandlerWithEngineB
 Assert.assertEquals(2, result.size());
 
   }
+
+  @Test
+  public void testHistory() throws IOException, InterruptedException {
+    TableIdentifier identifier = TableIdentifier.of("default", "source");
+    Table table = testTables.createTableWithVersions(shell, identifier.name(),
+        HiveIcebergStorageHandlerTestUtils.CUSTOMER_SCHEMA, fileFormat,
+        HiveIcebergStorageHandlerTestUtils.CUSTOMER_RECORDS, 1);
+    List<Object[]> history = shell.executeStatement("SELECT snapshot_id FROM default.source.history");
+    Assert.assertEquals(table.history().size(), history.size());
+    for (int i = 0; i < table.history().size(); ++i) {
+      Assert.assertEquals(table.history().get(i).snapshotId(), history.get(i)[0]);
+    }
+  }
 }
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java 
b/ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java
index 56b3843c00..f493bfebc6 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java
@@ -262,6 +262,7 @@ public class StatsUtils {
     boolean fetchColStats =
         HiveConf.getBoolVar(conf, HiveConf.ConfVars.HIVE_STATS_FETCH_COLUMN_STATS);
     boolean estimateStats = HiveConf.getBoolVar(conf, ConfVars.HIVE_STATS_ESTIMATE_STATS);
+    boolean metaTable = table.getMetaTable() != null;
 
 if (!table.isPartitioned()) {
 
@@ -285,7 +286,7 @@ public class StatsUtils {
 
   long numErasureCodedFiles = getErasureCodedFiles(table);
 
-      if (needColStats) {
+      if (needColStats && !metaTable) {
         colStats = getTableColumnStats(table, schema, neededColumns, colStatsCache, fetchColStats);
         if (estimateStats) {
           estimateStatsForMissingCols(neededColumns, colStats, table, conf, nr, schema);



[hive] branch master updated: HIVE-26354: Support expiring snapshots on iceberg table (Peter Vary reviewed by Laszlo Pinter) (#3401)

2022-06-28 Thread pvary

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new e0f1baaa59 HIVE-26354: Support expiring snapshots on iceberg table 
(Peter Vary reviewed by Laszlo Pinter) (#3401)
e0f1baaa59 is described below

commit e0f1baaa59153cc260f0a50f04218a450cbdc1b8
Author: pvary 
AuthorDate: Tue Jun 28 11:55:06 2022 +0200

HIVE-26354: Support expiring snapshots on iceberg table (Peter Vary 
reviewed by Laszlo Pinter) (#3401)
---
 .../iceberg/mr/hive/HiveIcebergStorageHandler.java | 11 +-
 .../mr/hive/TestHiveIcebergExpireSnapshots.java| 45 ++
 .../hadoop/hive/ql/parse/AlterClauseParser.g   |  2 +
 .../apache/hadoop/hive/ql/parse/HiveLexerParent.g  |  1 +
 .../hadoop/hive/ql/parse/IdentifiersParser.g   |  1 +
 .../table/execute/AlterTableExecuteAnalyzer.java   | 17 +++-
 .../hive/ql/parse/AlterTableExecuteSpec.java   | 29 +-
 7 files changed, 100 insertions(+), 6 deletions(-)

diff --git 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergStorageHandler.java
 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergStorageHandler.java
index 74d75f5741..dcec3468ad 100644
--- 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergStorageHandler.java
+++ 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergStorageHandler.java
@@ -454,10 +454,10 @@ public class HiveIcebergStorageHandler implements HiveStoragePredicateHandler, H

   @Override
   public void executeOperation(org.apache.hadoop.hive.ql.metadata.Table hmsTable, AlterTableExecuteSpec executeSpec) {
+    TableDesc tableDesc = Utilities.getTableDesc(hmsTable);
+    Table icebergTable = IcebergTableUtil.getTable(conf, tableDesc.getProperties());
     switch (executeSpec.getOperationType()) {
       case ROLLBACK:
-        TableDesc tableDesc = Utilities.getTableDesc(hmsTable);
-        Table icebergTable = IcebergTableUtil.getTable(conf, tableDesc.getProperties());
         LOG.info("Executing rollback operation on iceberg table. If you would like to revert rollback you could " +
             "try altering the metadata location to the current metadata location by executing the following query:" +
             "ALTER TABLE {}.{} SET TBLPROPERTIES('metadata_location'='{}'). This operation is supported for Hive " +
@@ -467,6 +467,13 @@ public class HiveIcebergStorageHandler implements HiveStoragePredicateHandler, H
             (AlterTableExecuteSpec.RollbackSpec) executeSpec.getOperationParams();
         IcebergTableUtil.rollback(icebergTable, rollbackSpec.getRollbackType(), rollbackSpec.getParam());
         break;
+      case EXPIRE_SNAPSHOT:
+        LOG.info("Executing expire snapshots operation on iceberg table {}.{}", hmsTable.getDbName(),
+            hmsTable.getTableName());
+        AlterTableExecuteSpec.ExpireSnapshotsSpec expireSnapshotsSpec =
+            (AlterTableExecuteSpec.ExpireSnapshotsSpec) executeSpec.getOperationParams();
+        icebergTable.expireSnapshots().expireOlderThan(expireSnapshotsSpec.getTimestampMillis()).commit();
+        break;
       default:
         throw new UnsupportedOperationException(
             String.format("Operation type %s is not supported", executeSpec.getOperationType().name()));
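The diffstat above shows new entries in AlterClauseParser.g and IdentifiersParser.g, which wire the EXPIRE_SNAPSHOT operation into the ALTER TABLE ... EXECUTE statement. A hedged usage sketch follows (the table name and timestamp are illustrative, not from this commit; the storage handler maps the statement to the Iceberg expireSnapshots API shown in the hunk above):

```
-- Illustrative only: expire snapshots of an Iceberg table that are older
-- than the given timestamp (syntax introduced by HIVE-26354).
ALTER TABLE default.customers EXECUTE EXPIRE_SNAPSHOTS('2022-06-01 00:00:00');
```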
diff --git 
a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergExpireSnapshots.java
 
b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergExpireSnapshots.java
new file mode 100644
index 00..3b17e2402b
--- /dev/null
+++ 
b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergExpireSnapshots.java
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iceberg.mr.hive;
+
+import java.io.IOException;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.catalog.TableIdentifier;
+import org.junit.Assert;
+import org.j

[hive] branch master updated: HIVE-26265: Option to filter out Txn events during replication. (Francis Pang reviewed by Peter Vary) (#3365)

2022-06-28 Thread pvary

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 381d8bfd6a HIVE-26265: Option to filter out Txn events during 
replication. (Francis Pang reviewed by Peter Vary) (#3365)
381d8bfd6a is described below

commit 381d8bfd6a2a4f3715eca1cce1b689d3237c4142
Author: cmunkey <49877369+cmun...@users.noreply.github.com>
AuthorDate: Tue Jun 28 00:48:22 2022 -0700

HIVE-26265: Option to filter out Txn events during replication. (Francis 
Pang reviewed by Peter Vary) (#3365)

Purpose: Currently, all Txn events (OpenTxn, CommitTxn, and RollbackTxn) are included in the REPL DUMP, even when the transaction does not involve the database being dumped (replicated). These events are unnecessary and result in excessive space required for the dump, as well as extra work when these events are replayed during REPL LOAD.

Solution proposed: To reduce this unnecessary space and work, added the hive.repl.filter.transactions configuration property. When set to "true", extra Txn events are filtered out as follows: CommitTxn and RollbackTxn are included in the REPL DUMP only if the referenced transaction had a corresponding ALLOC_WRITE_ID event that was dumped. OpenTxn is never dumped; the transaction is implicitly opened when REPL LOAD processes the ALLOC_WRITE_ID event, since ALLOC_WRITE_ID contains the open transaction ids. The default setting is "false".
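The filtering rule described above can be sketched as a small decision helper. This is an illustrative model only, not the actual Hive implementation; the class and method names here are invented:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of the HIVE-26265 filtering rule: commit/abort txn
// events are dumped only when the dump already emitted an ALLOC_WRITE_ID
// event for the same transaction; open-txn events are never dumped (they
// are re-opened implicitly during REPL LOAD from the ALLOC_WRITE_ID event).
class TxnEventFilterSketch {
  private final Set<Long> txnsWithWriteIdDumped = new HashSet<>();

  // Record that an ALLOC_WRITE_ID event for this txn went into the dump.
  void onAllocWriteIdDumped(long txnId) {
    txnsWithWriteIdDumped.add(txnId);
  }

  // Decide whether a given event should be included in the REPL DUMP.
  boolean shouldDump(String eventType, long txnId) {
    switch (eventType) {
      case "OPEN_TXN":
        return false; // never dumped when filtering is enabled
      case "COMMIT_TXN":
      case "ABORT_TXN":
        return txnsWithWriteIdDumped.contains(txnId); // only if a write id was dumped
      default:
        return true;  // non-transactional events are unaffected
    }
  }

  public static void main(String[] args) {
    TxnEventFilterSketch f = new TxnEventFilterSketch();
    f.onAllocWriteIdDumped(42L);
    System.out.println(f.shouldDump("COMMIT_TXN", 42L)); // true
    System.out.println(f.shouldDump("COMMIT_TXN", 7L));  // false
  }
}
```

With the feature enabled (per the HiveConf hunk below, via hive.repl.filter.transactions=true), only transactions that touched the replicated database carry their commit/abort events into the dump.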

Co-authored-by: Francis Pang 
---
 .../java/org/apache/hadoop/hive/conf/HiveConf.java |   4 +
 .../hcatalog/listener/DbNotificationListener.java  |   2 +-
 .../hadoop/hive/ql/parse/ReplicationTestUtils.java |  50 +-
 .../parse/TestReplicationFilterTransactions.java   | 514 +
 .../apache/hadoop/hive/ql/exec/ReplTxnTask.java|  15 +
 .../hadoop/hive/ql/exec/repl/util/ReplUtils.java   |   5 +
 .../ql/parse/repl/dump/events/AbortTxnHandler.java |  18 +
 .../parse/repl/dump/events/CommitTxnHandler.java   |   7 +
 .../ql/parse/repl/dump/events/OpenTxnHandler.java  |   5 +
 .../apache/hadoop/hive/metastore/HMSHandler.java   |   4 +
 .../hive/metastore/events/AbortTxnEvent.java   |  23 +-
 .../hive/metastore/messaging/AbortTxnMessage.java  |   3 +
 .../hive/metastore/messaging/MessageBuilder.java   |   4 +-
 .../messaging/json/JSONAbortTxnMessage.java|  13 +-
 .../hadoop/hive/metastore/txn/TxnHandler.java  |  40 +-
 .../events/TestAbortTxnEventDbsUpdated.java|  68 +++
 16 files changed, 764 insertions(+), 11 deletions(-)

diff --git a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
index 2a68674e59..67cfef75a3 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
@@ -3652,6 +3652,10 @@ public class HiveConf extends Configuration {
         "org.apache.hive.hcatalog.api.repl.exim.EximReplicationTaskFactory",
         "Parameter that can be used to override which ReplicationTaskFactory will be\n" +
         "used to instantiate ReplicationTask events. Override for third party repl plugins"),
+    REPL_FILTER_TRANSACTIONS("hive.repl.filter.transactions", false,
+        "Enable transaction event filtering to save dump space.\n" +
+        "When true, transactions are implicitly opened during REPL DUMP.\n" +
+        "The default setting is false"),
     HIVE_MAPPER_CANNOT_SPAN_MULTIPLE_PARTITIONS("hive.mapper.cannot.span.multiple.partitions",
         false, ""),
     HIVE_REWORK_MAPREDWORK("hive.rework.mapredwork", false,
         "should rework the mapred work or not.\n" +
diff --git 
a/hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
 
b/hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
index 85a94a7fc4..d66add1591 100644
--- 
a/hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
+++ 
b/hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
@@ -666,7 +666,7 @@ public class DbNotificationListener extends TransactionalMetaStoreEventListener
       return;
     }
     AbortTxnMessage msg =
-        MessageBuilder.getInstance().buildAbortTxnMessage(abortTxnEvent.getTxnId());
+        MessageBuilder.getInstance().buildAbortTxnMessage(abortTxnEvent.getTxnId(), abortTxnEvent.getDbsUpdated());
     NotificationEvent event =
         new NotificationEvent(0, now(), EventType.ABORT_TXN.toString(),
             msgEncoder.getSerializer().serialize(msg

[hive] branch master updated: HIVE-25980: Reduce fs calls in HiveMetaStoreChecker.checkTable (Chiran Ravani reviewed by Syed Shameerur Rahman and Peter Vary)(#3053)

2022-06-25 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new ae55a3049b4 HIVE-25980: Reduce fs calls in 
HiveMetaStoreChecker.checkTable (Chiran Ravani reviewed by Syed Shameerur 
Rahman and Peter Vary)(#3053)
ae55a3049b4 is described below

commit ae55a3049b4100cea92ec4ac6374a8bc0f16e4a6
Author: Chiran Ravani 
AuthorDate: Sat Jun 25 05:34:52 2022 -0400

HIVE-25980: Reduce fs calls in HiveMetaStoreChecker.checkTable (Chiran 
Ravani reviewed by Syed Shameerur Rahman and Peter Vary)(#3053)
---
 .../clientpositive/msck_repair_hive_25980.q|  33 +++
 .../llap/msck_repair_hive_25980.q.out  |  86 +
 .../hive/metastore/HiveMetaStoreChecker.java   | 105 ++---
 3 files changed, 169 insertions(+), 55 deletions(-)

diff --git a/ql/src/test/queries/clientpositive/msck_repair_hive_25980.q 
b/ql/src/test/queries/clientpositive/msck_repair_hive_25980.q
new file mode 100644
index 000..a769ad7dafb
--- /dev/null
+++ b/ql/src/test/queries/clientpositive/msck_repair_hive_25980.q
@@ -0,0 +1,33 @@
+DROP TABLE IF EXISTS repairtable_hive_25980;
+
+CREATE TABLE repairtable_hive_25980(id int, name string) partitioned by(year 
int,month int);
+
+MSCK REPAIR TABLE repairtable_hive_25980;
+
+SHOW PARTITIONS repairtable_hive_25980;
+
+dfs ${system:test.dfs.mkdir} 
${system:test.local.warehouse.dir}/repairtable_hive_25980/year=2022/month=01;
+dfs ${system:test.dfs.mkdir} 
${system:test.local.warehouse.dir}/repairtable_hive_25980/year=2022/month=03;
+dfs ${system:test.dfs.mkdir} 
${system:test.local.warehouse.dir}/repairtable_hive_25980/year=2022/month=04;
+dfs ${system:test.dfs.mkdir} 
${system:test.local.warehouse.dir}/repairtable_hive_25980/year=2021/month=02;
+dfs ${system:test.dfs.mkdir} 
${system:test.local.warehouse.dir}/repairtable_hive_25980/year=2021/month=01;
+dfs ${system:test.dfs.mkdir} 
${system:test.local.warehouse.dir}/repairtable_hive_25980/year=2021/month=03;
+
+MSCK REPAIR TABLE repairtable_hive_25980;
+
+SHOW PARTITIONS repairtable_hive_25980;
+
+dfs -rmdir 
${system:test.local.warehouse.dir}/repairtable_hive_25980/year=2021/month=02;
+dfs -rmdir 
${system:test.local.warehouse.dir}/repairtable_hive_25980/year=2021/month=01;
+dfs -rmdir 
${system:test.local.warehouse.dir}/repairtable_hive_25980/year=2021/month=03;
+dfs -rmdir ${system:test.local.warehouse.dir}/repairtable_hive_25980/year=2021;
+dfs ${system:test.dfs.mkdir} 
file:///tmp/repairtable_hive_25980_external_dir/year=2022/month=02;
+dfs ${system:test.dfs.mkdir} 
file:///tmp/repairtable_hive_25980_external_dir/year=2021/month=04;
+dfs ${system:test.dfs.mkdir} 
${system:test.local.warehouse.dir}/repairtable_hive_25980/year=2022/month=12;
+
+alter table repairtable_hive_25980 add partition(year=2022,month=02) location 
'file:///tmp/repairtable_hive_25980_external_dir/year=2022/month=02';
+alter table repairtable_hive_25980 add partition(year=2021,month=04) location 
'file:///tmp/repairtable_hive_25980_external_dir/year=2021/month=04';
+
+MSCK REPAIR TABLE repairtable_hive_25980 SYNC PARTITIONS;
+
+SHOW PARTITIONS repairtable_hive_25980;
diff --git 
a/ql/src/test/results/clientpositive/llap/msck_repair_hive_25980.q.out 
b/ql/src/test/results/clientpositive/llap/msck_repair_hive_25980.q.out
new file mode 100644
index 000..cc3b799e5d4
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/msck_repair_hive_25980.q.out
@@ -0,0 +1,86 @@
+PREHOOK: query: DROP TABLE IF EXISTS repairtable_hive_25980
+PREHOOK: type: DROPTABLE
+POSTHOOK: query: DROP TABLE IF EXISTS repairtable_hive_25980
+POSTHOOK: type: DROPTABLE
+PREHOOK: query: CREATE TABLE repairtable_hive_25980(id int, name string) 
partitioned by(year int,month int)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@repairtable_hive_25980
+POSTHOOK: query: CREATE TABLE repairtable_hive_25980(id int, name string) 
partitioned by(year int,month int)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@repairtable_hive_25980
+PREHOOK: query: MSCK REPAIR TABLE repairtable_hive_25980
+PREHOOK: type: MSCK
+PREHOOK: Output: default@repairtable_hive_25980
+POSTHOOK: query: MSCK REPAIR TABLE repairtable_hive_25980
+POSTHOOK: type: MSCK
+POSTHOOK: Output: default@repairtable_hive_25980
+PREHOOK: query: SHOW PARTITIONS repairtable_hive_25980
+PREHOOK: type: SHOWPARTITIONS
+PREHOOK: Input: default@repairtable_hive_25980
+POSTHOOK: query: SHOW PARTITIONS repairtable_hive_25980
+POSTHOOK: type: SHOWPARTITIONS
+POSTHOOK: Input: default@repairtable_hive_25980
+PREHOOK: query: MSCK REPAIR TABLE repairtable_hive_25980
+PREHOOK: type: MSCK
+PREHOOK: Output: default@repairtable_hive_25980
+POSTHOOK: query: MSCK REPAIR TABLE repairtable_hive_25980
+POSTHOOK: type: MSCK
+POSTHOOK: Output: default

[hive] branch master updated: HIVE-26334: Remove misleading bucketing info from DESCRIBE FORMATTED output for Iceberg tables (Peter Vary reviewed by Laszlo Pinter) (#3378)

2022-06-22 Thread pvary

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 8f1a5b6854d HIVE-26334: Remove misleading bucketing info from DESCRIBE 
FORMATTED output for Iceberg tables (Peter Vary reviewed by Laszlo Pinter) 
(#3378)
8f1a5b6854d is described below

commit 8f1a5b6854daf5fb4814632f9356ef5fe7bb6964
Author: pvary 
AuthorDate: Wed Jun 22 15:24:05 2022 +0200

HIVE-26334: Remove misleading bucketing info from DESCRIBE FORMATTED output 
for Iceberg tables (Peter Vary reviewed by Laszlo Pinter) (#3378)
---
 .../positive/alter_multi_part_table_to_iceberg.q.out   |  6 --
 .../results/positive/alter_part_table_to_iceberg.q.out |  6 --
 .../test/results/positive/alter_table_to_iceberg.q.out |  6 --
 .../test/results/positive/create_iceberg_table.q.out   |  2 --
 .../create_iceberg_table_stored_as_fileformat.q.out| 10 --
 .../create_iceberg_table_stored_by_iceberg.q.out   |  2 --
 ..._table_stored_by_iceberg_with_serdeproperties.q.out |  2 --
 .../test/results/positive/describe_iceberg_table.q.out |  8 
 .../positive/truncate_force_iceberg_table.q.out|  4 
 .../test/results/positive/truncate_iceberg_table.q.out | 10 --
 .../positive/truncate_partitioned_iceberg_table.q.out  |  4 
 .../info/desc/formatter/TextDescTableFormatter.java| 18 +++---
 12 files changed, 11 insertions(+), 67 deletions(-)

diff --git 
a/iceberg/iceberg-handler/src/test/results/positive/alter_multi_part_table_to_iceberg.q.out
 
b/iceberg/iceberg-handler/src/test/results/positive/alter_multi_part_table_to_iceberg.q.out
index fa26ad2a650..5f7af3f68cf 100644
--- 
a/iceberg/iceberg-handler/src/test/results/positive/alter_multi_part_table_to_iceberg.q.out
+++ 
b/iceberg/iceberg-handler/src/test/results/positive/alter_multi_part_table_to_iceberg.q.out
@@ -213,8 +213,6 @@ SerDe Library:  
org.apache.iceberg.mr.hive.HiveIcebergSerDe
 InputFormat:   org.apache.iceberg.mr.hive.HiveIcebergInputFormat   
 
 OutputFormat:  org.apache.iceberg.mr.hive.HiveIcebergOutputFormat  
 
 Compressed:No   
-Num Buckets:   0
-Bucket Columns:[]   
 Sort Columns:  []   
 PREHOOK: query: select * from tbl_orc order by a
 PREHOOK: type: QUERY
@@ -462,8 +460,6 @@ SerDe Library:  
org.apache.iceberg.mr.hive.HiveIcebergSerDe
 InputFormat:   org.apache.iceberg.mr.hive.HiveIcebergInputFormat   
 
 OutputFormat:  org.apache.iceberg.mr.hive.HiveIcebergOutputFormat  
 
 Compressed:No   
-Num Buckets:   0
-Bucket Columns:[]   
 Sort Columns:  []   
 PREHOOK: query: select * from tbl_parquet order by a
 PREHOOK: type: QUERY
@@ -711,8 +707,6 @@ SerDe Library:  
org.apache.iceberg.mr.hive.HiveIcebergSerDe
 InputFormat:   org.apache.iceberg.mr.hive.HiveIcebergInputFormat   
 
 OutputFormat:  org.apache.iceberg.mr.hive.HiveIcebergOutputFormat  
 
 Compressed:No   
-Num Buckets:   0
-Bucket Columns:[]   
 Sort Columns:  []   
 PREHOOK: query: select * from tbl_avro order by a
 PREHOOK: type: QUERY
diff --git 
a/iceberg/iceberg-handler/src/test/results/positive/alter_part_table_to_iceberg.q.out
 
b/iceberg/iceberg-handler/src/test/results/positive/alter_part_table_to_iceberg.q.out
index caab6dfa829..ca580ca464b 100644
--- 
a/iceberg/iceberg-handler/src/test/results/positive/alter_part_table_to_iceberg.q.out
+++ 
b/iceberg/iceberg-handler/src/test/results/positive/alter_part_table_to_iceberg.q.out
@@ -168,8 +168,6 @@ SerDe Library:  
org.apache.iceberg.mr.hive.HiveIcebergSerDe
 InputFormat:   org.apache.iceberg.mr.hive.HiveIcebergInputFormat   
 
 OutputFormat:  org.apache.iceberg.mr.hive.HiveIcebergOutputFormat  
 
 Compressed:No   
-Num Buckets:   0
-Bucket Columns:[]   
 Sort Columns:  []   
 PREHOOK: query: select * from tbl_orc order by a
 PREHOOK: type: QUERY
@@ -366,8 +364,6 @@ SerDe Library:  
org.apache.iceberg.mr.hive.HiveIcebergSerDe
 InputFormat:   org.apache.iceberg.mr.hive.HiveIcebergInputFormat   
 
 OutputFormat:  org.apache.iceberg.mr.hive.HiveIcebergOutputFormat  
 
 Compressed:No   
-Num Buckets:   0
-Bucket Columns:[]   
 Sort Columns

[hive] branch master updated: HIVE-26316: Handle dangling open txns on both src & tgt in unplanned failover. (Haymant Mangla reviewed by Peter Vary) (#3367)

2022-06-16 Thread pvary

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 67c2d4910ff HIVE-26316: Handle dangling open txns on both src & tgt in 
unplanned failover. (Haymant Mangla reviewed by Peter Vary) (#3367)
67c2d4910ff is described below

commit 67c2d4910ff17c694653eb8bd9c9ed2405cec38b
Author: Haymant Mangla <79496857+hmangl...@users.noreply.github.com>
AuthorDate: Thu Jun 16 15:11:22 2022 +0530

HIVE-26316: Handle dangling open txns on both src & tgt in unplanned 
failover. (Haymant Mangla reviewed by Peter Vary) (#3367)
---
 .../parse/TestReplicationOptimisedBootstrap.java   | 141 -
 .../hive/ql/exec/repl/OptimisedBootstrapUtils.java |  92 --
 .../hadoop/hive/ql/exec/repl/ReplDumpTask.java |  77 +--
 .../hadoop/hive/ql/exec/repl/ReplLoadTask.java |  41 +-
 .../hadoop/hive/ql/exec/repl/util/ReplUtils.java   |  61 +
 .../repl/dump/events/AbstractEventHandler.java |  11 +-
 6 files changed, 349 insertions(+), 74 deletions(-)

diff --git 
a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
 
b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
index 673e41b3065..dd6821dc578 100644
--- 
a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
+++ 
b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
@@ -24,12 +24,16 @@ import org.apache.hadoop.fs.QuotaUsage;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hive.conf.HiveConf;
 import org.apache.hadoop.hive.metastore.api.AbortTxnsRequest;
+import org.apache.hadoop.hive.metastore.api.MetaException;
+import org.apache.hadoop.hive.metastore.api.TxnType;
 import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
 import 
org.apache.hadoop.hive.metastore.messaging.json.gzip.GzipJSONMessageEncoder;
 import org.apache.hadoop.hive.metastore.txn.TxnStore;
 import org.apache.hadoop.hive.metastore.txn.TxnUtils;
+import org.apache.hadoop.hive.ql.exec.repl.OptimisedBootstrapUtils;
 import org.apache.hadoop.hive.ql.exec.repl.util.ReplUtils;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.HiveUtils;
 import org.apache.hadoop.security.UserGroupInformation;
 
 import org.jetbrains.annotations.NotNull;
@@ -55,7 +59,6 @@ import static 
org.apache.hadoop.hive.metastore.ReplChangeManager.SOURCE_OF_REPLI
 import static 
org.apache.hadoop.hive.ql.exec.repl.OptimisedBootstrapUtils.EVENT_ACK_FILE;
 import static 
org.apache.hadoop.hive.ql.exec.repl.OptimisedBootstrapUtils.TABLE_DIFF_COMPLETE_DIRECTORY;
 import static 
org.apache.hadoop.hive.ql.exec.repl.OptimisedBootstrapUtils.TABLE_DIFF_INPROGRESS_DIRECTORY;
-import static 
org.apache.hadoop.hive.ql.exec.repl.OptimisedBootstrapUtils.getEventIdFromFile;
 import static 
org.apache.hadoop.hive.ql.exec.repl.OptimisedBootstrapUtils.getPathsFromTableFile;
 import static 
org.apache.hadoop.hive.ql.exec.repl.OptimisedBootstrapUtils.getTablesFromTableDiffFile;
 
@@ -71,6 +74,10 @@ import static org.junit.Assert.fail;
 public class TestReplicationOptimisedBootstrap extends BaseReplicationScenariosAcidTables {

   String extraPrimaryDb;
+  HiveConf primaryConf;
+  TxnStore txnHandler;
+  List<Long> tearDownTxns = new ArrayList<>();
+  List<Long> tearDownLockIds = new ArrayList<>();
 
   @BeforeClass
   public static void classLevelSetup() throws Exception {
@@ -90,10 +97,19 @@ public class TestReplicationOptimisedBootstrap extends 
BaseReplicationScenariosA
   public void setup() throws Throwable {
 super.setup();
 extraPrimaryDb = "extra_" + primaryDbName;
+primaryConf = primary.getConf();
+txnHandler = TxnUtils.getTxnStore(primary.getConf());
   }
 
   @After
   public void tearDown() throws Throwable {
+    if (!tearDownTxns.isEmpty()) {
+      // Abort leftover transactions which might not have completed due to test failures.
+      txnHandler.abortTxns(new AbortTxnsRequest(tearDownTxns));
+    }
+    // Release any unreleased locks acquired during tests. Although the tests release locks when no
+    // longer required, a failed test may leave locks in a dangling state.
+    releaseLocks(txnHandler, tearDownLockIds);
 primary.run("drop database if exists " + extraPrimaryDb + " cascade");
 super.tearDown();
   }
@@ -468,47 +484,56 @@ public class TestReplicationOptimisedBootstrap extends 
BaseReplicationScenariosA
 
   @Test
   public void testReverseBootstrap() throws Throwable {
-HiveConf primaryConf = primary.getConf();
-TxnStore txnHandler = TxnUtils.getTxnStore(primary.getConf());
  

[hive] branch master updated: HIVE-25733: Add check-spelling/check-spelling (#2809) (Josh Soref reviewed by Zoltan Haindrich) (Addendum)

2022-06-15 Thread pvary

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 0b4e466866f HIVE-25733: Add check-spelling/check-spelling (#2809) 
(Josh Soref reviewed by Zoltan Haindrich) (Addendum)
0b4e466866f is described below

commit 0b4e466866fe07a160b0e4b0c27d2b3fb7613c45
Author: Peter Vary 
AuthorDate: Wed Jun 15 10:48:01 2022 +0200

HIVE-25733: Add check-spelling/check-spelling (#2809) (Josh Soref reviewed 
by Zoltan Haindrich) (Addendum)
---
 .github/actions/spelling/expect.txt | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/.github/actions/spelling/expect.txt 
b/.github/actions/spelling/expect.txt
index 39b9d7fc583..98ab7679399 100644
--- a/.github/actions/spelling/expect.txt
+++ b/.github/actions/spelling/expect.txt
@@ -21,6 +21,7 @@ ANull
 anullint
 anullstring
 aoig
+api
 arecord
 args
 arraycopy
@@ -124,6 +125,7 @@ eoi
 EQVR
 Escapables
 ESCAPECHAR
+esri
 ETX
 etype
 facebook
@@ -434,6 +436,7 @@ vlong
 voi
 vtype
 wiki
+wkid
 workaround
 writables
 www



[hive] branch master updated: HIVE-26307: Avoid FS init in FileIO::newInputFile in vectorized Iceberg reads (Peter Vary reviewed by Adam Szita) (#3354)

2022-06-13 Thread pvary

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 76d4abe402a HIVE-26307: Avoid FS init in FileIO::newInputFile in 
vectorized Iceberg reads (Peter Vary reviewed by Adam Szita) (#3354)
76d4abe402a is described below

commit 76d4abe402abdebb6534e9db3f4209cce8b0d4e6
Author: pvary 
AuthorDate: Mon Jun 13 17:20:53 2022 +0200

HIVE-26307: Avoid FS init in FileIO::newInputFile in vectorized Iceberg 
reads (Peter Vary reviewed by Adam Szita) (#3354)
---
 .../mr/hive/vector/HiveVectorizedReader.java   |  14 +-
 .../iceberg/mr/mapreduce/IcebergInputFormat.java   | 148 ++---
 .../apache/iceberg/orc/VectorizedReadUtils.java|  14 +-
 3 files changed, 77 insertions(+), 99 deletions(-)

diff --git 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/vector/HiveVectorizedReader.java
 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/vector/HiveVectorizedReader.java
index 19fa0f06506..00b9b3c73f0 100644
--- 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/vector/HiveVectorizedReader.java
+++ 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/vector/HiveVectorizedReader.java
@@ -51,7 +51,6 @@ import org.apache.iceberg.PartitionSpec;
 import org.apache.iceberg.Schema;
 import org.apache.iceberg.io.CloseableIterable;
 import org.apache.iceberg.io.CloseableIterator;
-import org.apache.iceberg.io.InputFile;
 import org.apache.iceberg.mr.mapred.MapredIcebergInputFormat;
 import org.apache.iceberg.orc.VectorizedReadUtils;
 import org.apache.iceberg.parquet.ParquetSchemaUtil;
@@ -72,11 +71,10 @@ public class HiveVectorizedReader {

   }

-  public static <D> CloseableIterable<D> reader(InputFile inputFile, FileScanTask task, Map<Integer, ?> idToConstant,
+  public static <D> CloseableIterable<D> reader(Path path, FileScanTask task, Map<Integer, ?> idToConstant,
       TaskAttemptContext context) {
     // Tweaks on jobConf here are relevant for this task only, so we need to copy it first as context's conf is reused..
-    JobConf job = new JobConf((JobConf) context.getConfiguration());
-    Path path = new Path(inputFile.location());
+    JobConf job = new JobConf(context.getConfiguration());
     FileFormat format = task.file().format();
     Reporter reporter = ((MapredIcebergInputFormat.CompatibilityTaskAttemptContextImpl) context).getLegacyReporter();

@@ -131,7 +129,7 @@ public class HiveVectorizedReader {

       switch (format) {
         case ORC:
-          recordReader = orcRecordReader(job, reporter, task, inputFile, path, start, length, readColumnIds, fileId);
+          recordReader = orcRecordReader(job, reporter, task, path, start, length, readColumnIds, fileId);
           break;

         case PARQUET:
@@ -144,12 +142,12 @@ public class HiveVectorizedReader {
       return createVectorizedRowBatchIterable(recordReader, job, partitionColIndices, partitionValues);

     } catch (IOException ioe) {
-      throw new RuntimeException("Error creating vectorized record reader for " + inputFile, ioe);
+      throw new RuntimeException("Error creating vectorized record reader for " + path, ioe);
     }
   }

   private static RecordReader<NullWritable, VectorizedRowBatch> orcRecordReader(JobConf job, Reporter reporter,
-      FileScanTask task, InputFile inputFile, Path path, long start, long length, List<Integer> readColumnIds,
+      FileScanTask task, Path path, long start, long length, List<Integer> readColumnIds,
       SyntheticFileId fileId) throws IOException {
     RecordReader<NullWritable, VectorizedRowBatch> recordReader = null;

@@ -159,7 +157,7 @@ public class HiveVectorizedReader {

     // Metadata information has to be passed along in the OrcSplit. Without specifying this, the vectorized
     // reader will assume that the ORC file ends at the task's start + length, and might fail reading the tail..
-    ByteBuffer serializedOrcTail = VectorizedReadUtils.getSerializedOrcTail(inputFile, fileId, job);
+    ByteBuffer serializedOrcTail = VectorizedReadUtils.getSerializedOrcTail(path, fileId, job);
     OrcTail orcTail = VectorizedReadUtils.deserializeToOrcTail(serializedOrcTail);

     VectorizedReadUtils.handleIcebergProjection(task, job,
diff --git 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/mapreduce/IcebergInputFormat.java
 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/mapreduce/IcebergInputFormat.java
index 7bbd03a09ed..7617c6b17e9 100644
--- 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/mapreduce/IcebergInputFormat.java
+++ 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/mapreduce/IcebergInputFormat.java
@@ -31,6 +31,7 @@ import java.util.function.BiFunction;
 import java.util.stream.Collectors;
 import java.util.stream.Stream;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.had

[hive] branch master updated: HIVE-26301: Fix ACID tables bootstrap during reverse replication in unplanned failover (Haymant Mangla reviewed by Peter Vary) (#3352)

2022-06-10 Thread pvary

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new fe0f1a648b1 HIVE-26301: Fix ACID tables bootstrap during reverse 
replication in unplanned failover (Haymant Mangla reviewed by Peter Vary) 
(#3352)
fe0f1a648b1 is described below

commit fe0f1a648b14cdf27edcf7a5d323cbd060104ebf
Author: Haymant Mangla <79496857+hmangl...@users.noreply.github.com>
AuthorDate: Fri Jun 10 16:06:58 2022 +0530

HIVE-26301: Fix ACID tables bootstrap during reverse replication in 
unplanned failover (Haymant Mangla reviewed by Peter Vary) (#3352)
---
 .../parse/TestReplicationOptimisedBootstrap.java   | 360 -
 .../TestReplicationScenariosExclusiveReplica.java  | 292 -
 .../hadoop/hive/ql/exec/repl/ReplDumpTask.java |   5 +-
 3 files changed, 349 insertions(+), 308 deletions(-)

diff --git 
a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
 
b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
index 5bd6ac3d362..673e41b3065 100644
--- 
a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
+++ 
b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
@@ -23,14 +23,11 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.QuotaUsage;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hive.conf.HiveConf;
-import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
-import org.apache.hadoop.hive.metastore.InjectableBehaviourObjectStore;
-import org.apache.hadoop.hive.metastore.api.CurrentNotificationEventId;
-import org.apache.hadoop.hive.metastore.api.NotificationEvent;
-import org.apache.hadoop.hive.metastore.api.NotificationEventResponse;
+import org.apache.hadoop.hive.metastore.api.AbortTxnsRequest;
 import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
-import org.apache.hadoop.hive.metastore.messaging.event.filters.DatabaseAndTableFilter;
 import org.apache.hadoop.hive.metastore.messaging.json.gzip.GzipJSONMessageEncoder;
+import org.apache.hadoop.hive.metastore.txn.TxnStore;
+import org.apache.hadoop.hive.metastore.txn.TxnUtils;
 import org.apache.hadoop.hive.ql.exec.repl.util.ReplUtils;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -71,7 +68,7 @@ import static org.junit.Assert.assertNotEquals;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
-public class TestReplicationOptimisedBootstrap extends BaseReplicationAcrossInstances {
+public class TestReplicationOptimisedBootstrap extends BaseReplicationScenariosAcidTables {
 
   String extraPrimaryDb;
 
@@ -84,8 +81,9 @@ public class TestReplicationOptimisedBootstrap extends BaseReplicationAcrossInst
 overrides.put(HiveConf.ConfVars.REPL_INCLUDE_EXTERNAL_TABLES.varname, "true");
 overrides.put(HiveConf.ConfVars.HIVE_DISTCP_DOAS_USER.varname, UserGroupInformation.getCurrentUser().getUserName());
 overrides.put(HiveConf.ConfVars.REPL_RUN_DATA_COPY_TASKS_ON_TARGET.varname, "true");
-
-internalBeforeClassSetupExclusiveReplica(overrides, overrides, TestReplicationOptimisedBootstrap.class);
+overrides.put("hive.repl.bootstrap.dump.open.txn.timeout", "1s");
+overrides.put("hive.in.repl.test", "true");
+internalBeforeClassSetup(overrides, TestReplicationOptimisedBootstrap.class);
   }
 
   @Before
@@ -112,7 +110,8 @@ public class TestReplicationOptimisedBootstrap extends BaseReplicationAcrossInst
 .run("create external table t2 (place string) partitioned by (country string)")
 .run("insert into table t2 partition(country='india') values ('chennai')")
 .run("insert into table t2 partition(country='us') values ('new york')")
-.run("create table t1_managed (id int)")
+.run("create table t1_managed (id int) clustered by(id) into 3 buckets stored as orc " +
+"tblproperties (\"transactional\"=\"true\")")
 .run("insert into table t1_managed values (10)")
 .run("insert into table t1_managed values (20),(31),(42)")
 .run("create table t2_managed (place string) partitioned by (country string)")
@@ -125,14 +124,8 @@ public class TestReplicationOptimisedBootstrap extends BaseReplicationAcrossInst
 .run("repl status " + replicatedDbName)
 .verifyResult(tuple.lastReplicationId)
 .run("use " + replicatedDbName)
-.run("show tables like 't1'")
-.verifyRe

[hive] branch master updated: Disable flaky test

2022-06-09 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new ac3e8bae3a6 Disable flaky test
ac3e8bae3a6 is described below

commit ac3e8bae3a62e9ad08471aa13df47c9e8667e8c2
Author: Peter Vary 
AuthorDate: Thu Jun 9 18:10:19 2022 +0200

Disable flaky test
---
 .../hadoop/hive/ql/exec/tez/TestHostAffinitySplitLocationProvider.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestHostAffinitySplitLocationProvider.java b/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestHostAffinitySplitLocationProvider.java
index e40a0a6bf9a..727ff3e1a39 100644
--- a/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestHostAffinitySplitLocationProvider.java
+++ b/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestHostAffinitySplitLocationProvider.java
@@ -168,7 +168,7 @@ public class TestHostAffinitySplitLocationProvider {
 return locations;
   }
 
-
+  @org.junit.Ignore("HIVE-26308")
   @Test (timeout = 2)
   public void testConsistentHashingFallback() throws IOException {
 final int LOC_COUNT_TO = 20, SPLIT_COUNT = 500, MAX_MISS_COUNT = 4,



[hive] branch master updated: HIVE-26285: Overwrite database metadata on original source in optimised failover. (Haymant Mangla reviewed by Denys Kuzmenko and Peter Vary) (#3346)

2022-06-09 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new bd8e4052066 HIVE-26285: Overwrite database metadata on original source 
in optimised failover. (Haymant Mangla reviewed by Denys Kuzmenko and Peter 
Vary) (#3346)
bd8e4052066 is described below

commit bd8e4052066e0ea9294defd6d4e87094c667b846
Author: Haymant Mangla <79496857+hmangl...@users.noreply.github.com>
AuthorDate: Thu Jun 9 13:11:05 2022 +0530

HIVE-26285: Overwrite database metadata on original source in optimised 
failover. (Haymant Mangla reviewed by Denys Kuzmenko and Peter Vary) (#3346)
---
 .../parse/TestReplicationOptimisedBootstrap.java   | 13 +-
 .../hadoop/hive/ql/exec/repl/ReplDumpTask.java |  5 +--
 .../hadoop/hive/ql/exec/repl/ReplLoadTask.java | 50 +++---
 3 files changed, 57 insertions(+), 11 deletions(-)

diff --git a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
index 5ccd74f3708..5bd6ac3d362 100644
--- a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
+++ b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOptimisedBootstrap.java
@@ -753,11 +753,13 @@ public class TestReplicationOptimisedBootstrap extends BaseReplicationAcrossInst
 
 // Do a reverse second dump, this should do a bootstrap dump for the tables in the table_diff and incremental for
 // rest.
+
+assertTrue("value1".equals(primary.getDatabase(primaryDbName).getParameters().get("key1")));
 WarehouseInstance.Tuple tuple = replica.dump(replicatedDbName, withClause);
 
 String hiveDumpDir = tuple.dumpLocation + File.separator + ReplUtils.REPL_HIVE_BASE_DIR;
 // _bootstrap directory should be created as bootstrap enabled on external tables.
-Path dumpPath1 = new Path(hiveDumpDir, INC_BOOTSTRAP_ROOT_DIR_NAME +"/metadata/" + replicatedDbName);
+Path dumpPath1 = new Path(hiveDumpDir, INC_BOOTSTRAP_ROOT_DIR_NAME +"/" + EximUtil.METADATA_PATH_NAME +"/" + replicatedDbName);
 FileStatus[] listStatus = dumpPath1.getFileSystem(conf).listStatus(dumpPath1);
 ArrayList<String> tablesBootstrapped = new ArrayList<>();
 for (FileStatus file : listStatus) {
@@ -769,6 +771,8 @@ public class TestReplicationOptimisedBootstrap extends BaseReplicationAcrossInst
 // Do a reverse load, this should do a bootstrap load for the tables in table_diff and incremental for the rest.
 primary.load(primaryDbName, replicatedDbName, withClause);
 
+assertFalse("value1".equals(primary.getDatabase(primaryDbName).getParameters().get("key1")));
+
 primary.run("use " + primaryDbName)
 .run("select id from t1")
 .verifyResults(new String[] { "1", "2", "3", "4", "101", "210", "321" })
@@ -898,6 +902,8 @@ public class TestReplicationOptimisedBootstrap extends BaseReplicationAcrossInst
 
 // Check the properties on the new target database.
 assertTrue(targetParams.containsKey(TARGET_OF_REPLICATION));
+assertTrue(targetParams.containsKey(CURR_STATE_ID_TARGET.toString()));
+assertTrue(targetParams.containsKey(CURR_STATE_ID_SOURCE.toString()));
 assertFalse(targetParams.containsKey(SOURCE_OF_REPLICATION));
 
 // Check the properties on the new source database.
@@ -1096,7 +1102,10 @@ public class TestReplicationOptimisedBootstrap extends BaseReplicationAcrossInst
 // Do some modifications on original source cluster. The diff becomes(tnew_managed, t1, t2, t3)
 primary.run("use " + primaryDbName).run("create table tnew_managed (id int)")
 .run("insert into table t1 values (25)").run("insert into table tnew_managed values (110)")
-.run("insert into table t2 partition(country='france') values ('lyon')").run("drop table t3");
+.run("insert into table t2 partition(country='france') values ('lyon')").run("drop table t3")
+.run("alter database "+ primaryDbName + " set DBPROPERTIES ('key1'='value1')");
+
+assertTrue("value1".equals(primary.getDatabase(primaryDbName).getParameters().get("key1")));
 
 // Do some modifications on the target cluster. (t1, t2, t3: bootstrap & t4, t5: incremental)
 replica.run("use " + replicatedDbName).run("insert into table t1 values (101)")
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplDumpTask.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/re

[hive] branch master updated: HIVE-25907: IOW Directory queries fails to write data to final path when query result cache is enabled (#2978) (Syed Shameerur Rahman reviewed by Peter Vary)

2022-06-01 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 6626b5564ee HIVE-25907: IOW Directory queries fails to write data to 
final path when query result cache is enabled (#2978) (Syed Shameerur Rahman 
reviewed by Peter Vary)
6626b5564ee is described below

commit 6626b5564ee206db5a656d2f611ed71f10a0ffc1
Author: Syed Shameerur Rahman 
AuthorDate: Wed Jun 1 14:48:37 2022 +0530

HIVE-25907: IOW Directory queries fails to write data to final path when 
query result cache is enabled (#2978) (Syed Shameerur Rahman reviewed by Peter 
Vary)
---
 .../apache/hadoop/hive/ql/parse/QBParseInfo.java   |  11 +
 .../hadoop/hive/ql/parse/SemanticAnalyzer.java |   6 +
 .../clientpositive/insert_overwrite_directory.q|  10 +
 .../llap/insert_overwrite_directory.q.out  | 514 +
 4 files changed, 541 insertions(+)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/QBParseInfo.java b/ql/src/java/org/apache/hadoop/hive/ql/parse/QBParseInfo.java
index 3e5a5f18853..7f617774a33 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/QBParseInfo.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/QBParseInfo.java
@@ -127,6 +127,9 @@ public class QBParseInfo {
   // used by Windowing
   private final Map> destToWindowingExprs;
 
+  // is the query insert overwrite directory
+  private boolean isInsertOverwriteDir = false;
+
 
   @SuppressWarnings("unused")
  private static final Logger LOG = LoggerFactory.getLogger(QBParseInfo.class.getName());
@@ -223,6 +226,10 @@ public class QBParseInfo {
 return insertIntoTables.containsKey(fullTableName.toLowerCase());
   }
 
+  public void setInsertOverwriteDirectory(boolean isInsertOverwriteDir) {
+this.isInsertOverwriteDir = isInsertOverwriteDir;
+  }
+
   public Map getAggregationExprsForClause(String clause) {
 return destToAggregationExprs.get(clause);
   }
@@ -701,6 +708,10 @@ public class QBParseInfo {
 return this.insertIntoTables.size() > 0 || this.insertOverwriteTables.size() > 0;
   }
 
+  public boolean isInsertOverwriteDirectory() {
+return isInsertOverwriteDir;
+  }
+
   /**
 * Check whether all the expressions in the select clause are aggregate function calls.
 * This method starts iterating through the AST nodes representing the expressions in the select clause stored in
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java b/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
index 5cfcb71c882..618c60baa77 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
@@ -1683,6 +1683,9 @@ public class SemanticAnalyzer extends BaseSemanticAnalyzer {
   && ch.getChild(0) instanceof ASTNode) {
 ch = (ASTNode) ch.getChild(0);
 isTmpFileDest = ch.getToken().getType() == HiveParser.TOK_TMP_FILE;
+if (ch.getToken().getType() == HiveParser.StringLiteral) {
+  qbp.setInsertOverwriteDirectory(true);
+}
   } else {
 if (ast.getToken().getType() == HiveParser.TOK_DESTINATION
 && ast.getChild(0).getType() == HiveParser.TOK_TAB) {
@@ -15231,6 +15234,9 @@ public class SemanticAnalyzer extends BaseSemanticAnalyzer {
 if (qb.getParseInfo().hasInsertTables()) {
   return false;
 }
+if (qb.getParseInfo().isInsertOverwriteDirectory()) {
+  return false;
+}
 
 // HIVE-19096 - disable for explain analyze
 return ctx.getExplainAnalyze() == null;
diff --git a/ql/src/test/queries/clientpositive/insert_overwrite_directory.q b/ql/src/test/queries/clientpositive/insert_overwrite_directory.q
index 15a00f37089..7b2f0ed2b9c 100644
--- a/ql/src/test/queries/clientpositive/insert_overwrite_directory.q
+++ b/ql/src/test/queries/clientpositive/insert_overwrite_directory.q
@@ -124,6 +124,15 @@ select key,value from rctable;
 
 dfs -cat ../../data/files/rctable_out/00_0;
 
+-- test iow directory when query result cache is enabled
+set hive.query.results.cache.enabled=true;
+insert overwrite directory '../../data/files/iowd_out'
+ROW FORMAT DELIMITED
+FIELDS TERMINATED BY '\t'
+select key,value from rctable;
+
+dfs -cat ../../data/files/iowd_out/00_0;
+
 drop table rctable;
 drop table array_table_n1;
 drop table map_table_n2;
@@ -140,3 +149,4 @@ dfs -rmr ../../data/files/rctable;
 dfs -rmr ../../data/files/rctable_out;
 dfs -rmr ../../data/files/src_table_1;
 dfs -rmr ../../data/files/src_table_2;
+dfs -rmr ../../data/files/iowd_out;
diff --git a/ql/src/test/results/clientpositive/llap/insert_overwrite_directory.q.out b/ql/src/test/results/clientpositive/llap/insert_overwrite_directory.q.out
index 94483de590e..b8b3cf93b

[hive] branch master updated: HIVE-22670: ArrayIndexOutOfBoundsException when vectorized reader is (#3328) (Ganesha Shreedhara and Abhay Chennagiri reviewed by Peter Vary)

2022-05-31 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 937b165d908 HIVE-22670: ArrayIndexOutOfBoundsException when vectorized 
reader is (#3328) (Ganesha Shreedhara and Abhay Chennagiri reviewed by Peter 
Vary)
937b165d908 is described below

commit 937b165d908229d6b01f3ffaa064cf442de1d9ec
Author: achennagiri <77031092+achennag...@users.noreply.github.com>
AuthorDate: Tue May 31 00:05:49 2022 -0700

HIVE-22670: ArrayIndexOutOfBoundsException when vectorized reader is 
(#3328) (Ganesha Shreedhara and Abhay Chennagiri reviewed by Peter Vary)
---
 data/files/hive22670.parquet   | Bin 0 -> 737 bytes
 .../vector/VectorizedPrimitiveColumnReader.java| 134 +
 .../clientpositive/parquet_vectorization_18.q  |  24 
 .../llap/parquet_vectorization_18.q.out|  74 
 4 files changed, 179 insertions(+), 53 deletions(-)

diff --git a/data/files/hive22670.parquet b/data/files/hive22670.parquet
new file mode 100644
index 000..2700b6fb711
Binary files /dev/null and b/data/files/hive22670.parquet differ
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedPrimitiveColumnReader.java b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedPrimitiveColumnReader.java
index bb08c278668..db52d6a2964 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedPrimitiveColumnReader.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedPrimitiveColumnReader.java
@@ -521,31 +521,37 @@ public class VectorizedPrimitiveColumnReader extends BaseVectorizedColumnReader
 switch (primitiveColumnType.getPrimitiveCategory()) {
 case INT:
   for (int i = rowId; i < rowId + num; ++i) {
-((LongColumnVector) column).vector[i] =
-dictionary.readInteger((int) dictionaryIds.vector[i]);
-if (!dictionary.isValid()) {
-  setNullValue(column, i);
-  ((LongColumnVector) column).vector[i] = 0;
+if (!column.isNull[i]) {
+  ((LongColumnVector) column).vector[i] =
+  dictionary.readInteger((int) dictionaryIds.vector[i]);
+  if (!dictionary.isValid()) {
+setNullValue(column, i);
+((LongColumnVector) column).vector[i] = 0;
+  }
 }
   }
   break;
 case BYTE:
   for (int i = rowId; i < rowId + num; ++i) {
-((LongColumnVector) column).vector[i] =
-dictionary.readTinyInt((int) dictionaryIds.vector[i]);
-if (!dictionary.isValid()) {
-  setNullValue(column, i);
-  ((LongColumnVector) column).vector[i] = 0;
+if (!column.isNull[i]) {
+  ((LongColumnVector) column).vector[i] =
+  dictionary.readTinyInt((int) dictionaryIds.vector[i]);
+  if (!dictionary.isValid()) {
+setNullValue(column, i);
+((LongColumnVector) column).vector[i] = 0;
+  }
 }
   }
   break;
 case SHORT:
   for (int i = rowId; i < rowId + num; ++i) {
-((LongColumnVector) column).vector[i] =
-dictionary.readSmallInt((int) dictionaryIds.vector[i]);
-if (!dictionary.isValid()) {
-  setNullValue(column, i);
-  ((LongColumnVector) column).vector[i] = 0;
+if (!column.isNull[i]) {
+  ((LongColumnVector) column).vector[i] =
+  dictionary.readSmallInt((int) dictionaryIds.vector[i]);
+  if (!dictionary.isValid()) {
+setNullValue(column, i);
+((LongColumnVector) column).vector[i] = 0;
+  }
 }
   }
   break;
@@ -553,74 +559,92 @@ public class VectorizedPrimitiveColumnReader extends BaseVectorizedColumnReader
   DateColumnVector dc = (DateColumnVector) column;
   dc.setUsingProlepticCalendar(true);
   for (int i = rowId; i < rowId + num; ++i) {
-dc.vector[i] =
-skipProlepticConversion ?
-dictionary.readLong((int) dictionaryIds.vector[i]) :
-CalendarUtils.convertDateToProleptic((int) dictionary.readLong((int) dictionaryIds.vector[i]));
-if (!dictionary.isValid()) {
-  setNullValue(column, i);
-  dc.vector[i] = 0;
+if (!column.isNull[i]) {
+  dc.vector[i] =
+  skipProlepticConversion ?
+  dictionary.readLong((int) dictionaryIds.vector[i]) :
+  CalendarUtils.convertDateToProleptic((int) dictionary.readLong((int) dictionaryIds.vector[i]));
+  if (!dictionary.isValid()) {
+setNullValue(column, i);
+dc.vector[i] = 0;
+  }
 }
   }
   break;
 case INTERVAL_YEAR_MONTH:
 case LONG:
   for (int i = r
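
The fix above wraps each dictionary read in an `if (!column.isNull[i])` guard: for null rows the dictionary-id vector can hold arbitrary, out-of-range indexes, so decoding them blindly throws `ArrayIndexOutOfBoundsException`. A minimal standalone sketch of the pattern, using hypothetical `dictionary`/`ids`/`isNull` arrays rather than Hive's actual reader classes:

```java
import java.util.Arrays;

public class DictionaryNullGuardDemo {
    // Decode dictionary-encoded values into a plain vector, skipping null slots.
    // Null positions may carry uninitialized (out-of-range) dictionary ids, so
    // reading them without the guard would throw ArrayIndexOutOfBoundsException.
    static long[] decode(long[] dictionary, int[] ids, boolean[] isNull) {
        long[] vector = new long[ids.length];
        for (int i = 0; i < ids.length; i++) {
            if (!isNull[i]) {          // the guard added by HIVE-22670
                vector[i] = dictionary[ids[i]];
            }                          // null slots stay 0 and are masked by isNull
        }
        return vector;
    }

    public static void main(String[] args) {
        long[] dictionary = {10L, 20L};
        int[] ids = {0, 999, 1};       // id 999 is garbage left in a null row
        boolean[] isNull = {false, true, false};
        System.out.println(Arrays.toString(decode(dictionary, ids, isNull))); // prints [10, 0, 20]
    }
}
```

Without the `isNull` check, the second iteration would index `dictionary[999]` and crash, which mirrors the reported failure mode.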

[hive] branch master updated: HIVE-26233: Problems reading back PARQUET timestamps above 10000 years (#3295) (Peter Vary reviewed by Stamatis Zampetakis) (Addendum)

2022-05-26 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 264428c3cf8 HIVE-26233: Problems reading back PARQUET timestamps above 10000 years (#3295) (Peter Vary reviewed by Stamatis Zampetakis) (Addendum)
264428c3cf8 is described below

commit 264428c3cf82b9c73da000b7215bc66038fe6836
Author: Peter Vary 
AuthorDate: Thu May 26 17:22:15 2022 +0200

HIVE-26233: Problems reading back PARQUET timestamps above 10000 years (#3295) (Peter Vary reviewed by Stamatis Zampetakis) (Addendum)
---
 .../ql/io/parquet/serde/TestParquetTimestampsHive2Compatibility.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ql/src/test/org/apache/hadoop/hive/ql/io/parquet/serde/TestParquetTimestampsHive2Compatibility.java b/ql/src/test/org/apache/hadoop/hive/ql/io/parquet/serde/TestParquetTimestampsHive2Compatibility.java
index 71c3304f842..db06e43d157 100644
--- a/ql/src/test/org/apache/hadoop/hive/ql/io/parquet/serde/TestParquetTimestampsHive2Compatibility.java
+++ b/ql/src/test/org/apache/hadoop/hive/ql/io/parquet/serde/TestParquetTimestampsHive2Compatibility.java
@@ -175,7 +175,7 @@ class TestParquetTimestampsHive2Compatibility {
 // Exclude dates falling in the default Gregorian change date since legacy code does not handle that interval
 // gracefully. It is expected that these do not work well when legacy APIs are in use.
 .filter(s -> !s.startsWith("1582-10"))
-.limit(3), Stream.of("-12-31 23:59:59.999"));
+.limit(3000), Stream.of("-12-31 23:59:59.999"));
   }
 
   private static int digits(int number) {



[hive] branch master updated: HIVE-26233: Problems reading back PARQUET timestamps above 10000 years (#3295) (Peter Vary reviewed by Stamatis Zampetakis)

2022-05-26 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new eb3bfe9e310 HIVE-26233: Problems reading back PARQUET timestamps above 10000 years (#3295) (Peter Vary reviewed by Stamatis Zampetakis)
eb3bfe9e310 is described below

commit eb3bfe9e31054b7e203f32bde128dcae2556928a
Author: pvary 
AuthorDate: Thu May 26 17:15:35 2022 +0200

HIVE-26233: Problems reading back PARQUET timestamps above 10000 years (#3295) (Peter Vary reviewed by Stamatis Zampetakis)
---
 .../hadoop/hive/common/type/TimestampTZUtil.java   | 20 +++-
 .../TestParquetTimestampsHive2Compatibility.java   | 22 --
 2 files changed, 39 insertions(+), 3 deletions(-)

diff --git a/common/src/java/org/apache/hadoop/hive/common/type/TimestampTZUtil.java b/common/src/java/org/apache/hadoop/hive/common/type/TimestampTZUtil.java
index 1853d4c569d..e71e0e85228 100644
--- a/common/src/java/org/apache/hadoop/hive/common/type/TimestampTZUtil.java
+++ b/common/src/java/org/apache/hadoop/hive/common/type/TimestampTZUtil.java
@@ -31,6 +31,7 @@ import java.time.ZonedDateTime;
 import java.time.format.DateTimeFormatter;
 import java.time.format.DateTimeFormatterBuilder;
 import java.time.format.DateTimeParseException;
+import java.time.format.SignStyle;
 import java.time.format.TextStyle;
 import java.time.temporal.ChronoField;
 import java.time.temporal.TemporalAccessor;
@@ -43,6 +44,13 @@ import org.apache.hive.common.util.DateUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static java.time.temporal.ChronoField.DAY_OF_MONTH;
+import static java.time.temporal.ChronoField.HOUR_OF_DAY;
+import static java.time.temporal.ChronoField.MINUTE_OF_HOUR;
+import static java.time.temporal.ChronoField.MONTH_OF_YEAR;
+import static java.time.temporal.ChronoField.SECOND_OF_MINUTE;
+import static java.time.temporal.ChronoField.YEAR;
+
 public class TimestampTZUtil {
 
   private static final Logger LOG = LoggerFactory.getLogger(TimestampTZ.class);
@@ -50,6 +58,16 @@ public class TimestampTZUtil {
   private static final LocalTime DEFAULT_LOCAL_TIME = LocalTime.of(0, 0);
  private static final Pattern SINGLE_DIGIT_PATTERN = Pattern.compile("[\\+-]\\d:\\d\\d");
 
+  private static final DateTimeFormatter TIMESTAMP_FORMATTER = new DateTimeFormatterBuilder()
+  // Date and Time Parts
+  .appendValue(YEAR, 4, 10, SignStyle.NORMAL).appendLiteral('-').appendValue(MONTH_OF_YEAR, 2, 2, SignStyle.NORMAL)
+  .appendLiteral('-').appendValue(DAY_OF_MONTH, 2, 2, SignStyle.NORMAL)
+  .appendLiteral(" ").appendValue(HOUR_OF_DAY, 2, 2, SignStyle.NORMAL).appendLiteral(':')
+  .appendValue(MINUTE_OF_HOUR, 2, 2, SignStyle.NORMAL).appendLiteral(':')
+  .appendValue(SECOND_OF_MINUTE, 2, 2, SignStyle.NORMAL)
+  // Fractional Part (Optional)
+  .optionalStart().appendFraction(ChronoField.NANO_OF_SECOND, 0, 9, true).optionalEnd().toFormatter();
+
   static final DateTimeFormatter FORMATTER;
   static {
 DateTimeFormatterBuilder builder = new DateTimeFormatterBuilder();
@@ -168,7 +186,7 @@ public class TimestampTZUtil {
   try {
 DateFormat formatter = getLegacyDateFormatter();
 formatter.setTimeZone(TimeZone.getTimeZone(fromZone));
-java.util.Date date = formatter.parse(ts.toString());
+java.util.Date date = formatter.parse(ts.format(TIMESTAMP_FORMATTER));
 // Set the formatter to use a different timezone
 formatter.setTimeZone(TimeZone.getTimeZone(toZone));
 Timestamp result = Timestamp.valueOf(formatter.format(date));
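
The hand-built formatter replaces `ts.toString()` because default java.time formatting prints a leading `+` once the year goes beyond four digits, and the legacy `DateFormat`-based parser cannot consume that sign. A self-contained illustration using plain java.time (class and field names here are illustrative, not Hive's):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.format.SignStyle;
import static java.time.temporal.ChronoField.*;

public class WideYearFormatDemo {
    // Mirrors the shape of TIMESTAMP_FORMATTER above: a 4-to-10 digit year with
    // SignStyle.NORMAL, so year 10000 is printed without a leading '+'.
    static final DateTimeFormatter F = new DateTimeFormatterBuilder()
        .appendValue(YEAR, 4, 10, SignStyle.NORMAL).appendLiteral('-')
        .appendValue(MONTH_OF_YEAR, 2).appendLiteral('-')
        .appendValue(DAY_OF_MONTH, 2).appendLiteral(' ')
        .appendValue(HOUR_OF_DAY, 2).appendLiteral(':')
        .appendValue(MINUTE_OF_HOUR, 2).appendLiteral(':')
        .appendValue(SECOND_OF_MINUTE, 2).toFormatter();

    public static void main(String[] args) {
        LocalDateTime ts = LocalDateTime.of(10000, 1, 1, 0, 0, 0);
        System.out.println(ts);           // ISO toString prints a sign: +10000-01-01T00:00
        System.out.println(ts.format(F)); // prints 10000-01-01 00:00:00
    }
}
```

The signed ISO form is exactly what a `SimpleDateFormat`-style `yyyy-MM-dd HH:mm:ss` pattern chokes on, which is why the commit formats the value explicitly before handing it to the legacy parser.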
diff --git a/ql/src/test/org/apache/hadoop/hive/ql/io/parquet/serde/TestParquetTimestampsHive2Compatibility.java b/ql/src/test/org/apache/hadoop/hive/ql/io/parquet/serde/TestParquetTimestampsHive2Compatibility.java
index 733964a3183..71c3304f842 100644
--- a/ql/src/test/org/apache/hadoop/hive/ql/io/parquet/serde/TestParquetTimestampsHive2Compatibility.java
+++ b/ql/src/test/org/apache/hadoop/hive/ql/io/parquet/serde/TestParquetTimestampsHive2Compatibility.java
@@ -79,6 +79,24 @@ class TestParquetTimestampsHive2Compatibility {
 assertEquals(timestampString, ts.toString());
   }
 
+  /**
+   * Tests that timestamps written using Hive2 APIs are read correctly by Hive4 APIs when legacy conversion is on.
+   */
+  @ParameterizedTest(name = "{0}")
+  @MethodSource("generateTimestamps")
+  void testWriteHive2ReadHive4UsingLegacyConversionWithZone(String timestampString) {
+TimeZone original = TimeZone.getDefault();
+try {
+  String zoneId = "US/Pacific";
+  TimeZone.setDefault(TimeZone.getTimeZone(zoneId));
+  NanoTime nt = writeHive2(timestampString);
+  Timestamp ts = readHive4(nt, zoneId, true);
+  assertEquals(timestampString, ts.toStri

[hive] branch master updated: HIVE-26261: Fix some issues with Spark engine removal (#3320) (Peter Vary reviewed by Zoltan Haindrich)

2022-05-26 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new bd7b8a10336 HIVE-26261: Fix some issues with Spark engine removal 
(#3320) (Peter Vary reviewed by Zoltan Haindrich)
bd7b8a10336 is described below

commit bd7b8a1033645fa780509e8d2a6943578e1f4e08
Author: pvary 
AuthorDate: Thu May 26 10:43:01 2022 +0200

HIVE-26261: Fix some issues with Spark engine removal (#3320) (Peter Vary 
reviewed by Zoltan Haindrich)
---
 .../ql/exec/persistence/MapJoinTableContainerSerDe.java| 14 --
 .../authorization/command/CommandAuthorizerV2.java |  9 +
 2 files changed, 9 insertions(+), 14 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinTableContainerSerDe.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinTableContainerSerDe.java
index 514a8c92fb5..6f675f44a23 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinTableContainerSerDe.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinTableContainerSerDe.java
@@ -170,18 +170,4 @@ public class MapJoinTableContainerSerDe {
   throw new HiveException(msg, e);
 }
   }
-
-  // Get an empty container when the small table is empty.
-  private static MapJoinTableContainer getDefaultEmptyContainer(Configuration hconf,
-  MapJoinObjectSerDeContext keyCtx, MapJoinObjectSerDeContext valCtx) throws SerDeException {
-boolean useOptimizedContainer = HiveConf.getBoolVar(
-hconf, HiveConf.ConfVars.HIVEMAPJOINUSEOPTIMIZEDTABLE);
-if (useOptimizedContainer) {
-  return new MapJoinBytesTableContainer(hconf, valCtx, -1, 0);
-}
-MapJoinTableContainer container = new HashMapWrapper();
-container.setSerde(keyCtx, valCtx);
-container.seal();
-return container;
-  }
 }
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/command/CommandAuthorizerV2.java b/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/command/CommandAuthorizerV2.java
index 114d9b3186a..13281980cc1 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/command/CommandAuthorizerV2.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/command/CommandAuthorizerV2.java
@@ -158,6 +158,15 @@ final class CommandAuthorizerV2 {
 if (TableType.MATERIALIZED_VIEW.name().equals(tableType) || TableType.VIRTUAL_VIEW.name().equals(tableType)) {
   isView = true;
 }
+if (isView) {
+  Map<String, String> params = t.getParameters();
+  if (params != null && params.containsKey(authorizedKeyword)) {
+String authorizedValue = params.get(authorizedKeyword);
+if ("false".equalsIgnoreCase(authorizedValue)) {
+  return true;
+}
+  }
+}
 return false;
   }
 



[hive] branch master updated: HIVE-26235: OR Condition on binary column is returning empty result (#3305) (Peter Vary, reviewed by Laszlo Bodor)

2022-05-25 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new a22ba9dafad HIVE-26235: OR Condition on binary column is returning 
empty result (#3305) (Peter Vary, reviewed by Laszlo Bodor)
a22ba9dafad is described below

commit a22ba9dafad4bfda0c5c0d2c63eaf83293d6fd64
Author: pvary 
AuthorDate: Thu May 26 06:42:34 2022 +0200

HIVE-26235: OR Condition on binary column is returning empty result (#3305) 
(Peter Vary, reviewed by Laszlo Bodor)
---
 .../hadoop/hive/ql/udf/generic/GenericUDFIn.java   | 22 --
 ql/src/test/queries/clientpositive/udf_in_binary.q |  8 +++
 .../clientpositive/llap/udf_in_binary.q.out| 79 ++
 3 files changed, 103 insertions(+), 6 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFIn.java b/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFIn.java
index 0a2ae14502f..24852e1b728 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFIn.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFIn.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.hive.ql.udf.generic;
 
+import java.nio.ByteBuffer;
 import java.util.HashSet;
 import java.util.Set;
 
@@ -26,6 +27,7 @@ import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
 import org.apache.hadoop.hive.ql.exec.UDFArgumentLengthException;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
 import org.apache.hadoop.hive.ql.udf.generic.GenericUDFUtils.ReturnObjectInspectorResolver;
+import org.apache.hadoop.hive.serde.serdeConstants;
 import org.apache.hadoop.hive.serde2.objectinspector.ConstantObjectInspector;
 import org.apache.hadoop.hive.serde2.objectinspector.ListObjectInspector;
 import org.apache.hadoop.hive.serde2.objectinspector.MapObjectInspector;
@@ -123,9 +125,14 @@ public class GenericUDFIn extends GenericUDF {
 constantInSet = new HashSet();
 if (compareOI.getCategory().equals(ObjectInspector.Category.PRIMITIVE)) {
   for (int i = 1; i < arguments.length; ++i) {
-constantInSet.add(((PrimitiveObjectInspector) compareOI)
+Object constant = ((PrimitiveObjectInspector) compareOI)
 .getPrimitiveJavaObject(conversionHelper
-.convertIfNecessary(arguments[i].get(), argumentOIs[i])));
+.convertIfNecessary(arguments[i].get(), argumentOIs[i]));
+if (compareOI.getTypeName().equals(serdeConstants.BINARY_TYPE_NAME)) {
+  constantInSet.add(ByteBuffer.wrap((byte[]) constant));
+} else {
+  constantInSet.add(constant);
+}
   }
 } else {
   for (int i = 1; i < arguments.length; ++i) {
@@ -148,9 +155,13 @@ public class GenericUDFIn extends GenericUDF {
   }
   switch (compareOI.getCategory()) {
   case PRIMITIVE: {
-if (constantInSet.contains(((PrimitiveObjectInspector) compareOI)
-.getPrimitiveJavaObject(conversionHelper.convertIfNecessary(arguments[0].get(),
-argumentOIs[0])))) {
+Object arg = ((PrimitiveObjectInspector) compareOI)
+.getPrimitiveJavaObject(conversionHelper.convertIfNecessary(arguments[0].get(),
+argumentOIs[0]));
+if (compareOI.getTypeName().equals(serdeConstants.BINARY_TYPE_NAME)) {
+  arg = ByteBuffer.wrap((byte[]) arg);
+}
+if (constantInSet.contains(arg)) {
   bw.set(true);
   return bw;
 }
@@ -226,5 +237,4 @@ public class GenericUDFIn extends GenericUDF {
 sb.append(")");
 return sb.toString();
   }
-
 }
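The ByteBuffer wrapping above is needed because Java arrays inherit identity-based equals/hashCode from Object, so a HashSet can never match a byte[] by content. A minimal standalone sketch of the difference (illustrative only, not Hive code):

```java
import java.nio.ByteBuffer;
import java.util.HashSet;
import java.util.Set;

public class BinaryInSetDemo {
    // byte[] membership fails: arrays use identity equals/hashCode,
    // so a freshly materialized binary value never matches the stored one.
    static boolean rawContains(byte[] needle) {
        Set<Object> set = new HashSet<>();
        set.add(new byte[] {0x61});          // 'a'
        return set.contains(needle);
    }

    // ByteBuffer.wrap gives content-based equals/hashCode,
    // mirroring the GenericUDFIn fix for BINARY_TYPE_NAME.
    static boolean wrappedContains(byte[] needle) {
        Set<Object> set = new HashSet<>();
        set.add(ByteBuffer.wrap(new byte[] {0x61}));
        return set.contains(ByteBuffer.wrap(needle));
    }

    public static void main(String[] args) {
        System.out.println(rawContains(new byte[] {0x61}));     // false
        System.out.println(wrappedContains(new byte[] {0x61})); // true
    }
}
```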
diff --git a/ql/src/test/queries/clientpositive/udf_in_binary.q 
b/ql/src/test/queries/clientpositive/udf_in_binary.q
new file mode 100644
index 000..a27dcb586cb
--- /dev/null
+++ b/ql/src/test/queries/clientpositive/udf_in_binary.q
@@ -0,0 +1,8 @@
+create table test_binary(data_col timestamp, binary_col binary) partitioned by 
(ts string);
+insert into test_binary partition(ts='20220420') values ('2022-04-20 
00:00:00.0', 'a'),
+('2022-04-20 00:00:00.0', 'b'),('2022-04-20 00:00:00.0', 'c'),('2022-04-20 
00:00:00.0', NULL);
+select * from test_binary where ts='20220420' and binary_col = unhex('61');
+select * from test_binary where ts='20220420' and binary_col between 
unhex('61') and unhex('62');
+select * from test_binary where binary_col = unhex('61') or binary_col = 
unhex('62');
+select * from test_binary where ts='20220420' and (binary_col = 
unhex('61') or binary_col = unhex('62'));
+select * from test_binary where binary_col in (unhex('61'), unhex('62'));
diff --git a/ql/src/test/results/clientpositive/llap/udf_in_binary.q.out 
b/ql/src/test/results/clientpositive/llap/udf_in_binary.q.out
new file mode 100644
index 000..0cd8876219e
--- /dev/null
+++ b/ql/src/test/results/cli

[hive] branch master updated: Disable flaky tests

2022-05-24 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 147b46e8cd4 Disable flaky tests
147b46e8cd4 is described below

commit 147b46e8cd4b9c934817eb45d8e501b6611cfff6
Author: Peter Vary 
AuthorDate: Tue May 24 15:41:42 2022 +0200

Disable flaky tests
---
 .../hadoop/hive/ql/parse/repl/metric/TestReplicationMetricSink.java | 2 ++
 1 file changed, 2 insertions(+)

diff --git 
a/ql/src/test/org/apache/hadoop/hive/ql/parse/repl/metric/TestReplicationMetricSink.java
 
b/ql/src/test/org/apache/hadoop/hive/ql/parse/repl/metric/TestReplicationMetricSink.java
index 8c359b4fe54..1b3cfd3ddec 100644
--- 
a/ql/src/test/org/apache/hadoop/hive/ql/parse/repl/metric/TestReplicationMetricSink.java
+++ 
b/ql/src/test/org/apache/hadoop/hive/ql/parse/repl/metric/TestReplicationMetricSink.java
@@ -88,6 +88,7 @@ public class TestReplicationMetricSink {
 return deserializer.deSerializeGenericString(msg);
   }
 
+  @org.junit.Ignore("HIVE-26262")
   @Test
   public void testSuccessBootstrapDumpMetrics() throws Exception {
 ReplicationMetricCollector bootstrapDumpMetricCollector = new 
BootstrapDumpMetricCollector(
@@ -313,6 +314,7 @@ public class TestReplicationMetricSink {
 }
   }
 
+  @org.junit.Ignore("HIVE-26262")
   @Test
   public void testReplStatsInMetrics() throws HiveException, 
InterruptedException, TException {
 int origRMProgress = ReplStatsTracker.RM_PROGRESS_LENGTH;



[hive] branch master updated: HIVE-26258: Provide an option for enable locking of external tables (#3313) (Peter Vary reviewed by Denys Kuzmenko)

2022-05-24 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new dee417faa73 HIVE-26258: Provide an option for enable locking of 
external tables (#3313) (Peter Vary reviewed by Denys Kuzmenko)
dee417faa73 is described below

commit dee417faa7341b827f8289814f0aacdeeed82abb
Author: pvary 
AuthorDate: Tue May 24 13:58:15 2022 +0200

HIVE-26258: Provide an option for enable locking of external tables (#3313) 
(Peter Vary reviewed by Denys Kuzmenko)
---
 .../java/org/apache/hadoop/hive/conf/HiveConf.java |  6 
 .../org/apache/hadoop/hive/ql/io/AcidUtils.java| 15 +
 .../hadoop/hive/ql/lockmgr/TestDbTxnManager2.java  | 36 ++
 3 files changed, 51 insertions(+), 6 deletions(-)

diff --git a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
index a14872995b5..a0cd44583cc 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
@@ -3008,6 +3008,12 @@ public class HiveConf extends Configuration {
 "and hive.exec.dynamic.partition.mode (nonstrict).\n" +
 "The default DummyTxnManager replicates pre-Hive-0.13 behavior and 
provides\n" +
 "no transactions."),
+HIVE_TXN_EXT_LOCKING_ENABLED("hive.txn.ext.locking.enabled", false,
+"When enabled use standard R/W lock semantics based on 
hive.txn.strict.locking.mode for external resources,\n" +
+"e.g. INSERT will acquire lock based on 
hive.txn.strict.locking.mode\n" +
+"(exclusive if it is true, shared if that is false),\n" +
+"SELECT will acquire shared lock based on 
hive.txn.nonacid.read.locks.\n" +
+"When disabled no locks are acquired for external resources."),
 HIVE_TXN_STRICT_LOCKING_MODE("hive.txn.strict.locking.mode", true, "In 
strict mode non-ACID\n" +
 "resources use standard R/W lock semantics, e.g. INSERT will acquire 
exclusive lock.\n" +
 "In nonstrict mode, for non-ACID resources, INSERT will only acquire 
shared lock, which\n" +
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java 
b/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
index 70fa42bad66..42ab15e9625 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
@@ -2844,12 +2844,12 @@ public class AcidUtils {
 tblProps.remove(hive_metastoreConstants.TABLE_TRANSACTIONAL_PROPERTIES);
   }
 
-  private static boolean needsLock(Entity entity) {
+  private static boolean needsLock(Entity entity, boolean isExternalEnabled) {
 switch (entity.getType()) {
 case TABLE:
-  return isLockableTable(entity.getTable());
+  return isLockableTable(entity.getTable(), isExternalEnabled);
 case PARTITION:
-  return isLockableTable(entity.getPartition().getTable());
+  return isLockableTable(entity.getPartition().getTable(), 
isExternalEnabled);
 default:
   return true;
 }
@@ -2863,7 +2863,7 @@ public class AcidUtils {
 return t;
   }
 
-  private static boolean isLockableTable(Table t) {
+  private static boolean isLockableTable(Table t, boolean isExternalEnabled) {
 if (t.isTemporary()) {
   return false;
 }
@@ -2871,6 +2871,8 @@ public class AcidUtils {
 case MANAGED_TABLE:
 case MATERIALIZED_VIEW:
   return true;
+case EXTERNAL_TABLE:
+  return isExternalEnabled;
 default:
   return false;
 }
@@ -2890,6 +2892,7 @@ public class AcidUtils {
 boolean skipReadLock = !conf.getBoolVar(ConfVars.HIVE_TXN_READ_LOCKS);
 boolean skipNonAcidReadLock = 
!conf.getBoolVar(ConfVars.HIVE_TXN_NONACID_READ_LOCKS);
 boolean sharedWrite = !conf.getBoolVar(HiveConf.ConfVars.TXN_WRITE_X_LOCK);
+boolean isExternalEnabled = 
conf.getBoolVar(HiveConf.ConfVars.HIVE_TXN_EXT_LOCKING_ENABLED);
 boolean isMerge = operation == Context.Operation.MERGE;
 
 // We don't want to acquire read locks during update or delete as we'll be 
acquiring write
@@ -2898,7 +2901,7 @@ public class AcidUtils {
   .filter(input -> !input.isDummy()
 && input.needsLock()
 && !input.isUpdateOrDelete()
-&& AcidUtils.needsLock(input)
+&& AcidUtils.needsLock(input, isExternalEnabled)
 && !skipReadLock)
   .collect(Collectors.toList());
 
@@ -2959,7 +2962,7 @@ public class AcidUtils {
 for (WriteEntity output : outputs) {
   LOG.debug("output is null " + (output == null));
   if (output.getType() == Entity.Type.DFS_DIR || output.getType() == 
Entity.Type.LOCAL_D
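The documented semantics of the new hive.txn.ext.locking.enabled option can be sketched as a small decision function. The names below are hypothetical; this only illustrates the configuration description above, not Hive's actual lock planner:

```java
public class ExternalLockDemo {
    enum LockType { EXCLUSIVE, SHARED, NONE }

    // Hypothetical sketch of the documented behavior for external resources:
    // when disabled, no locks; when enabled, writes follow
    // hive.txn.strict.locking.mode and reads follow hive.txn.nonacid.read.locks.
    static LockType lockForExternal(boolean extLockingEnabled, boolean strictLocking,
                                    boolean nonAcidReadLocks, boolean isWrite) {
        if (!extLockingEnabled) {
            return LockType.NONE;    // disabled: external tables are not locked
        }
        if (isWrite) {
            return strictLocking ? LockType.EXCLUSIVE : LockType.SHARED;
        }
        return nonAcidReadLocks ? LockType.SHARED : LockType.NONE;
    }

    public static void main(String[] args) {
        System.out.println(lockForExternal(false, true, true, true));  // NONE
        System.out.println(lockForExternal(true, true, true, true));   // EXCLUSIVE
        System.out.println(lockForExternal(true, false, true, true));  // SHARED
        System.out.println(lockForExternal(true, true, true, false));  // SHARED
    }
}
```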

[hive] branch master updated: HIVE-26202: Refactor Iceberg Writers (Peter Vary reviewed by Laszlo Pinter) (#3269)

2022-05-12 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new a1906b9f00 HIVE-26202: Refactor Iceberg Writers (Peter Vary reviewed 
by Laszlo Pinter) (#3269)
a1906b9f00 is described below

commit a1906b9f00a2ac182d10951cbc5e4c30b40aadc9
Author: pvary 
AuthorDate: Thu May 12 15:53:16 2022 +0200

HIVE-26202: Refactor Iceberg Writers (Peter Vary reviewed by Laszlo Pinter) 
(#3269)
---
 .../mr/hive/HiveIcebergOutputCommitter.java|  12 +-
 .../iceberg/mr/hive/HiveIcebergOutputFormat.java   |  51 +++-
 .../apache/iceberg/mr/hive/HiveIcebergSerDe.java   |  22 ++--
 .../iceberg/mr/hive/HiveIcebergStorageHandler.java |  18 +--
 .../hive/{ => writer}/HiveFileWriterFactory.java   |   6 +-
 .../HiveIcebergBufferedDeleteWriter.java   |  30 +++--
 .../hive/{ => writer}/HiveIcebergDeleteWriter.java |  17 +--
 .../hive/{ => writer}/HiveIcebergRecordWriter.java |  15 ++-
 .../hive/{ => writer}/HiveIcebergUpdateWriter.java |  32 ++---
 .../mr/hive/{ => writer}/HiveIcebergWriter.java|   9 +-
 .../hive/{ => writer}/HiveIcebergWriterBase.java   |  39 +-
 .../iceberg/mr/hive/writer/WriterBuilder.java  | 133 +
 .../iceberg/mr/hive/writer/WriterRegistry.java |  44 +++
 .../iceberg/mr/mapreduce/IcebergInputFormat.java   |  34 --
 .../mr/hive/TestHiveIcebergOutputCommitter.java|  37 ++
 .../{ => writer}/HiveIcebergWriterTestBase.java|  14 ++-
 .../{ => writer}/TestHiveIcebergDeleteWriter.java  |  27 +
 .../{ => writer}/TestHiveIcebergUpdateWriter.java  |  37 ++
 18 files changed, 340 insertions(+), 237 deletions(-)

diff --git 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergOutputCommitter.java
 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergOutputCommitter.java
index 0ea882a450..825ee3dc39 100644
--- 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergOutputCommitter.java
+++ 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergOutputCommitter.java
@@ -61,6 +61,8 @@ import org.apache.iceberg.io.FileIO;
 import org.apache.iceberg.io.OutputFile;
 import org.apache.iceberg.mr.Catalogs;
 import org.apache.iceberg.mr.InputFormatConfig;
+import org.apache.iceberg.mr.hive.writer.HiveIcebergWriter;
+import org.apache.iceberg.mr.hive.writer.WriterRegistry;
 import 
org.apache.iceberg.relocated.com.google.common.annotations.VisibleForTesting;
 import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
 import org.apache.iceberg.relocated.com.google.common.collect.ImmutableMap;
@@ -107,7 +109,7 @@ public class HiveIcebergOutputCommitter extends 
OutputCommitter {
 TaskAttemptID attemptID = context.getTaskAttemptID();
 JobConf jobConf = context.getJobConf();
 Collection outputs = 
HiveIcebergStorageHandler.outputTables(context.getJobConf());
-Map writers = 
Optional.ofNullable(HiveIcebergWriterBase.getWriters(attemptID))
+Map writers = 
Optional.ofNullable(WriterRegistry.writers(attemptID))
 .orElseGet(() -> {
   LOG.info("CommitTask found no writers for output tables: {}, 
attemptID: {}", outputs, attemptID);
   return ImmutableMap.of();
@@ -146,7 +148,7 @@ public class HiveIcebergOutputCommitter extends 
OutputCommitter {
 }
 
 // remove the writer to release the object
-HiveIcebergWriterBase.removeWriters(attemptID);
+WriterRegistry.removeWriters(attemptID);
   }
 
   /**
@@ -159,7 +161,7 @@ public class HiveIcebergOutputCommitter extends 
OutputCommitter {
 TaskAttemptContext context = 
TezUtil.enrichContextWithAttemptWrapper(originalContext);
 
 // Clean up writer data from the local store
-Map writers = 
HiveIcebergWriterBase.removeWriters(context.getTaskAttemptID());
+Map writers = 
WriterRegistry.removeWriters(context.getTaskAttemptID());
 
 // Remove files if it was not done already
 if (writers != null) {
@@ -335,8 +337,8 @@ public class HiveIcebergOutputCommitter extends 
OutputCommitter {
 if (!conf.getBoolean(InputFormatConfig.IS_OVERWRITE, false)) {
   if (writeResults.isEmpty()) {
 LOG.info(
-"Not creating a new commit for table: {}, jobID: {}, isDelete: {}, 
since there were no new files to add",
-table, jobContext.getJobID(), 
HiveIcebergStorageHandler.isDelete(conf, name));
+"Not creating a new commit for table: {}, jobID: {}, operation: 
{}, since there were no new files to add",
+table, jobContext.getJobID(), 
HiveIcebergStorageHandler.operation(conf, name));
   } else {
 commitWrite(table, startTime, writeResults);
   }
diff --git 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveI

[hive] branch master updated: HIVE-26136: Implement UPDATE statements for Iceberg tables (Peter Vary reviewed by Laszlo Pinter) (#3204)

2022-05-08 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new c7f6043322 HIVE-26136: Implement UPDATE statements for Iceberg tables 
(Peter Vary reviewed by Laszlo Pinter) (#3204)
c7f6043322 is described below

commit c7f60433226b125af5ae281d0fb14a2e937d0787
Author: pvary 
AuthorDate: Mon May 9 07:36:12 2022 +0200

HIVE-26136: Implement UPDATE statements for Iceberg tables (Peter Vary 
reviewed by Laszlo Pinter) (#3204)
---
 .../org/apache/iceberg/mr/hive/FilesForCommit.java |  13 ++
 .../mr/hive/HiveIcebergOutputCommitter.java| 128 ---
 .../iceberg/mr/hive/HiveIcebergOutputFormat.java   |  11 +-
 .../apache/iceberg/mr/hive/HiveIcebergSerDe.java   |  46 +++---
 .../iceberg/mr/hive/HiveIcebergStorageHandler.java |  51 --
 .../apache/iceberg/mr/hive/IcebergAcidUtil.java|  16 +-
 .../iceberg/mr/mapreduce/IcebergInputFormat.java   |   8 +-
 .../java/org/apache/iceberg/mr/TestHelper.java |  23 +++
 .../apache/iceberg/mr/hive/TestHiveIcebergV2.java  | 180 +
 .../org/apache/iceberg/mr/hive/TestHiveShell.java  |   1 +
 .../org/apache/iceberg/mr/hive/TestTables.java |   9 ++
 .../positive/update_iceberg_partitioned_avro.q |  26 +++
 .../positive/update_iceberg_partitioned_orc.q  |  26 +++
 .../positive/update_iceberg_partitioned_parquet.q  |  26 +++
 .../update_iceberg_unpartitioned_parquet.q |  26 +++
 .../positive/delete_iceberg_partitioned_avro.q.out |   4 +-
 .../positive/delete_iceberg_partitioned_orc.q.out  |   4 +-
 .../delete_iceberg_partitioned_parquet.q.out   |   4 +-
 .../delete_iceberg_unpartitioned_parquet.q.out |   4 +-
 ...q.out => update_iceberg_partitioned_avro.q.out} |  64 +---
 q.out => update_iceberg_partitioned_orc.q.out} |  64 +---
 ...ut => update_iceberg_partitioned_parquet.q.out} |  64 +---
 ... => update_iceberg_unpartitioned_parquet.q.out} |  64 +---
 .../org/apache/hadoop/hive/ql/io/AcidUtils.java|  19 +++
 .../hive/ql/metadata/HiveStorageHandler.java   |  12 +-
 .../hadoop/hive/ql/parse/SemanticAnalyzer.java | 116 -
 .../ql/parse/UpdateDeleteSemanticAnalyzer.java |  47 --
 27 files changed, 809 insertions(+), 247 deletions(-)

diff --git 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/FilesForCommit.java
 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/FilesForCommit.java
index 237ef55369..953edfd0d2 100644
--- 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/FilesForCommit.java
+++ 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/FilesForCommit.java
@@ -27,6 +27,7 @@ import java.util.stream.Stream;
 import org.apache.iceberg.ContentFile;
 import org.apache.iceberg.DataFile;
 import org.apache.iceberg.DeleteFile;
+import org.apache.iceberg.relocated.com.google.common.base.MoreObjects;
 
 public class FilesForCommit implements Serializable {
 
@@ -61,4 +62,16 @@ public class FilesForCommit implements Serializable {
   public Collection allFiles() {
 return Stream.concat(dataFiles.stream(), 
deleteFiles.stream()).collect(Collectors.toList());
   }
+
+  public boolean isEmpty() {
+return dataFiles.isEmpty() && deleteFiles.isEmpty();
+  }
+
+  @Override
+  public String toString() {
+return MoreObjects.toStringHelper(this)
+.add("dataFiles", dataFiles.toString())
+.add("deleteFiles", deleteFiles.toString())
+.toString();
+  }
 }
diff --git 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergOutputCommitter.java
 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergOutputCommitter.java
index 00edc3bd2e..0ea882a450 100644
--- 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergOutputCommitter.java
+++ 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergOutputCommitter.java
@@ -24,7 +24,6 @@ import java.io.ObjectInputStream;
 import java.io.ObjectOutputStream;
 import java.util.Arrays;
 import java.util.Collection;
-import java.util.List;
 import java.util.Map;
 import java.util.Optional;
 import java.util.Properties;
@@ -33,6 +32,7 @@ import java.util.concurrent.ConcurrentLinkedQueue;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.stream.Collectors;
+import java.util.stream.Stream;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
@@ -54,7 +54,6 @@ import org.apache.iceberg.DeleteFiles;
 import org.apache.iceberg.ReplacePartitions;
 import org.apache.iceberg.RowDelta;
 import org.apache.iceberg.Table;
-import org.apache.iceberg.Transaction;
 import org.apache.icebe

[hive] branch master updated: HIVE-26200: Add tests for Iceberg DELETE statements for every supported type (Peter Vary reviewed by Laszlo Pinter) (#3268)

2022-05-06 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 1bdac5106e HIVE-26200: Add tests for Iceberg DELETE statements for 
every supported type (Peter Vary reviewed by Laszlo Pinter) (#3268)
1bdac5106e is described below

commit 1bdac5106ea623b5799a60df5de16ffb08a70698
Author: pvary 
AuthorDate: Fri May 6 08:47:47 2022 +0200

HIVE-26200: Add tests for Iceberg DELETE statements for every supported 
type (Peter Vary reviewed by Laszlo Pinter) (#3268)
---
 .../iceberg/mr/hive/TestHiveIcebergInserts.java|  5 +++-
 .../apache/iceberg/mr/hive/TestHiveIcebergV2.java  | 29 ++
 2 files changed, 33 insertions(+), 1 deletion(-)

diff --git 
a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergInserts.java
 
b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergInserts.java
index f38eea1969..7e3c72bf31 100644
--- 
a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergInserts.java
+++ 
b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergInserts.java
@@ -70,14 +70,17 @@ public class TestHiveIcebergInserts extends 
HiveIcebergStorageHandlerWithEngineB
   public void testInsertSupportedTypes() throws IOException {
 for (int i = 0; i < SUPPORTED_TYPES.size(); i++) {
   Type type = SUPPORTED_TYPES.get(i);
+
   // TODO: remove this filter when issue #1881 is resolved
   if (type == Types.UUIDType.get() && fileFormat == FileFormat.PARQUET) {
 continue;
   }
+
   // TODO: remove this filter when we figure out how we could test binary 
types
-  if (type.equals(Types.BinaryType.get()) || 
type.equals(Types.FixedType.ofLength(5))) {
+  if (type == Types.BinaryType.get() || 
type.equals(Types.FixedType.ofLength(5))) {
 continue;
   }
+
   String columnName = type.typeId().toString().toLowerCase() + "_column";
 
   Schema schema = new Schema(required(1, "id", Types.LongType.get()), 
required(2, columnName, type));
diff --git 
a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergV2.java
 
b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergV2.java
index 569a9d3fc3..1c9d3e1922 100644
--- 
a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergV2.java
+++ 
b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergV2.java
@@ -34,12 +34,14 @@ import org.apache.iceberg.deletes.PositionDelete;
 import org.apache.iceberg.mr.TestHelper;
 import org.apache.iceberg.relocated.com.google.common.collect.ImmutableList;
 import org.apache.iceberg.relocated.com.google.common.collect.ImmutableMap;
+import org.apache.iceberg.types.Type;
 import org.apache.iceberg.types.Types;
 import org.junit.Assert;
 import org.junit.Assume;
 import org.junit.Test;
 
 import static org.apache.iceberg.types.Types.NestedField.optional;
+import static org.apache.iceberg.types.Types.NestedField.required;
 
 /**
  * Tests Format V2 specific features, such as reading/writing V2 tables, using 
delete files, etc.
@@ -354,6 +356,33 @@ public class TestHiveIcebergV2 extends 
HiveIcebergStorageHandlerWithEngineBase {
 HiveIcebergTestUtils.validateData(expected, 
HiveIcebergTestUtils.valueForRow(newSchema, objects), 0);
   }
 
+  @Test
+  public void testDeleteForSupportedTypes() throws IOException {
+for (int i = 0; i < SUPPORTED_TYPES.size(); i++) {
+  Type type = SUPPORTED_TYPES.get(i);
+
+  // TODO: remove this filter when issue #1881 is resolved
+  if (type == Types.UUIDType.get() && fileFormat == FileFormat.PARQUET) {
+continue;
+  }
+
+  // TODO: remove this filter when we figure out how we could test binary 
types
+  if (type == Types.BinaryType.get() || 
type.equals(Types.FixedType.ofLength(5))) {
+continue;
+  }
+
+  String tableName = type.typeId().toString().toLowerCase() + "_table_" + 
i;
+  String columnName = type.typeId().toString().toLowerCase() + "_column";
+
+  Schema schema = new Schema(required(1, columnName, type));
+  List records = TestHelper.generateRandomRecords(schema, 1, 0L);
+  Table table = testTables.createTable(shell, tableName, schema, 
fileFormat, records, 2);
+
+  shell.executeStatement("DELETE FROM " + tableName);
+  HiveIcebergTestUtils.validateData(table, ImmutableList.of(), 0);
+}
+  }
+
  private static <T> PositionDelete<T> positionDelete(CharSequence path, long pos, T row) {
    PositionDelete<T> positionDelete = PositionDelete.create();
 return positionDelete.set(path, pos, row);



[hive] branch master updated: HIVE-26183: Create delete writer for the UPDATE statements (Peter Vary reviewed by Adam Szita and Marton Bod) (#3251)

2022-05-04 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 9067431bee HIVE-26183: Create delete writer for the UPDATE statements 
(Peter Vary reviewed by Adam Szita and Marton Bod) (#3251)
9067431bee is described below

commit 9067431bee431c3f51124e30c6551ed04b2e3d22
Author: pvary 
AuthorDate: Wed May 4 10:50:04 2022 +0200

HIVE-26183: Create delete writer for the UPDATE statements (Peter Vary 
reviewed by Adam Szita and Marton Bod) (#3251)
---
 iceberg/iceberg-handler/pom.xml|   7 +-
 .../org/apache/iceberg/mr/hive/FilesForCommit.java |  18 +-
 .../mr/hive/HiveIcebergBufferedDeleteWriter.java   | 181 +
 .../iceberg/mr/hive/HiveIcebergDeleteWriter.java   |   4 +-
 .../mr/hive/HiveIcebergOutputCommitter.java|   6 +-
 .../iceberg/mr/hive/HiveIcebergOutputFormat.java   |   4 +-
 .../iceberg/mr/hive/HiveIcebergRecordWriter.java   |   6 +-
 .../iceberg/mr/hive/HiveIcebergUpdateWriter.java   |  89 ++
 .../apache/iceberg/mr/hive/HiveIcebergWriter.java  |  84 +-
 ...ebergWriter.java => HiveIcebergWriterBase.java} |  18 +-
 .../apache/iceberg/mr/hive/IcebergAcidUtil.java| 108 +---
 .../org/apache/iceberg/data/IcebergGenerics2.java  | 103 
 .../java/org/apache/iceberg/mr/TestHelper.java |  27 +++
 .../iceberg/mr/hive/HiveIcebergWriterTestBase.java | 151 +
 .../mr/hive/TestHiveIcebergDeleteWriter.java   | 116 +
 .../mr/hive/TestHiveIcebergOutputCommitter.java|   4 +-
 .../mr/hive/TestHiveIcebergUpdateWriter.java   | 159 ++
 iceberg/pom.xml|   6 +
 18 files changed, 963 insertions(+), 128 deletions(-)

diff --git a/iceberg/iceberg-handler/pom.xml b/iceberg/iceberg-handler/pom.xml
index 6a37fdbf16..20ae1e6ec9 100644
--- a/iceberg/iceberg-handler/pom.xml
+++ b/iceberg/iceberg-handler/pom.xml
@@ -100,11 +100,16 @@
   assertj-core
   test
 
+
+  org.apache.iceberg
+  iceberg-core
+  tests
+  test
+
 
   org.roaringbitmap
   RoaringBitmap
   0.9.22
-  test
 
   
   
diff --git 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/FilesForCommit.java
 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/FilesForCommit.java
index 0dd490628c..237ef55369 100644
--- 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/FilesForCommit.java
+++ 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/FilesForCommit.java
@@ -20,8 +20,8 @@
 package org.apache.iceberg.mr.hive;
 
 import java.io.Serializable;
+import java.util.Collection;
 import java.util.Collections;
-import java.util.List;
 import java.util.stream.Collectors;
 import java.util.stream.Stream;
 import org.apache.iceberg.ContentFile;
@@ -30,19 +30,19 @@ import org.apache.iceberg.DeleteFile;
 
 public class FilesForCommit implements Serializable {
 
-  private final List dataFiles;
-  private final List deleteFiles;
+  private final Collection dataFiles;
+  private final Collection deleteFiles;
 
-  public FilesForCommit(List dataFiles, List 
deleteFiles) {
+  public FilesForCommit(Collection dataFiles, Collection 
deleteFiles) {
 this.dataFiles = dataFiles;
 this.deleteFiles = deleteFiles;
   }
 
-  public static FilesForCommit onlyDelete(List deleteFiles) {
+  public static FilesForCommit onlyDelete(Collection deleteFiles) {
 return new FilesForCommit(Collections.emptyList(), deleteFiles);
   }
 
-  public static FilesForCommit onlyData(List dataFiles) {
+  public static FilesForCommit onlyData(Collection dataFiles) {
 return new FilesForCommit(dataFiles, Collections.emptyList());
   }
 
@@ -50,15 +50,15 @@ public class FilesForCommit implements Serializable {
 return new FilesForCommit(Collections.emptyList(), 
Collections.emptyList());
   }
 
-  public List dataFiles() {
+  public Collection dataFiles() {
 return dataFiles;
   }
 
-  public List deleteFiles() {
+  public Collection deleteFiles() {
 return deleteFiles;
   }
 
-  public List allFiles() {
+  public Collection allFiles() {
 return Stream.concat(dataFiles.stream(), 
deleteFiles.stream()).collect(Collectors.toList());
   }
 }
diff --git 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergBufferedDeleteWriter.java
 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergBufferedDeleteWriter.java
new file mode 100644
index 00..99d59341ed
--- /dev/null
+++ 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergBufferedDeleteWriter.java
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distribu

[hive] branch master updated: HIVE-26193: Fix Iceberg partitioned tables null bucket handling (Peter Vary reviewed by Marton Bod and Adam Szita) (#3261)

2022-05-02 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 94b383c82a HIVE-26193: Fix Iceberg partitioned tables null bucket 
handling (Peter Vary reviewed by Marton Bod and Adam Szita) (#3261)
94b383c82a is described below

commit 94b383c82a7c5d2e619c3bab84dfca88b4591b4e
Author: pvary 
AuthorDate: Mon May 2 08:35:29 2022 +0200

HIVE-26193: Fix Iceberg partitioned tables null bucket handling (Peter Vary 
reviewed by Marton Bod and Adam Szita) (#3261)
---
 .../iceberg/mr/hive/GenericUDFIcebergBucket.java   |  2 +-
 .../queries/positive/dynamic_partition_writes.q|  4 +-
 .../positive/dynamic_partition_writes.q.out| 43 ++
 .../hive/ql/exec/vector/VectorizedRowBatchCtx.java |  4 +-
 4 files changed, 32 insertions(+), 21 deletions(-)

diff --git 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/GenericUDFIcebergBucket.java
 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/GenericUDFIcebergBucket.java
index cab4bb11ab..52b0a1edbf 100644
--- 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/GenericUDFIcebergBucket.java
+++ 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/GenericUDFIcebergBucket.java
@@ -185,7 +185,7 @@ public class GenericUDFIcebergBucket extends GenericUDF {
   public Object evaluate(DeferredObject[] arguments) throws HiveException {
 
 DeferredObject argument = arguments[0];
-if (argument == null) {
+if (argument == null || argument.get() == null) {
   return null;
 } else {
   evaluator.apply(argument);
diff --git 
a/iceberg/iceberg-handler/src/test/queries/positive/dynamic_partition_writes.q 
b/iceberg/iceberg-handler/src/test/queries/positive/dynamic_partition_writes.q
index 93094529f6..622e870350 100644
--- 
a/iceberg/iceberg-handler/src/test/queries/positive/dynamic_partition_writes.q
+++ 
b/iceberg/iceberg-handler/src/test/queries/positive/dynamic_partition_writes.q
@@ -10,9 +10,9 @@ drop table if exists tbl_target_mixed;
 
 
 create external table tbl_src (a int, b string, c bigint) stored by iceberg 
stored as orc;
-insert into tbl_src values (1, 'EUR', 10), (2, 'EUR', 10), (3, 'USD', 11), (4, 
'EUR', 12), (5, 'HUF', 30), (6, 'USD', 10), (7, 'USD', 100), (8, 'PLN', 20), 
(9, 'PLN', 11), (10, 'CZK', 5);
+insert into tbl_src values (1, 'EUR', 10), (2, 'EUR', 10), (3, 'USD', 11), (4, 
'EUR', 12), (5, 'HUF', 30), (6, 'USD', 10), (7, 'USD', 100), (8, 'PLN', 20), 
(9, 'PLN', 11), (10, 'CZK', 5), (12, NULL, NULL);
 --need at least 2 files to ensure ClusteredWriter encounters out-of-order 
records
-insert into tbl_src values (10, 'EUR', 12), (20, 'EUR', 11), (30, 'USD', 100), 
(40, 'EUR', 10), (50, 'HUF', 30), (60, 'USD', 12), (70, 'USD', 20), (80, 'PLN', 
100), (90, 'PLN', 18), (100, 'CZK', 12);
+insert into tbl_src values (10, 'EUR', 12), (20, 'EUR', 11), (30, 'USD', 100), 
(40, 'EUR', 10), (50, 'HUF', 30), (60, 'USD', 12), (70, 'USD', 20), (80, 'PLN', 
100), (90, 'PLN', 18), (100, 'CZK', 12), (110, NULL, NULL);
 
 create external table tbl_target_identity (a int) partitioned by (ccy string) 
stored by iceberg stored as orc;
 explain insert overwrite table tbl_target_identity select a, b from tbl_src;
diff --git 
a/iceberg/iceberg-handler/src/test/results/positive/dynamic_partition_writes.q.out
 
b/iceberg/iceberg-handler/src/test/results/positive/dynamic_partition_writes.q.out
index 36fb39c091..53be8c39ed 100644
--- 
a/iceberg/iceberg-handler/src/test/results/positive/dynamic_partition_writes.q.out
+++ 
b/iceberg/iceberg-handler/src/test/results/positive/dynamic_partition_writes.q.out
@@ -22,19 +22,19 @@ POSTHOOK: query: create external table tbl_src (a int, b 
string, c bigint) store
 POSTHOOK: type: CREATETABLE
 POSTHOOK: Output: database:default
 POSTHOOK: Output: default@tbl_src
-PREHOOK: query: insert into tbl_src values (1, 'EUR', 10), (2, 'EUR', 10), (3, 
'USD', 11), (4, 'EUR', 12), (5, 'HUF', 30), (6, 'USD', 10), (7, 'USD', 100), 
(8, 'PLN', 20), (9, 'PLN', 11), (10, 'CZK', 5)
+PREHOOK: query: insert into tbl_src values (1, 'EUR', 10), (2, 'EUR', 10), (3, 
'USD', 11), (4, 'EUR', 12), (5, 'HUF', 30), (6, 'USD', 10), (7, 'USD', 100), 
(8, 'PLN', 20), (9, 'PLN', 11), (10, 'CZK', 5), (12, NULL, NULL)
 PREHOOK: type: QUERY
 PREHOOK: Input: _dummy_database@_dummy_table
 PREHOOK: Output: default@tbl_src
-POSTHOOK: query: insert into tbl_src values (1, 'EUR', 10), (2, 'EUR', 10), 
(3, 'USD', 11), (4, 'EUR', 12), (5, 'HUF', 30), (6, 'USD', 10), (7, 'USD', 
100), (8, 'PLN', 20), (9, 'PLN', 11), (10, 'CZK', 5)
+POSTHOOK: query: insert into tbl_src values (1, 'EUR', 10), (2, 'EUR', 10), 
(3, 'USD', 11), (4, 'EUR', 12), (5, 'HUF', 30), (6, 'USD', 10), (7, 'USD', 
100), (8, 'PLN', 20), (9, 'PLN', 11), (10, 'CZK', 5), (12, NULL, NULL)
 POSTHOOK: type: QUERY
 POSTHOOK: Input

[hive] branch master updated: HIVE-25758: OOM due to recursive application of CBO rules (javadoc fix) (Alessandro Solimando reviewed by Peter Vary) (#3252)

2022-04-28 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new d8f13060713 HIVE-25758: OOM due to recursive application of CBO rules 
(javadoc fix) (Alessandro Solimando reviewed by Peter Vary) (#3252)
d8f13060713 is described below

commit d8f1306071363268fcfb9d83299e8a1417d77a3d
Author: Alessandro Solimando 
AuthorDate: Thu Apr 28 10:01:37 2022 +0200

HIVE-25758: OOM due to recursive application of CBO rules (javadoc fix) 
(Alessandro Solimando reviewed by Peter Vary) (#3252)
---
 .../org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java
index 264756f0413..b2506e524c4 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java
@@ -1253,8 +1253,8 @@ public class HiveCalciteUtil {
* Returns whether the expression has disjunctions (OR) at any level of 
nesting.
* 
*  Example 1: OR(=($0, $1), IS NOT NULL($2))):INTEGER (OR in the 
top-level expression) 
-   *  Example 2: NOT(AND(=($0, $1), IS NOT NULL($2)) 
-   *   this is equivalent to OR((($0, $1), IS NULL($2))
+   *  Example 2: NOT(AND(=($0, $1), IS NOT NULL($2))
+   *   this is equivalent to OR((($0, $1), IS NULL($2)) 
*  Example 3: AND(OR(=($0, $1), IS NOT NULL($2 (OR in inner 
expression) 
* 
* @param node the expression where to look for disjunctions.
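The equivalence described in the javadoc hunk above is De Morgan's law: NOT(AND(a, b)) is the same as OR(NOT a, NOT b), which is why an expression like NOT(AND(=($0, $1), IS NOT NULL($2))) still counts as containing a disjunction. A minimal truth-table check in plain Java (an illustration only, not Hive code):

```java
public class DeMorganDemo {
    public static void main(String[] args) {
        // Verify NOT(a AND b) == (NOT a) OR (NOT b) for every combination.
        for (boolean a : new boolean[]{false, true}) {
            for (boolean b : new boolean[]{false, true}) {
                System.out.println(!(a && b) == (!a || !b)); // prints true four times
            }
        }
    }
}
```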



[hive] branch master updated: HIVE-26114: Fix jdbc connection hiveserver2 using dfs command with prefix space will cause exception (Shezm reviewed by Peter Vary) (#3176)

2022-04-27 Thread pvary

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 7e4a25b9dd2 HIVE-26114: Fix jdbc connection hiveserver2 using dfs 
command with prefix space will cause exception (Shezm reviewed by Peter Vary) 
(#3176)
7e4a25b9dd2 is described below

commit 7e4a25b9dd2ce418eccc859aa8f544e70999398d
Author: shezm <505306...@qq.com>
AuthorDate: Wed Apr 27 14:44:58 2022 +0800

HIVE-26114: Fix jdbc connection hiveserver2 using dfs command with prefix 
space will cause exception (Shezm reviewed by Peter Vary) (#3176)
---
 .../cli/operation/HiveCommandOperation.java|  2 +-
 .../cli/operation/TestCommandWithSpace.java| 53 ++
 2 files changed, 54 insertions(+), 1 deletion(-)

diff --git 
a/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java
 
b/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java
index c517049f090..bad602d9f98 100644
--- 
a/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java
+++ 
b/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java
@@ -107,7 +107,7 @@ public class HiveCommandOperation extends 
ExecuteStatementOperation {
 setState(OperationState.RUNNING);
 try {
   String command = getStatement().trim();
-  String[] tokens = statement.split("\\s");
+  String[] tokens = command.split("\\s");
   String commandArgs = command.substring(tokens[0].length()).trim();
 
   CommandProcessorResponse response = commandProcessor.run(commandArgs);
diff --git 
a/service/src/test/org/apache/hive/service/cli/operation/TestCommandWithSpace.java
 
b/service/src/test/org/apache/hive/service/cli/operation/TestCommandWithSpace.java
new file mode 100644
index 000..5c57357c84b
--- /dev/null
+++ 
b/service/src/test/org/apache/hive/service/cli/operation/TestCommandWithSpace.java
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hive.service.cli.operation;
+
+import com.google.common.collect.ImmutableMap;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.hooks.TestQueryHooks;
+import org.apache.hadoop.hive.ql.processors.DfsProcessor;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.service.cli.HiveSQLException;
+import org.apache.hive.service.cli.session.HiveSession;
+import org.junit.Test;
+
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+public class TestCommandWithSpace {
+
+  @Test
+  public void testCommandWithPrefixSpace() throws IllegalAccessException, ClassNotFoundException, InstantiationException, HiveSQLException {
+    String query = " dfs -ls /";
+    HiveConf conf = new HiveConf();
+    conf.setBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY, false);
+    conf.setVar(HiveConf.ConfVars.HIVE_AUTHORIZATION_MANAGER,
+        "org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory");
+
+    SessionState.start(conf);
+    HiveSession mockHiveSession = mock(HiveSession.class);
+    when(mockHiveSession.getHiveConf()).thenReturn(conf);
+    when(mockHiveSession.getSessionState()).thenReturn(SessionState.get());
+    DfsProcessor dfsProcessor = new DfsProcessor(new Configuration());
+    HiveCommandOperation sqlOperation = new HiveCommandOperation(mockHiveSession, query, dfsProcessor, ImmutableMap.of());
+    sqlOperation.run();
+  }
+
+}
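The one-line fix above boils down to splitting the trimmed `command` instead of the raw `statement`: with a leading space, `String.split("\\s")` keeps a leading empty token, so `tokens[0]` is the empty string and the command name leaks into the argument string. A standalone sketch of the before/after behavior (not Hive code):

```java
public class SplitDemo {
    public static void main(String[] args) {
        String statement = " dfs -ls /";   // raw statement with a leading space
        String command = statement.trim(); // "dfs -ls /"

        // Old (buggy) behavior: splitting the untrimmed string yields a
        // leading empty token, so tokens[0].length() == 0 and "dfs" ends
        // up inside the supposed argument string.
        String[] badTokens = statement.split("\\s");
        System.out.println(badTokens[0].isEmpty()); // true

        // Fixed behavior: split the trimmed command instead.
        String[] tokens = command.split("\\s");
        String commandArgs = command.substring(tokens[0].length()).trim();
        System.out.println(tokens[0]);   // dfs
        System.out.println(commandArgs); // -ls /
    }
}
```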



[hive] branch master updated: HIVE-26171: HMSHandler get_all_tables method can not retrieve tables from remote database (Butao Zhang reviewed by Peter Vary) (#3238)

2022-04-26 Thread pvary

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 3c5613fa88f HIVE-26171: HMSHandler get_all_tables method can not 
retrieve tables from remote database (Butao Zhang reviewed by Peter Vary) 
(#3238)
3c5613fa88f is described below

commit 3c5613fa88f35f81df944b241d95a6f78ef71d7d
Author: Butao Zhang <9760681+zhangbu...@users.noreply.github.com>
AuthorDate: Tue Apr 26 16:42:09 2022 +0800

HIVE-26171: HMSHandler get_all_tables method can not retrieve tables from 
remote database (Butao Zhang reviewed by Peter Vary) (#3238)
---
 .../src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java | 7 +++
 1 file changed, 7 insertions(+)

diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
index 1f8365e3140..32ed701b03b 100644
--- 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
+++ 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
@@ -6234,6 +6234,13 @@ public class HMSHandler extends FacebookBase implements 
IHMSHandler {
 List<String> ret = null;
 Exception ex = null;
 String[] parsedDbName = parseDbName(dbname, conf);
+    try {
+      if (isDatabaseRemote(dbname)) {
+        Database db = get_database_core(parsedDbName[CAT_NAME], parsedDbName[DB_NAME]);
+        return DataConnectorProviderFactory.getDataConnectorProvider(db).getTableNames();
+      }
+    } catch (Exception e) { /* ignore */ }
+
 try {
   ret = getMS().getAllTables(parsedDbName[CAT_NAME], 
parsedDbName[DB_NAME]);
   ret = FilterUtils.filterTableNamesIfEnabled(isServerFilterEnabled, 
filterHook,
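The hunk above adds a dispatch step: table names for a remote database come from its data connector provider, while local databases keep using the regular metastore lookup. A simplified sketch of that dispatch with hypothetical stand-in types (the real code uses `DataConnectorProviderFactory` and `RawStore`):

```java
import java.util.Arrays;
import java.util.List;

public class GetAllTablesDemo {
    // Stand-in for either a connector provider or the local object store.
    public interface TableNameSource {
        List<String> tableNames();
    }

    // Remote databases delegate to the connector; local ones use the store.
    public static List<String> getAllTables(boolean isRemote, TableNameSource connector, TableNameSource objectStore) {
        if (isRemote) {
            return connector.tableNames();
        }
        return objectStore.tableNames();
    }

    public static void main(String[] args) {
        TableNameSource connector = () -> Arrays.asList("remote_t1");
        TableNameSource objectStore = () -> Arrays.asList("local_t1", "local_t2");
        System.out.println(getAllTables(true, connector, objectStore));  // [remote_t1]
        System.out.println(getAllTables(false, connector, objectStore)); // [local_t1, local_t2]
    }
}
```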



[hive] branch master updated: HIVE-26092: Fix javadoc errors for the 4.0.0 release (Peter Vary reviewed by Zoltan Haindrich) (#3185) (addendum)

2022-04-13 Thread pvary

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 53784e0800 HIVE-26092: Fix javadoc errors for the 4.0.0 release (Peter 
Vary reviewed by Zoltan Haindrich) (#3185) (addendum)
53784e0800 is described below

commit 53784e08000ebc3364be5c0fb2d796305b7b6844
Author: Peter Vary 
AuthorDate: Wed Apr 13 08:28:09 2022 +0200

HIVE-26092: Fix javadoc errors for the 4.0.0 release (Peter Vary reviewed 
by Zoltan Haindrich) (#3185) (addendum)
---
 .../java/org/apache/hadoop/hive/ql/metadata/Hive.java  |  1 -
 .../hadoop/hive/ql/metadata/HiveStorageHandler.java| 18 +-
 2 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 
b/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
index 23c6d85d03..4ed822aa7b 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
@@ -2252,7 +2252,6 @@ public class Hive {
* specified sql query text. It is guaranteed that it will always return an 
up-to-date version wrt metastore.
* This method filters out outdated Materialized views. It compares the 
transaction ids of the passed usedTables and
* the materialized view using the txnMgr.
-   * @param querySql extended query text (has fully qualified identifiers)
* @param tablesUsed List of tables to verify whether materialized view is 
outdated
* @param txnMgr Transaction manager to get open transactions affects used 
tables.
* @return List of materialized views has matching query definition with 
querySql
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStorageHandler.java 
b/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStorageHandler.java
index 4fcc288050..0784fe65ad 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStorageHandler.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStorageHandler.java
@@ -282,7 +282,7 @@ public interface HiveStorageHandler extends Configurable {
*   AcidSupportType.WITH_TRANSACTIONS - ACID operations are supported, 
and must use a valid HiveTxnManager to wrap
*   the operation in a transaction, like in the case of standard Hive ACID 
tables
*   AcidSupportType.WITHOUT_TRANSACTIONS - ACID operations are 
supported, and there is no need for a HiveTxnManager
-   *   to open/close transactions for the operation, i.e. {@link 
org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager}
+   *   to open/close transactions for the operation, i.e. 
org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager
*   can be used
* 
*
@@ -307,12 +307,12 @@ public interface HiveStorageHandler extends Configurable {
 
   /**
* {@link org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer} 
rewrites DELETE/UPDATE queries into INSERT
-   * queries. E.g. DELETE FROM T WHERE A = 32 is rewritten into INSERT INTO T 
SELECT  FROM T WHERE A = 32
-   * SORT BY .
+   * queries. E.g. DELETE FROM T WHERE A = 32 is rewritten into
+   * INSERT INTO T SELECT selectCols FROM T WHERE A = 32 SORT BY 
sortCols.
*
-   * This method specifies which columns should be injected into the 
 part of the rewritten query.
+   * This method specifies which columns should be injected into the 
selectCols part of the rewritten query.
*
-   * Should only return a non-empty list if {@link 
HiveStorageHandler#supportsAcidOperations()} ()} returns something
+   * Should only return a non-empty list if {@link 
HiveStorageHandler#supportsAcidOperations()} returns something
* other NONE.
*
* @param table the table which is being deleted/updated/merged into
@@ -324,12 +324,12 @@ public interface HiveStorageHandler extends Configurable {
 
   /**
* {@link org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer} 
rewrites DELETE/UPDATE queries into INSERT
-   * queries. E.g. DELETE FROM T WHERE A = 32 is rewritten into INSERT INTO T 
SELECT  FROM T WHERE A = 32
-   * SORT BY .
+   * queries. E.g. DELETE FROM T WHERE A = 32 is rewritten into
+   * INSERT INTO T SELECT selectCols FROM T WHERE A = 32 SORT BY 
sortCols.
*
-   * This method specifies which columns should be injected into the 
 part of the rewritten query.
+   * This method specifies which columns should be injected into the 
sortCols part of the rewritten query.
*
-   * Should only return a non-empty list if {@link 
HiveStorageHandler#supportsAcidOperations()} ()} returns something
+   * Should only return a non-empty list if {@link 
HiveStorageHandler#supportsAcidOperations()} returns something
* other NONE.
*
* @param table the table which is being deleted/updated/merged into



[hive] branch master updated: HIVE-26092: Fix javadoc errors for the 4.0.0 release (Peter Vary reviewed by Zoltan Haindrich) (#3185)

2022-04-12 Thread pvary

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 431e7d9e54 HIVE-26092: Fix javadoc errors for the 4.0.0 release (Peter 
Vary reviewed by Zoltan Haindrich) (#3185)
431e7d9e54 is described below

commit 431e7d9e5431a808106d8db81e11aea74f040da5
Author: pvary 
AuthorDate: Tue Apr 12 13:52:52 2022 +0200

HIVE-26092: Fix javadoc errors for the 4.0.0 release (Peter Vary reviewed 
by Zoltan Haindrich) (#3185)
---
 Jenkinsfile| 12 
 .../format/datetime/HiveSqlDateTimeFormatter.java  | 32 +++---
 .../org/apache/hadoop/hive/common/type/Date.java   |  6 ++--
 .../apache/hadoop/hive/common/type/Timestamp.java  |  6 ++--
 .../apache/hive/common/util/TimestampParser.java   |  2 +-
 .../hive/benchmark/calcite/FieldTrimmerBench.java  |  6 ++--
 .../apache/hive/benchmark/hash/Murmur3Bench.java   | 10 +++
 .../hive/benchmark/serde/LazySimpleSerDeBench.java |  6 ++--
 .../vectorization/VectorizedArithmeticBench.java   | 10 +++
 .../vectorization/VectorizedComparisonBench.java   | 10 +++
 .../vectorization/VectorizedLikeBench.java | 10 +++
 .../vectorization/VectorizedLogicBench.java| 10 +++
 .../hive/ql/qoption/QTestOptionDispatcher.java |  2 +-
 .../org/apache/hive/jdbc/HiveBaseResultSet.java|  4 +--
 .../apache/hive/jdbc/saml/IJdbcBrowserClient.java  |  2 +-
 .../org/apache/hadoop/hive/llap/io/api/LlapIo.java |  2 +-
 .../security/DefaultJwtSharedSecretProvider.java   |  2 +-
 .../tezplugins/metrics/LlapMetricsListener.java|  2 +-
 .../org/apache/hadoop/hive/llap/LlapHiveUtils.java |  3 +-
 .../ql/ddl/table/info/desc/DescTableAnalyzer.java  |  6 ++--
 .../hadoop/hive/ql/exec/AddToClassPathAction.java  |  4 +--
 .../java/org/apache/hadoop/hive/ql/exec/Task.java  |  2 +-
 .../org/apache/hadoop/hive/ql/exec/Utilities.java  |  3 +-
 .../hive/ql/exec/WindowFunctionDescription.java| 26 +-
 .../hive/ql/exec/repl/OptimisedBootstrapUtils.java |  4 +--
 .../hadoop/hive/ql/exec/repl/ReplStatsTracker.java |  2 +-
 .../apache/hadoop/hive/ql/exec/tez/DagUtils.java   |  1 -
 .../expressions/CastDateToCharWithFormat.java  |  2 +-
 .../expressions/CastDateToStringWithFormat.java|  2 +-
 .../expressions/CastDateToVarCharWithFormat.java   |  2 +-
 .../expressions/CastStringToDateWithFormat.java|  2 +-
 .../CastStringToTimestampWithFormat.java   |  2 +-
 .../expressions/CastTimestampToCharWithFormat.java |  2 +-
 .../CastTimestampToStringWithFormat.java   |  2 +-
 .../CastTimestampToVarCharWithFormat.java  |  2 +-
 .../apache/hadoop/hive/ql/io/AcidInputFormat.java  |  2 +-
 .../hadoop/hive/ql/io/orc/encoded/StreamUtils.java |  1 -
 .../hive/ql/log/syslog/SyslogInputFormat.java  |  2 +-
 .../hadoop/hive/ql/log/syslog/SyslogParser.java| 11 
 .../org/apache/hadoop/hive/ql/metadata/Hive.java   |  2 +-
 .../hive/ql/metadata/HiveStorageHandler.java   |  2 +-
 .../hive/ql/optimizer/ParallelEdgeFixer.java   |  2 +-
 .../hive/ql/optimizer/SemiJoinReductionMerge.java  |  6 ++--
 .../calcite/functions/HiveMergeableAggregate.java  |  2 +-
 .../calcite/rules/HiveAggregateSortLimitRule.java  |  2 +-
 .../ql/optimizer/calcite/rules/HiveDruidRules.java |  2 +-
 .../calcite/rules/HiveHepExtractRelNodeRule.java   |  3 +-
 .../HiveProjectSortExchangeTransposeRule.java  |  2 +-
 .../rules/HiveRewriteToDataSketchesRules.java  |  6 ++--
 ...regateInsertDeleteIncrementalRewritingRule.java | 14 +-
 ...iveAggregateInsertIncrementalRewritingRule.java |  6 ++--
 ...AggregatePartitionIncrementalRewritingRule.java |  4 +--
 ...veJoinInsertDeleteIncrementalRewritingRule.java |  4 +--
 .../calcite/translator/RexNodeConverter.java   |  4 +--
 .../hive/ql/optimizer/topnkey/CommonKeyPrefix.java |  6 ++--
 .../ql/optimizer/topnkey/TopNKeyProcessor.java |  2 +-
 .../hadoop/hive/ql/parse/CalcitePlanner.java   |  1 -
 .../hadoop/hive/ql/parse/UnparseTranslator.java|  4 +--
 .../hadoop/hive/ql/parse/type/FunctionHelper.java  |  2 +-
 .../hive/ql/txn/compactor/CompactorThread.java |  2 +-
 .../hive/ql/udf/generic/GenericUDAFRank.java   |  2 +-
 .../hive/ql/udf/generic/GenericUDFCastFormat.java  |  2 +-
 .../org/apache/hadoop/hive/serde2/JsonSerDe.java   |  4 +--
 .../hadoop/hive/serde2/json/BinaryEncoding.java|  2 +-
 .../hadoop/hive/serde2/json/HiveJsonReader.java|  6 ++--
 .../hive/service/cli/operation/QueryInfoCache.java |  2 +-
 .../org/apache/hadoop/hive/shims/HadoopShims.java  |  2 --
 .../hadoop/hive/metastore/HiveMetaStoreClient.java |  2 +-
 .../hadoop/hive/metastore/IMetaStoreClient.java| 11 ++--
 .../hadoop/hive/metastore/utils/FileUtils.java |  1 -
 .../hadoop/hive/metastore/ExceptionHandler.java|  4

[hive] branch master updated: HIVE-26093 Deduplicate org.apache.hadoop.hive.metastore.annotation package-info.java (Peter Vary reviewed by Stamatis Zampetakis) (#3168)

2022-04-11 Thread pvary

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new f9ef4eeb60 HIVE-26093 Deduplicate 
org.apache.hadoop.hive.metastore.annotation package-info.java (Peter Vary 
reviewed by Stamatis Zampetakis) (#3168)
f9ef4eeb60 is described below

commit f9ef4eeb605302e42ebda196a901f865bee0efb4
Author: pvary 
AuthorDate: Mon Apr 11 14:04:41 2022 +0200

HIVE-26093 Deduplicate org.apache.hadoop.hive.metastore.annotation 
package-info.java (Peter Vary reviewed by Stamatis Zampetakis) (#3168)
---
 .gitignore |  2 -
 pom.xml|  2 +-
 .../hive/metastore/utils/MetastoreVersionInfo.java |  0
 standalone-metastore/metastore-server/pom.xml  | 17 
 .../src/main/resources/saveVersion.sh  | 91 --
 standalone-metastore/pom.xml   | 23 ++
 6 files changed, 24 insertions(+), 111 deletions(-)

diff --git a/.gitignore b/.gitignore
index b421ed4912..e6b54e9161 100644
--- a/.gitignore
+++ b/.gitignore
@@ -20,9 +20,7 @@ target/
 conf/hive-default.xml.template
 .DS_Store
 .factorypath
-standalone-metastore/src/gen/version
 standalone-metastore/metastore-common/src/gen/version
-standalone-metastore/metastore-server/src/gen/version
 kafka-handler/src/test/gen
 **/.vscode/
 /.recommenders/
diff --git a/pom.xml b/pom.xml
index 8f60e798b0..05394698a1 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1813,7 +1813,7 @@
 org.apache.maven.plugins
 maven-javadoc-plugin
 
-  -Xdoclint:none
+  none
   false
 
 
diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/utils/MetastoreVersionInfo.java
 
b/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/utils/MetastoreVersionInfo.java
similarity index 100%
rename from 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/utils/MetastoreVersionInfo.java
rename to 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/utils/MetastoreVersionInfo.java
diff --git a/standalone-metastore/metastore-server/pom.xml 
b/standalone-metastore/metastore-server/pom.xml
index 624f667c86..a50e9bc416 100644
--- a/standalone-metastore/metastore-server/pom.xml
+++ b/standalone-metastore/metastore-server/pom.xml
@@ -474,23 +474,6 @@
   
 
   
-  
-generate-version-annotation
-generate-sources
-
-  
-
-  
-  
-  
-  
-
-  
-
-
-  run
-
-  
   
 setup-metastore-scripts
 process-test-resources
diff --git 
a/standalone-metastore/metastore-server/src/main/resources/saveVersion.sh 
b/standalone-metastore/metastore-server/src/main/resources/saveVersion.sh
deleted file mode 100755
index 0d1a463fdd..00
--- a/standalone-metastore/metastore-server/src/main/resources/saveVersion.sh
+++ /dev/null
@@ -1,91 +0,0 @@
-#!/usr/bin/env bash
-
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-# This file is used to generate the package-info.java class that
-# records the version, revision, branch, user, timestamp, and url
-unset LANG
-unset LC_CTYPE
-unset LC_TIME
-version=$1
-shortversion=$2
-src_dir=$3
-revision=$4
-branch=$5
-url=$6
-user=`whoami`
-date=`date`
-dir=`pwd`
-cwd=`dirname $dir`
-if [ "$revision" = "" ]; then
-if git rev-parse HEAD 2>/dev/null > /dev/null ; then
-revision=`git log -1 --pretty=format:"%H"`
-hostname=`hostname`
-branch=`git branch | sed -n -e 's/^* //p'`
-url="git://${hostname}${cwd}"
-elif [ -d .svn ]; then
-revision=`svn info ../ | sed -n -e 's/Last Changed Rev: \(.*\)/\1/p'`
-url=`svn info ../ | sed -n -e 's

[hive] branch master updated: HIVE-26103: Port Iceberg fixes to the iceberg module (#3164)

2022-04-04 Thread pvary

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 6e25a0151d HIVE-26103: Port Iceberg fixes to the iceberg module (#3164)
6e25a0151d is described below

commit 6e25a0151d54d2119a2a71e1212f606c8ca55a37
Author: pvary 
AuthorDate: Mon Apr 4 13:04:04 2022 +0200

HIVE-26103: Port Iceberg fixes to the iceberg module (#3164)

* Source Iceberg PR - Core: Remove deprecated APIs up to 0.13.0

* Revert "HIVE-25563: Iceberg table operations hang a long time if metadata 
is missing/corrupted (Adam Szita, reviewed by Marton Bod)" - applying instead  
Hive: Limit number of retries when metadata file is missing (#3379)

This reverts commit 7b600fe38f03b9790b193171a65e57f6a6970820.

* Source Iceberg PR - Hive: Limit number of retries when metadata file is 
missing (#3379)

* Source Iceberg PR - Hive: Fix RetryingMetaStoreClient for Hive 2.1 (#3403)

* Source Iceberg PR - Switch from new HashMap to Maps.newHashMap (#3648)

* Source Iceberg PR - Hive: HiveCatalog should remove HMS stats for certain 
engines based on config (#3652) - Use the Iceberg config property

* Source Iceberg PR - Core: If status check fails, commit should be unknown 
(#3717)

* Source Iceberg PR - Build: Add checkstyle rule for instantiating HashMap, 
HashSet, ArrayList (#3689)

* Source Iceberg PR - Test: Make sure to delete temp folders (#3790)

* Source Iceberg PR - API: Register existing tables in Iceberg HiveCatalog 
(#3851)

* Source Iceberg PR - Hive: Make Iceberg table filter optional in 
HiveCatalog (#3908)

* Source Iceberg PR - Core: Add reserved UUID Table Property and Expose in 
HMS. (#3914)

* Source Iceberg PR - Hive: Known exception should not become 
CommitStateUnknownException (#4261)

* Source Iceberg PR - Build: Add missing @OverRide annotations (#3654)
---
 .../java/org/apache/hadoop/hive/conf/HiveConf.java |  4 -
 iceberg/checkstyle/checkstyle.xml  | 15 
 .../java/org/apache/iceberg/hive/HiveCatalog.java  | 76 ++---
 .../java/org/apache/iceberg/hive/HiveCatalogs.java | 47 --
 .../org/apache/iceberg/hive/HiveClientPool.java|  9 +-
 .../apache/iceberg/hive/HiveSchemaConverter.java   |  4 +-
 .../org/apache/iceberg/hive/HiveSchemaUtil.java| 16 ++--
 .../apache/iceberg/hive/HiveTableOperations.java   | 62 --
 .../org/apache/iceberg/hive/HiveMetastoreTest.java |  6 +-
 .../org/apache/iceberg/hive/HiveTableTest.java | 75 +---
 .../org/apache/iceberg/hive/TestHiveCatalog.java   | 33 +++-
 .../org/apache/iceberg/hive/TestHiveCommits.java   | 36 ++--
 .../org/apache/iceberg/hive/TestHiveMetastore.java | 99 +-
 .../apache/iceberg/hive/TestHiveSchemaUtil.java|  8 +-
 .../org/apache/iceberg/mr/hive/Deserializer.java   |  7 +-
 .../iceberg/mr/hive/HiveIcebergStorageHandler.java |  3 +-
 .../org/apache/iceberg/mr/hive/HiveTableUtil.java  |  6 +-
 .../mr/mapreduce/IcebergInternalRecordWrapper.java |  4 +-
 .../HiveIcebergStorageHandlerWithEngineBase.java   | 10 +--
 .../iceberg/mr/hive/HiveIcebergTestUtils.java  |  8 +-
 .../iceberg/mr/hive/TestHiveIcebergInserts.java|  9 +-
 .../mr/hive/TestHiveIcebergOutputCommitter.java|  4 +-
 .../mr/hive/TestHiveIcebergSchemaEvolution.java|  4 +-
 .../iceberg/mr/hive/TestHiveIcebergStatistics.java |  8 +-
 .../TestHiveIcebergStorageHandlerLocalScan.java|  9 +-
 .../hive/TestHiveIcebergStorageHandlerNoScan.java  | 15 ++--
 .../TestHiveIcebergStorageHandlerTimezone.java |  2 +-
 ...eIcebergStorageHandlerWithMultipleCatalogs.java |  8 +-
 .../org/apache/iceberg/mr/hive/TestHiveShell.java  |  6 +-
 .../iceberg/mr/hive/TestIcebergInputFormats.java   | 12 +--
 .../org/apache/iceberg/mr/hive/TestTables.java |  9 +-
 .../positive/alter_multi_part_table_to_iceberg.q   |  2 +
 .../queries/positive/alter_part_table_to_iceberg.q |  2 +
 .../test/queries/positive/alter_table_to_iceberg.q |  2 +
 .../test/queries/positive/create_iceberg_table.q   |  2 +
 .../create_iceberg_table_stored_as_fileformat.q|  2 +
 .../create_iceberg_table_stored_by_iceberg.q   |  2 +
 ..._table_stored_by_iceberg_with_serdeproperties.q |  2 +
 .../test/queries/positive/describe_iceberg_table.q |  2 +
 .../queries/positive/show_create_iceberg_table.q   |  3 +
 .../positive/truncate_force_iceberg_table.q|  2 +
 .../test/queries/positive/truncate_iceberg_table.q |  2 +
 .../positive/truncate_partitioned_iceberg_table.q  |  2 +
 .../alter_multi_part_table_to_iceberg.q.out|  3 +
 .../positive/alter_part_table_to_iceberg.q.out |  3 +
 .../results/positive/alter_table_to_iceberg.q.out  |  3 +
 .../results/positive/create_iceberg_table.q.o

[hive] branch master updated: HIVE-26101: Port Iceberg Hive fix - Hive: Avoid recursive listing in HiveCatalog#renameTable (Peter Vary reviewed by Marton Bod) (#3163)

2022-04-01 Thread pvary

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 4641f6e  HIVE-26101: Port Iceberg Hive fix - Hive: Avoid recursive 
listing in HiveCatalog#renameTable (Peter Vary reviewed by Marton Bod) (#3163)
4641f6e is described below

commit 4641f6e484915cb8dfa5abedfc4732b68c88a758
Author: pvary 
AuthorDate: Fri Apr 1 13:51:27 2022 +0200

HIVE-26101: Port Iceberg Hive fix - Hive: Avoid recursive listing in 
HiveCatalog#renameTable (Peter Vary reviewed by Marton Bod) (#3163)
---
 .../java/org/apache/iceberg/hive/HiveCatalog.java  |  2 +-
 .../apache/iceberg/hive/HiveTableOperations.java   | 16 ++---
 .../org/apache/iceberg/hive/MetastoreUtil.java | 27 ++
 3 files changed, 30 insertions(+), 15 deletions(-)

diff --git 
a/iceberg/iceberg-catalog/src/main/java/org/apache/iceberg/hive/HiveCatalog.java
 
b/iceberg/iceberg-catalog/src/main/java/org/apache/iceberg/hive/HiveCatalog.java
index 880d60d..4737dd6 100644
--- 
a/iceberg/iceberg-catalog/src/main/java/org/apache/iceberg/hive/HiveCatalog.java
+++ 
b/iceberg/iceberg-catalog/src/main/java/org/apache/iceberg/hive/HiveCatalog.java
@@ -214,7 +214,7 @@ public class HiveCatalog extends BaseMetastoreCatalog 
implements SupportsNamespa
   table.setTableName(to.name());
 
   clients.run(client -> {
-client.alter_table(fromDatabase, fromName, table);
+MetastoreUtil.alterTable(client, fromDatabase, fromName, table);
 return null;
   });
 
diff --git 
a/iceberg/iceberg-catalog/src/main/java/org/apache/iceberg/hive/HiveTableOperations.java
 
b/iceberg/iceberg-catalog/src/main/java/org/apache/iceberg/hive/HiveTableOperations.java
index ccde59c..3ad450c 100644
--- 
a/iceberg/iceberg-catalog/src/main/java/org/apache/iceberg/hive/HiveTableOperations.java
+++ 
b/iceberg/iceberg-catalog/src/main/java/org/apache/iceberg/hive/HiveTableOperations.java
@@ -39,7 +39,6 @@ import org.apache.hadoop.hive.common.StatsSetupConst;
 import org.apache.hadoop.hive.conf.HiveConf;
 import org.apache.hadoop.hive.metastore.IMetaStoreClient;
 import org.apache.hadoop.hive.metastore.TableType;
-import org.apache.hadoop.hive.metastore.api.EnvironmentContext;
 import org.apache.hadoop.hive.metastore.api.LockComponent;
 import org.apache.hadoop.hive.metastore.api.LockLevel;
 import org.apache.hadoop.hive.metastore.api.LockRequest;
@@ -57,7 +56,6 @@ import org.apache.iceberg.Snapshot;
 import org.apache.iceberg.SnapshotSummary;
 import org.apache.iceberg.TableMetadata;
 import org.apache.iceberg.TableProperties;
-import org.apache.iceberg.common.DynMethods;
 import org.apache.iceberg.exceptions.AlreadyExistsException;
 import org.apache.iceberg.exceptions.CommitFailedException;
 import org.apache.iceberg.exceptions.CommitStateUnknownException;
@@ -93,14 +91,7 @@ public class HiveTableOperations extends 
BaseMetastoreTableOperations {
   private static final long HIVE_LOCK_CHECK_MIN_WAIT_MS_DEFAULT = 50; // 50 
milliseconds
   private static final long HIVE_LOCK_CHECK_MAX_WAIT_MS_DEFAULT = 5 * 1000; // 
5 seconds
   private static final long HIVE_TABLE_LEVEL_LOCK_EVICT_MS_DEFAULT = 
TimeUnit.MINUTES.toMillis(10);
-  private static final DynMethods.UnboundMethod ALTER_TABLE = 
DynMethods.builder("alter_table")
-  .impl(IMetaStoreClient.class, "alter_table_with_environmentContext",
-  String.class, String.class, Table.class, EnvironmentContext.class)
-  .impl(IMetaStoreClient.class, "alter_table",
-  String.class, String.class, Table.class, EnvironmentContext.class)
-  .impl(IMetaStoreClient.class, "alter_table",
-  String.class, String.class, Table.class)
-  .build();
+
   private static final BiMap<String, String> ICEBERG_TO_HMS_TRANSLATION = ImmutableBiMap.of(
   // gc.enabled in Iceberg and external.table.purge in Hive are meant to 
do the same things but with different names
   GC_ENABLED, "external.table.purge"
@@ -310,10 +301,7 @@ public class HiveTableOperations extends 
BaseMetastoreTableOperations {
   void persistTable(Table hmsTable, boolean updateHiveTable) throws 
TException, InterruptedException {
 if (updateHiveTable) {
   metaClients.run(client -> {
-EnvironmentContext envContext = new EnvironmentContext(
-ImmutableMap.of(StatsSetupConst.DO_NOT_UPDATE_STATS, 
StatsSetupConst.TRUE)
-);
-ALTER_TABLE.invoke(client, database, tableName, hmsTable, envContext);
+MetastoreUtil.alterTable(client, database, tableName, hmsTable);
 return null;
   });
 } else {
diff --git 
a/iceberg/iceberg-catalog/src/main/java/org/apache/iceberg/hive/MetastoreUtil.java
 
b/iceberg/iceberg-catalog/src/main/java/org/apache/iceberg/hive/MetastoreUtil.java
index ad0ec80..76363f1 100644
--- 
a/icebe
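The hunk above replaces Iceberg's DynMethods-based reflective lookup of alter_table with a direct MetastoreUtil.alterTable call. The idea behind the removed code, probing several client method signatures and invoking the first one present, can be sketched with plain java.lang.reflect; the class below is a minimal illustration, not the real Iceberg or HMS API:

```java
import java.lang.reflect.Method;

// Minimal sketch of the reflective-fallback pattern behind Iceberg's
// DynMethods builder: try candidate method signatures in preference
// order and keep the first one that exists on the target class.
// AlterTableResolver is an illustrative name, not a real Hive class.
public class AlterTableResolver {
    public static Method resolve(Class<?> target, String[] names, Class<?>[][] signatures) {
        for (int i = 0; i < names.length; i++) {
            try {
                // getMethod throws if this signature does not exist
                return target.getMethod(names[i], signatures[i]);
            } catch (NoSuchMethodException e) {
                // fall through and try the next (older) signature
            }
        }
        return null; // no candidate matched
    }
}
```

The removed ALTER_TABLE constant applied this pattern to three alter_table variants, so older metastore clients lacking the EnvironmentContext overload kept working; the resolved Method was then called via Method.invoke(client, args).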

[hive] branch master updated: HIVE-26099: Move patched-iceberg packages to org.apache.hive group (Peter Vary reviewed by Marton Bod) (#3161)

2022-03-31 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new ee4f535  HIVE-26099: Move patched-iceberg packages to org.apache.hive 
group (Peter Vary reviewed by Marton Bod) (#3161)
ee4f535 is described below

commit ee4f5355edc2932af04178f9f4ca404ababf5f59
Author: pvary 
AuthorDate: Thu Mar 31 20:51:48 2022 +0200

HIVE-26099: Move patched-iceberg packages to org.apache.hive group (Peter 
Vary reviewed by Marton Bod) (#3161)
---
 iceberg/iceberg-shading/pom.xml  | 2 +-
 iceberg/patched-iceberg-api/pom.xml  | 4 ++--
 iceberg/patched-iceberg-core/pom.xml | 4 ++--
 iceberg/pom.xml  | 4 ++--
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/iceberg/iceberg-shading/pom.xml b/iceberg/iceberg-shading/pom.xml
index fc2eb3b..d451ccf 100644
--- a/iceberg/iceberg-shading/pom.xml
+++ b/iceberg/iceberg-shading/pom.xml
@@ -35,7 +35,7 @@
   
   
 
-  org.apache.iceberg
+  org.apache.hive
   patched-iceberg-core
   true
 
diff --git a/iceberg/patched-iceberg-api/pom.xml 
b/iceberg/patched-iceberg-api/pom.xml
index 2170f7c..4858f34 100644
--- a/iceberg/patched-iceberg-api/pom.xml
+++ b/iceberg/patched-iceberg-api/pom.xml
@@ -7,7 +7,7 @@
 ../pom.xml
   
   4.0.0
-  org.apache.iceberg
+  org.apache.hive
   patched-iceberg-api
   patched-${iceberg.version}-${project.parent.version}
   Patched Iceberg API
@@ -48,7 +48,7 @@
   true
   
${project.build.directory}/classes
   
-
+  
 
   
 
diff --git a/iceberg/patched-iceberg-core/pom.xml 
b/iceberg/patched-iceberg-core/pom.xml
index 73842ba..3de5f88 100644
--- a/iceberg/patched-iceberg-core/pom.xml
+++ b/iceberg/patched-iceberg-core/pom.xml
@@ -7,7 +7,7 @@
 ../pom.xml
   
   4.0.0
-  org.apache.iceberg
+  org.apache.hive
   patched-iceberg-core
   patched-${iceberg.version}-${project.parent.version}
   Patched Iceberg Core
@@ -39,7 +39,7 @@
   true
 
 
-  org.apache.iceberg
+  org.apache.hive
   patched-iceberg-api
 
 
diff --git a/iceberg/pom.xml b/iceberg/pom.xml
index 7c84a50..8025f54 100644
--- a/iceberg/pom.xml
+++ b/iceberg/pom.xml
@@ -48,12 +48,12 @@
   
 
   
-org.apache.iceberg
+org.apache.hive
 patched-iceberg-api
 patched-${iceberg.version}-${project.parent.version}
   
   
-org.apache.iceberg
+org.apache.hive
 patched-iceberg-core
 patched-${iceberg.version}-${project.parent.version}
   


[hive-site] branch main updated: Hive 4.0.0-alpha-1 release.

2022-03-30 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/hive-site.git


The following commit(s) were added to refs/heads/main by this push:
 new 078dc17  Hive 4.0.0-alpha-1 release.
078dc17 is described below

commit 078dc17e566a5e142fa135cf826d6fbe2b066b03
Author: Peter Vary 
AuthorDate: Wed Mar 30 17:44:47 2022 +0200

Hive 4.0.0-alpha-1 release.
---
 downloads.md | 5 +
 javadoc.md   | 1 +
 2 files changed, 6 insertions(+)

diff --git a/downloads.md b/downloads.md
index e368a33..50f56a4 100644
--- a/downloads.md
+++ b/downloads.md
@@ -31,6 +31,10 @@ guaranteed to be stable. For stable releases, look in the 
stable
 directory.
 
 ## News
+### 30 March 2022: release 4.0.0-alpha-1 available
+This release works with Hadoop 3.x.y
+You can look at the complete [JIRA change log for this 
release][HIVE_4_0_0_A_1_CL].
+
 ### 9 June 2021: release 2.3.9 available
 This release works with Hadoop 2.x.y
 You can look at the complete [JIRA change log for this release][HIVE_2_3_9_CL].
@@ -185,6 +189,7 @@ This release  works with Hadoop 0.20.x, 0.23.x.y, 1.x.y, 
2.x.y
 You can look at the complete [JIRA change log for this release][HIVE_10_CL].
 
 [HIVE_DL]: http://www.apache.org/dyn/closer.cgi/hive/
+[HIVE_4_0_0_A_1_CL]: https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12351399&styleName=Html&projectId=12310843
[HIVE_3_1_2_CL]: https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12344397&styleName=Html&projectId=12310843
[HIVE_2_3_9_CL]: https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12350009&styleName=Text&projectId=12310843
[HIVE_2_3_8_CL]: https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12349428&styleName=Text&projectId=12310843
diff --git a/javadoc.md b/javadoc.md
index 16ab3a4..b520563 100644
--- a/javadoc.md
+++ b/javadoc.md
@@ -22,6 +22,7 @@ layout: default
 
 ## Recent versions:
 
+  * [Hive 4.0.0-alpha-1 Javadocs]({{ site.old_javadoc 
}}/r4.0.0-alpha-1/api/index.html)
   * [Hive 3.1.2 Javadocs]({{ site.old_javadoc }}/r3.1.2/api/index.html)
   * [Hive 3.0.0 Javadocs]({{ site.old_javadoc }}/r3.0.0/api/index.html)
   * [Hive 2.3.9 Javadocs]({{ site.old_javadoc }}/r2.3.9/api/index.html)


[hive-site] branch gh-pages updated: Hive 4.0.0-alpha-1 release.

2022-03-30 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch gh-pages
in repository https://gitbox.apache.org/repos/asf/hive-site.git


The following commit(s) were added to refs/heads/gh-pages by this push:
 new 6b491aa  Hive 4.0.0-alpha-1 release.
6b491aa is described below

commit 6b491aa2a67b2bbc7281bcc79b405581f83f83e7
Author: Peter Vary 
AuthorDate: Wed Mar 30 17:44:47 2022 +0200

Hive 4.0.0-alpha-1 release.
---
 downloads.md | 5 +
 javadoc.md   | 1 +
 2 files changed, 6 insertions(+)

diff --git a/downloads.md b/downloads.md
index e368a33..50f56a4 100644
--- a/downloads.md
+++ b/downloads.md
@@ -31,6 +31,10 @@ guaranteed to be stable. For stable releases, look in the 
stable
 directory.
 
 ## News
+### 30 March 2022: release 4.0.0-alpha-1 available
+This release works with Hadoop 3.x.y
+You can look at the complete [JIRA change log for this 
release][HIVE_4_0_0_A_1_CL].
+
 ### 9 June 2021: release 2.3.9 available
 This release works with Hadoop 2.x.y
 You can look at the complete [JIRA change log for this release][HIVE_2_3_9_CL].
@@ -185,6 +189,7 @@ This release  works with Hadoop 0.20.x, 0.23.x.y, 1.x.y, 
2.x.y
 You can look at the complete [JIRA change log for this release][HIVE_10_CL].
 
 [HIVE_DL]: http://www.apache.org/dyn/closer.cgi/hive/
+[HIVE_4_0_0_A_1_CL]: https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12351399&styleName=Html&projectId=12310843
[HIVE_3_1_2_CL]: https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12344397&styleName=Html&projectId=12310843
[HIVE_2_3_9_CL]: https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12350009&styleName=Text&projectId=12310843
[HIVE_2_3_8_CL]: https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12349428&styleName=Text&projectId=12310843
diff --git a/javadoc.md b/javadoc.md
index 16ab3a4..b520563 100644
--- a/javadoc.md
+++ b/javadoc.md
@@ -22,6 +22,7 @@ layout: default
 
 ## Recent versions:
 
+  * [Hive 4.0.0-alpha-1 Javadocs]({{ site.old_javadoc 
}}/r4.0.0-alpha-1/api/index.html)
   * [Hive 3.1.2 Javadocs]({{ site.old_javadoc }}/r3.1.2/api/index.html)
   * [Hive 3.0.0 Javadocs]({{ site.old_javadoc }}/r3.0.0/api/index.html)
   * [Hive 2.3.9 Javadocs]({{ site.old_javadoc }}/r2.3.9/api/index.html)


svn commit: r1078983 - in /websites/production/hive/content/javadocs/r4.0.0-alpha-1: ./ api/ api/org/ api/org/apache/ api/org/apache/hadoop/ api/org/apache/hadoop/fs/ api/org/apache/hadoop/fs/class-us

2022-03-30 Thread pvary
Author: pvary
Date: Wed Mar 30 14:09:54 2022
New Revision: 1078983

Log:
Hive 4.0.0-alpha-1 release.


[This commit notification would consist of 6007 parts, 
which exceeds the limit of 50, so it was shortened to the summary.]


svn commit: r53459 - /dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz /release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz

2022-03-30 Thread pvary
Author: pvary
Date: Wed Mar 30 13:24:04 2022
New Revision: 53459

Log:
Hive 4.0.0-alpha-1 release.

Added:
release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz
  - copied unchanged from r53458, 
dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz
Removed:
dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz



svn commit: r53458 - /dev/hive/hive-4.0.0-alpha-1/

2022-03-30 Thread pvary
Author: pvary
Date: Wed Mar 30 13:16:46 2022
New Revision: 53458

Log:
Hive 4.0.0-alpha-1 release.

Added:
dev/hive/hive-4.0.0-alpha-1/
dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz   (with 
props)
dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz.asc
dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz.sha256
dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz   (with 
props)
dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz.asc
dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz.sha256

Added: dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz
==
Binary file - no diff available.

Propchange: dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz.asc
==
--- dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz.asc (added)
+++ dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz.asc Wed 
Mar 30 13:16:46 2022
@@ -0,0 +1,16 @@
+-----BEGIN PGP SIGNATURE-----
+
+iQIzBAABCAAdFiEEkH97dYmX9gu6bM3moUZzH9azxfoFAmI8O1kACgkQoUZzH9az
+xfrWoBAAsf2EOngd/mrI1hf0UlCpm9GR8vEein/iQpRSOYNk2j1HMFtyYtrTDtHO
+Zlaf+XV2LnGWdqGg/xZqpgA6PEPGr9JMo053lTDf0Y2camVRrdG8MlhjuloeRen6
+rDhNluTO9CIzVzbkNcIaRNTqS00iCHodrnQUCMOsWbBQXkO6gfFn3emCCwbfIYEk
+606NrZG0YBZ5qb7qLhyyMbYk+oB3qp38vmg9Z6YJV3nNq4sXn5lvUiOccjyDUWFK
+Tcp1Shly2yGNiJkgcm3ITm0A2hejEyQyD0MkKOd2FQaQLTU2fMxi3uplzB1hiWcO
+t7h7z84K65MGdKG9FZQAY+2wHgM9aP4hnJPb3u5TFVMmY2WQlSgKKbMtRT1wRjio
+n3dyKUY7igBWucF6QPtVbRSYSrlObD0NK2+/WzEPV3QfmNDKe/k7DyJ2WRcWHSFE
+U/teFTGZzAuhWXIvrdxK7yGFqw1E3g8m2f8/g4q1zeKFcfm7F2EGJVPhBC8NMjnu
+ksVlsysrcNJlRNx4TOJF9uzgoLYay2MEsg1ghE8NlLT+R4Wka8dKhhipRwITAX/e
+VtnlhK2Ymn1SHij/jG720Hk3Za8Uh929IUHoLIkCLl5AqoHfr3n25bkpbi8nfpYI
+jo2LWMdh971TQKQgBK1EnBvhXb72iLVC7IAD5w4+H1Z1SMRvFfY=
+=k711
+-----END PGP SIGNATURE-----

Added: dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz.sha256
==
--- dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz.sha256 
(added)
+++ dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz.sha256 Wed 
Mar 30 13:16:46 2022
@@ -0,0 +1 @@
+1e450197dbf847696b05042eb68b78b968064f1f1b369a7fb0b77a6329a27809  
apache-hive-4.0.0-alpha-1-bin.tar.gz

Added: dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz
==
Binary file - no diff available.

Propchange: dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz.asc
==
--- dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz.asc (added)
+++ dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz.asc Wed 
Mar 30 13:16:46 2022
@@ -0,0 +1,16 @@
+-----BEGIN PGP SIGNATURE-----
+
+iQIzBAABCAAdFiEEkH97dYmX9gu6bM3moUZzH9azxfoFAmI8O28ACgkQoUZzH9az
+xfp0Mw//audCWztkNBXXfBCRi/wuu5H1Xsq71CnwFfzg2kefSl6Mx02AAk3bUZ7L
+ePb5lPQ023c0g8/e5aiYv1kmYz+PQ80qgRhERYWi+/z6DRhZvtGGl5e9LJagQbFX
+DWFP5UwF+ns6LR5zAQlMhqUD9ySab3allz2UaK+Bo2R2pBWMowZribq9rFGXRD1E
+Vk2S/7zhTjYx5m5tga34lzdocvFJnQ4AV9PdxOTC0G0BQAGGf7/4cFB3lnOpWwjf
+GUY1lmWyd4CfznyuuohmASofcJ3/v9RAFTFeH9HNZt8MWNU4Ta0qDn/kecLfY0tl
+HM3uqHFznvZJRljfUGpajuiJzNWVgq7yBW72B069IsiJiEqaYPKRpHuIaFZNjSvH
+pUPeHG3Qs7to6j/3e2HPtvlzJ47+AjYn2uUSY0iaDmwBoM6ujL5p5zS2O51n1p/o
+Ka0B4v7b7K9rlLboBe9GuNS7mA6x43AjRTQCDGqd/xhN4k7U6ZoOnngkuffVa+cO
+pgHhwV/hhUVAy38bAj+7KYQqNBAaJ59fnANAzq6ZEVr0owXrwaaSNDK5+QNQFsXf
+oYJhReGBDTstRufWXUKFhDTVHsz0Qew+1x6KsC1ILoOUWJws5HjBf1C4+46Sp5uW
+dARSDiNVwQLTpT57j5zuK21a9HdE/lay2hoMguSyggyxvCwVRx0=
+=v6mU
+-----END PGP SIGNATURE-----

Added: dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz.sha256
==
--- dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz.sha256 
(added)
+++ dev/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz.sha256 Wed 
Mar 30 13:16:46 2022
@@ -0,0 +1 @@
+a21a609ec2e30f8cc656242c545bb3a04de21c2a1eee90808648e3aa4bf3d04e  
apache-hive-4.0.0-alpha-1-src.tar.gz




[hive] branch master updated: HIVE-26064: For Iceberg external table do not set external.table.purge=true by default (Peter Vary reviewed by Marton Bod) (#3132)

2022-03-30 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 6c0b86e  HIVE-26064: For Iceberg external table do not set 
external.table.purge=true by default (Peter Vary reviewed by Marton Bod) (#3132)
6c0b86e is described below

commit 6c0b86ef0cfc67c5acb3468408e1d46fa6ef8024
Author: pvary 
AuthorDate: Wed Mar 30 14:05:55 2022 +0200

HIVE-26064: For Iceberg external table do not set external.table.purge=true 
by default (Peter Vary reviewed by Marton Bod) (#3132)
---
 .../iceberg/mr/hive/HiveIcebergMetaHook.java   |  3 +-
 .../hive/TestHiveIcebergStorageHandlerNoScan.java  | 79 +++--
 .../mr/hive/TestHiveIcebergTruncateTable.java  | 22 ++
 .../truncate_iceberg_table_external_purge_false.q  |  8 ---
 .../test/queries/positive/truncate_iceberg_table.q |  9 +++
 ...uncate_iceberg_table_external_purge_false.q.out | 29 
 .../alter_multi_part_table_to_iceberg.q.out|  3 -
 .../positive/alter_part_table_to_iceberg.q.out |  3 -
 .../results/positive/alter_table_to_iceberg.q.out  |  3 -
 .../results/positive/create_iceberg_table.q.out|  1 -
 ...create_iceberg_table_stored_as_fileformat.q.out |  5 --
 .../create_iceberg_table_stored_by_iceberg.q.out   |  1 -
 ...le_stored_by_iceberg_with_serdeproperties.q.out |  1 -
 .../results/positive/describe_iceberg_table.q.out  |  4 --
 .../positive/show_create_iceberg_table.q.out   |  4 --
 .../results/positive/truncate_iceberg_table.q.out  | 82 ++
 .../table/misc/truncate/TruncateTableAnalyzer.java | 18 +++--
 17 files changed, 180 insertions(+), 95 deletions(-)

diff --git 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergMetaHook.java
 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergMetaHook.java
index cb036dd..cb72480 100644
--- 
a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergMetaHook.java
+++ 
b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergMetaHook.java
@@ -96,8 +96,7 @@ import org.slf4j.LoggerFactory;
 public class HiveIcebergMetaHook implements HiveMetaHook {
   private static final Logger LOG = 
LoggerFactory.getLogger(HiveIcebergMetaHook.class);
   public static final Map COMMON_HMS_PROPERTIES = 
ImmutableMap.of(
-  BaseMetastoreTableOperations.TABLE_TYPE_PROP, 
BaseMetastoreTableOperations.ICEBERG_TABLE_TYPE_VALUE.toUpperCase(),
-  InputFormatConfig.EXTERNAL_TABLE_PURGE, "TRUE"
+  BaseMetastoreTableOperations.TABLE_TYPE_PROP, 
BaseMetastoreTableOperations.ICEBERG_TABLE_TYPE_VALUE.toUpperCase()
   );
   private static final Set PARAMETERS_TO_REMOVE = ImmutableSet
   .of(InputFormatConfig.TABLE_SCHEMA, Catalogs.LOCATION, Catalogs.NAME, 
InputFormatConfig.PARTITION_SPEC);
diff --git 
a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergStorageHandlerNoScan.java
 
b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergStorageHandlerNoScan.java
index 8c45921..cabea6d 100644
--- 
a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergStorageHandlerNoScan.java
+++ 
b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergStorageHandlerNoScan.java
@@ -34,6 +34,7 @@ import java.util.stream.Collectors;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
 import org.apache.hadoop.hive.metastore.api.EnvironmentContext;
 import org.apache.hadoop.hive.metastore.api.FieldSchema;
 import org.apache.hadoop.hive.metastore.api.hive_metastoreConstants;
@@ -408,6 +409,7 @@ public class TestHiveIcebergStorageHandlerNoScan {
 "'" + InputFormatConfig.PARTITION_SPEC + "'='" +
 PartitionSpecParser.toJson(PartitionSpec.unpartitioned()) + "', " +
 "'dummy'='test', " +
+"'" + InputFormatConfig.EXTERNAL_TABLE_PURGE + "'='TRUE', " +
 "'" + InputFormatConfig.CATALOG_NAME + "'='" + 
testTables.catalogName() + "')");
 
 // Check the Iceberg table data
@@ -465,7 +467,7 @@ public class TestHiveIcebergStorageHandlerNoScan {
 " last_name STRING COMMENT 'This is last name')" +
 " STORED BY ICEBERG " +
 testTables.locationForCreateTableSQL(identifier) +
-testTables.propertiesForCreateTableSQL(ImmutableMap.of());
+
testTables.propertiesForCreateTableSQL(ImmutableMap.of(InputFormatConfig.EXTERNAL_TABLE_PURGE,
 "TRUE"));
 shell.executeStatement(createSql);
 
 Table icebergTable = testTables.loadTable(identifier);
@@ -784,7 +786,8 @@ public class TestHiveIcebergSt
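The change above removes external.table.purge from the property map that HiveIcebergMetaHook applies to every new Iceberg table, making data purging a per-table opt-in (the tests now pass the flag explicitly in TBLPROPERTIES). A minimal sketch of that default-plus-override merge, with illustrative constant and helper names:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the new behavior: common HMS defaults no longer include
// 'external.table.purge'; user-supplied TBLPROPERTIES are layered on
// top, so purging happens only when requested. Names are illustrative.
public class IcebergTableDefaults {
    public static final Map<String, String> COMMON_PROPERTIES =
            Map.of("table_type", "ICEBERG"); // purge flag intentionally absent

    public static Map<String, String> effectiveProperties(Map<String, String> userProps) {
        Map<String, String> merged = new HashMap<>(COMMON_PROPERTIES);
        merged.putAll(userProps); // per-table settings win over defaults
        return merged;
    }
}
```

With this ordering, a table created without the flag is never purged by default, while `'external.table.purge'='TRUE'` in TBLPROPERTIES restores the old behavior for that table only.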

svn commit: r53453 - /release/hive/hive-4.0.0-alpha-1/

2022-03-30 Thread pvary
Author: pvary
Date: Wed Mar 30 11:16:26 2022
New Revision: 53453

Log:
Hive 4.0.0-alpha-1 release.

Added:
release/hive/hive-4.0.0-alpha-1/
release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz.asc
release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz.sha256
release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz   
(with props)
release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz.asc
release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz.sha256

Added: release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz.asc
==
--- release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz.asc 
(added)
+++ release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz.asc 
Wed Mar 30 11:16:26 2022
@@ -0,0 +1,16 @@
+-----BEGIN PGP SIGNATURE-----
+
+iQIzBAABCAAdFiEEkH97dYmX9gu6bM3moUZzH9azxfoFAmI8O1kACgkQoUZzH9az
+xfrWoBAAsf2EOngd/mrI1hf0UlCpm9GR8vEein/iQpRSOYNk2j1HMFtyYtrTDtHO
+Zlaf+XV2LnGWdqGg/xZqpgA6PEPGr9JMo053lTDf0Y2camVRrdG8MlhjuloeRen6
+rDhNluTO9CIzVzbkNcIaRNTqS00iCHodrnQUCMOsWbBQXkO6gfFn3emCCwbfIYEk
+606NrZG0YBZ5qb7qLhyyMbYk+oB3qp38vmg9Z6YJV3nNq4sXn5lvUiOccjyDUWFK
+Tcp1Shly2yGNiJkgcm3ITm0A2hejEyQyD0MkKOd2FQaQLTU2fMxi3uplzB1hiWcO
+t7h7z84K65MGdKG9FZQAY+2wHgM9aP4hnJPb3u5TFVMmY2WQlSgKKbMtRT1wRjio
+n3dyKUY7igBWucF6QPtVbRSYSrlObD0NK2+/WzEPV3QfmNDKe/k7DyJ2WRcWHSFE
+U/teFTGZzAuhWXIvrdxK7yGFqw1E3g8m2f8/g4q1zeKFcfm7F2EGJVPhBC8NMjnu
+ksVlsysrcNJlRNx4TOJF9uzgoLYay2MEsg1ghE8NlLT+R4Wka8dKhhipRwITAX/e
+VtnlhK2Ymn1SHij/jG720Hk3Za8Uh929IUHoLIkCLl5AqoHfr3n25bkpbi8nfpYI
+jo2LWMdh971TQKQgBK1EnBvhXb72iLVC7IAD5w4+H1Z1SMRvFfY=
+=k711
+-----END PGP SIGNATURE-----

Added: 
release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz.sha256
==
--- release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz.sha256 
(added)
+++ release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-bin.tar.gz.sha256 
Wed Mar 30 11:16:26 2022
@@ -0,0 +1 @@
+1e450197dbf847696b05042eb68b78b968064f1f1b369a7fb0b77a6329a27809  
apache-hive-4.0.0-alpha-1-bin.tar.gz

Added: release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz
==
Binary file - no diff available.

Propchange: release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz
--
svn:mime-type = application/octet-stream

Added: release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz.asc
==
--- release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz.asc 
(added)
+++ release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz.asc 
Wed Mar 30 11:16:26 2022
@@ -0,0 +1,16 @@
+-----BEGIN PGP SIGNATURE-----
+
+iQIzBAABCAAdFiEEkH97dYmX9gu6bM3moUZzH9azxfoFAmI8O28ACgkQoUZzH9az
+xfp0Mw//audCWztkNBXXfBCRi/wuu5H1Xsq71CnwFfzg2kefSl6Mx02AAk3bUZ7L
+ePb5lPQ023c0g8/e5aiYv1kmYz+PQ80qgRhERYWi+/z6DRhZvtGGl5e9LJagQbFX
+DWFP5UwF+ns6LR5zAQlMhqUD9ySab3allz2UaK+Bo2R2pBWMowZribq9rFGXRD1E
+Vk2S/7zhTjYx5m5tga34lzdocvFJnQ4AV9PdxOTC0G0BQAGGf7/4cFB3lnOpWwjf
+GUY1lmWyd4CfznyuuohmASofcJ3/v9RAFTFeH9HNZt8MWNU4Ta0qDn/kecLfY0tl
+HM3uqHFznvZJRljfUGpajuiJzNWVgq7yBW72B069IsiJiEqaYPKRpHuIaFZNjSvH
+pUPeHG3Qs7to6j/3e2HPtvlzJ47+AjYn2uUSY0iaDmwBoM6ujL5p5zS2O51n1p/o
+Ka0B4v7b7K9rlLboBe9GuNS7mA6x43AjRTQCDGqd/xhN4k7U6ZoOnngkuffVa+cO
+pgHhwV/hhUVAy38bAj+7KYQqNBAaJ59fnANAzq6ZEVr0owXrwaaSNDK5+QNQFsXf
+oYJhReGBDTstRufWXUKFhDTVHsz0Qew+1x6KsC1ILoOUWJws5HjBf1C4+46Sp5uW
+dARSDiNVwQLTpT57j5zuK21a9HdE/lay2hoMguSyggyxvCwVRx0=
+=v6mU
+-----END PGP SIGNATURE-----

Added: 
release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz.sha256
==
--- release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz.sha256 
(added)
+++ release/hive/hive-4.0.0-alpha-1/apache-hive-4.0.0-alpha-1-src.tar.gz.sha256 
Wed Mar 30 11:16:26 2022
@@ -0,0 +1 @@
+a21a609ec2e30f8cc656242c545bb3a04de21c2a1eee90808648e3aa4bf3d04e  
apache-hive-4.0.0-alpha-1-src.tar.gz
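Each tarball above ships with a .sha256 sidecar so downloaders can check integrity: recompute the digest locally and compare it with the published value (the .asc files additionally allow a GPG signature check). A small Java sketch of the checksum comparison; the Sha256Verify helper is illustrative, while the artifact names and digests are the real release values:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative helper: recompute SHA-256 of a downloaded artifact and
// compare it with the digest published in the .sha256 sidecar file.
public class Sha256Verify {
    // Hex-encode the SHA-256 digest of a byte array.
    public static String sha256Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }

    // True when the artifact's digest matches the published hex string.
    public static boolean verify(Path artifact, String expectedHex) throws IOException {
        return sha256Hex(Files.readAllBytes(artifact)).equalsIgnoreCase(expectedHex.trim());
    }
}
```

For example, verify(Path.of("apache-hive-4.0.0-alpha-1-bin.tar.gz"), "1e450197...") should return true only for an intact download of the binary tarball.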




[hive] annotated tag release-4.0.0-alpha-1-rc1 deleted (was 68e6d3e)

2022-03-30 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a change to annotated tag release-4.0.0-alpha-1-rc1
in repository https://gitbox.apache.org/repos/asf/hive.git.


*** WARNING: tag release-4.0.0-alpha-1-rc1 was deleted! ***

   tag was  68e6d3e

The revisions that were on this annotated tag are still contained in
other references; therefore, this change does not discard any commits
from the repository.


[hive] annotated tag rel/release-4.0.0-alpha-1 created (now 915af5e)

2022-03-30 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a change to annotated tag rel/release-4.0.0-alpha-1
in repository https://gitbox.apache.org/repos/asf/hive.git.


  at 915af5e  (tag)
 tagging 68e6d3e10b823ec36231ec8cfcb78d2ce9b24467 (tag)
  length 179 bytes
  by Peter Vary
  on Wed Mar 30 12:24:18 2022 +0200

- Log -
Hive 4.0.0-alpha-1 release.
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEkH97dYmX9gu6bM3moUZzH9azxfoFAmJEL9IACgkQoUZzH9az
xfqQ0g//U1w+7Vp075CG1Suo4UOgN8euyowbGvQcVZ2wltznZgU+WeiBUbOKoxTL
3aPliDDeW1H3R6vsbRElEszonmDs7bWbd4sUCyA9TCFIgLVGgZ8ejmL7oNn/GqPT
qN6P8fI/L8l1UWTKfZ7V3uJWt1aWmEwU2kyacDRS+E8mPFE2dPbrsctSu4NUm1GJ
A6x17JP1Rhx1HP4YpAGtrwOzIgcTS1/NDFOZuAPDtMgOehB+kJzdR47k0BLp2SL3
V+8YDx+myfyfzMIVJGx2nIGiqFK5vmR9RzRIFgK1NBtmbt3HbmcTegA/FTM0FqCB
DIMgkqtu/AYhVc6UZT8u2QaMBJy6Ns27pkEMC7a+PYnuSFn6fPvyHBUos0SN1/ro
Wy2g6lMBfWTdUxZtCjiHpUx1IHzfhd9FEsO2WIKdNgReNWoh/X7JH3I8jpPU+KdH
nY5SkoRTCGE73tIOkyHyij/DiP2Ld0eqa3a2yycWh1z1lifVQNqPk0FQu/+yOCM8
jcvp4TQaXr6s85S1fSFYtsTDOUUditVqQQm3WmV+3hfoijynIfQG9C+rhTp38OSv
hd7IiHBVRaBvPpFpheivXaubunyPWjv5EZnEzM/gsayWBE/jXIdmYyvjdqbx/lJz
NSxgDv8O3R770y/P3O0xv/EGe/vz/ipPqXiozPOg5zLzHYx2Grw=
=MM2C
-----END PGP SIGNATURE-----
---

No new revisions were added by this update.


[hive] branch master updated: Disable flaky test: TestAsyncPbRpcProxy.testSingleInvocationPerNode

2022-03-30 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new cc6c02d  Disable flaky test: TestAsyncPbRpcProxy.testSingleInvocationPerNode
cc6c02d is described below

commit cc6c02da94e0dd65dd1501ce3ca5d37980805cfd
Author: Peter Vary 
AuthorDate: Wed Mar 30 10:49:27 2022 +0200

Disable flaky test: TestAsyncPbRpcProxy.testSingleInvocationPerNode
---
 .../src/test/org/apache/hadoop/hive/llap/TestAsyncPbRpcProxy.java| 1 +
 1 file changed, 1 insertion(+)

diff --git 
a/llap-client/src/test/org/apache/hadoop/hive/llap/TestAsyncPbRpcProxy.java 
b/llap-client/src/test/org/apache/hadoop/hive/llap/TestAsyncPbRpcProxy.java
index 9ae..f71e5d3 100644
--- a/llap-client/src/test/org/apache/hadoop/hive/llap/TestAsyncPbRpcProxy.java
+++ b/llap-client/src/test/org/apache/hadoop/hive/llap/TestAsyncPbRpcProxy.java
@@ -61,6 +61,7 @@ public class TestAsyncPbRpcProxy {
 assertEquals(0, requestManager.currentLoopDisabledNodes.size());
   }
 
+  @org.junit.Ignore("HIVE-26089")
   @Test(timeout = 5000)
   public void testSingleInvocationPerNode() throws Exception {
 RequestManagerForTest requestManager = new RequestManagerForTest(1);


[hive] branch master updated (bf84d8a -> 73c54c0)

2022-03-29 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from bf84d8a  HIVE-26081: Upgrade ant to 1.10.9 (Ashish Sharma, reviewed by 
Adesh Rao)
 add 73c54c0  HIVE-26077: Implement CTAS for Iceberg tables with partition 
spec (Peter Vary reviewed by Marton Bod) (#3147)

No new revisions were added by this update.

Summary of changes:
 .../apache/iceberg/mr/hive/HiveIcebergSerDe.java   | 12 +++--
 .../iceberg/mr/hive/HiveIcebergTestUtils.java  |  7 ++-
 .../iceberg/mr/hive/TestHiveIcebergCTAS.java   | 62 ++
 .../org/apache/iceberg/mr/hive/TestTables.java |  4 +-
 4 files changed, 79 insertions(+), 6 deletions(-)


[hive] branch master updated: Disable flaky test TestParseDriver.testExoticSJSSubQuery

2022-03-29 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 977bfd2  Disable flaky test TestParseDriver.testExoticSJSSubQuery
977bfd2 is described below

commit 977bfd2a02dfb351c3a33c75e2ad26d183a3bb96
Author: Peter Vary 
AuthorDate: Tue Mar 29 09:21:01 2022 +0200

Disable flaky test TestParseDriver.testExoticSJSSubQuery
---
 parser/src/test/org/apache/hadoop/hive/ql/parse/TestParseDriver.java | 1 +
 1 file changed, 1 insertion(+)

diff --git 
a/parser/src/test/org/apache/hadoop/hive/ql/parse/TestParseDriver.java 
b/parser/src/test/org/apache/hadoop/hive/ql/parse/TestParseDriver.java
index 1aa1a40..672bd88 100644
--- a/parser/src/test/org/apache/hadoop/hive/ql/parse/TestParseDriver.java
+++ b/parser/src/test/org/apache/hadoop/hive/ql/parse/TestParseDriver.java
@@ -272,6 +272,7 @@ public class TestParseDriver {
 }
   }
 
+  @org.junit.Ignore("HIVE-26083")
   @Test(timeout = 1)
   public void testExoticSJSSubQuery() throws Exception {
 ExoticQueryBuilder eqb = new ExoticQueryBuilder();


[hive] branch master updated: HIVE-26061: Do not add 'from deserializer' comment upon alter commands for Iceberg tables (Peter Vary reviewed by Marton Bod) (#3129)

2022-03-28 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new b05bc6e  HIVE-26061: Do not add 'from deserializer' comment upon alter 
commands for Iceberg tables (Peter Vary reviewed by Marton Bod) (#3129)
b05bc6e is described below

commit b05bc6e33d4dd253cf00d8f1c28bb0b2a8f50750
Author: pvary 
AuthorDate: Mon Mar 28 09:52:15 2022 +0200

HIVE-26061: Do not add 'from deserializer' comment upon alter commands for 
Iceberg tables (Peter Vary reviewed by Marton Bod) (#3129)
---
 .../org/apache/iceberg/hive/TestHiveMetastore.java |   3 +-
 .../mr/hive/TestHiveIcebergSchemaEvolution.java|  16 +-
 .../hive/TestHiveIcebergStorageHandlerNoScan.java  |  33 ++-
 .../test/queries/positive/llap_iceberg_read_orc.q  |   4 +-
 .../alter_multi_part_table_to_iceberg.q.out|  18 +-
 .../positive/alter_part_table_to_iceberg.q.out |  12 +-
 .../results/positive/alter_table_to_iceberg.q.out  |  12 +-
 .../results/positive/create_iceberg_table.q.out|   8 +-
 ...create_iceberg_table_stored_as_fileformat.q.out |  40 ++--
 .../create_iceberg_table_stored_by_iceberg.q.out   |   8 +-
 ...le_stored_by_iceberg_with_serdeproperties.q.out |   8 +-
 .../describe_iceberg_metadata_tables.q.out | 228 ++---
 .../results/positive/describe_iceberg_table.q.out  |  50 ++---
 .../positive/llap/llap_iceberg_read_orc.q.out  |   8 +-
 .../llap/vectorized_iceberg_read_mixed.q.out   |   8 +-
 .../llap/vectorized_iceberg_read_orc.q.out |   8 +-
 .../llap/vectorized_iceberg_read_parquet.q.out |   8 +-
 .../positive/show_create_iceberg_table.q.out   |  42 ++--
 .../positive/truncate_force_iceberg_table.q.out|   8 +-
 .../results/positive/truncate_iceberg_table.q.out  |  16 +-
 .../truncate_partitioned_iceberg_table.q.out   |   8 +-
 .../positive/vectorized_iceberg_read_mixed.q.out   |   8 +-
 .../positive/vectorized_iceberg_read_orc.q.out |   8 +-
 .../positive/vectorized_iceberg_read_parquet.q.out |   8 +-
 .../hive/serde2/TestSerdeWithFieldComments.java|   3 +-
 .../hadoop/hive/metastore/HiveMetaStoreUtils.java  |  17 +-
 .../hive/metastore/SerDeStorageSchemaReader.java   |   2 +-
 .../java/org/apache/hadoop/hive/ql/Compiler.java   |   3 +-
 .../update/AlterTableUpdateColumnsOperation.java   |   3 +-
 .../ql/ddl/table/info/desc/DescTableOperation.java |   8 +-
 .../storage/serde/AlterTableSetSerdeOperation.java |   2 +-
 .../org/apache/hadoop/hive/ql/metadata/Hive.java   |  20 +-
 .../apache/hadoop/hive/ql/metadata/Partition.java  |   5 +-
 .../org/apache/hadoop/hive/ql/metadata/Table.java  |   5 +-
 .../hadoop/hive/metastore/conf/MetastoreConf.java  |   4 +
 35 files changed, 325 insertions(+), 317 deletions(-)

diff --git 
a/iceberg/iceberg-catalog/src/test/java/org/apache/iceberg/hive/TestHiveMetastore.java
 
b/iceberg/iceberg-catalog/src/test/java/org/apache/iceberg/hive/TestHiveMetastore.java
index 0d86d6f..a956d36 100644
--- 
a/iceberg/iceberg-catalog/src/test/java/org/apache/iceberg/hive/TestHiveMetastore.java
+++ 
b/iceberg/iceberg-catalog/src/test/java/org/apache/iceberg/hive/TestHiveMetastore.java
@@ -32,6 +32,7 @@ import org.apache.hadoop.hive.metastore.IHMSHandler;
 import org.apache.hadoop.hive.metastore.IMetaStoreClient;
 import org.apache.hadoop.hive.metastore.RetryingHMSHandler;
 import org.apache.hadoop.hive.metastore.TSetIpAddressProcessor;
+import org.apache.hadoop.hive.metastore.api.GetTableRequest;
 import org.apache.hadoop.hive.metastore.api.Table;
 import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
 import org.apache.hadoop.hive.metastore.utils.TestTxnDbUtil;
@@ -196,7 +197,7 @@ public class TestHiveMetastore {
   }
 
   public Table getTable(String dbName, String tableName) throws TException, 
InterruptedException {
-return clientPool.run(client -> client.getTable(dbName, tableName));
+return clientPool.run(client -> client.getTable(new 
GetTableRequest(dbName, tableName)));
   }
 
   public Table getTable(TableIdentifier identifier) throws TException, 
InterruptedException {
diff --git 
a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergSchemaEvolution.java
 
b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergSchemaEvolution.java
index 076d6af..fdf9392 100644
--- 
a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergSchemaEvolution.java
+++ 
b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergSchemaEvolution.java
@@ -58,7 +58,7 @@ public class TestHiveIcebergSchemaEvolution extends HiveIcebergStorageHandlerWit

    Assert.assertEquals(HiveIcebergStorageHandlerTestUtils.CUSTOMER_SCHEMA.columns().size(), rows.size());
    for (int i = 0; i < HiveIcebergStorageHandlerTestUtils.CUSTOMER_SCHEMA.columns().

[hive] branch master updated: HIVE-26069: Remove unnecessary items from the .gitignore (Peter Vary reviewed by Stamatis Zampetakis) (#3139)

2022-03-25 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 4684ab7  HIVE-26069: Remove unnecessary items from the .gitignore (Peter Vary reviewed by Stamatis Zampetakis) (#3139)
4684ab7 is described below

commit 4684ab72e1d6220bef056a6dfa4ff0fb0ed92b76
Author: pvary 
AuthorDate: Sat Mar 26 06:29:43 2022 +0100

HIVE-26069: Remove unnecessary items from the .gitignore (Peter Vary reviewed by Stamatis Zampetakis) (#3139)
---
 .gitignore| 15 ---
 standalone-metastore/metastore-server/pom.xml |  2 ++
 2 files changed, 2 insertions(+), 15 deletions(-)

diff --git a/.gitignore b/.gitignore
index 83859c9..b421ed49 100644
--- a/.gitignore
+++ b/.gitignore
@@ -9,35 +9,20 @@ build-eclipse
 *.launch
 *.metadata
 *~
-metastore_db
 common/src/gen
 .idea
 *.iml
 *.ipr
 *.iws
 *.swp
-derby.log
-datanucleus.log
 .arc
-TempStatsStore/
 target/
-ql/TempStatsStore
-hcatalog/hcatalog-pig-adapter/target
-hcatalog/server-extensions/target
-hcatalog/core/target
-hcatalog/webhcat/java-client/target
-hcatalog/storage-handlers/hbase/target
-hcatalog/webhcat/svr/target
 conf/hive-default.xml.template
-itests/hive-blobstore/src/test/resources/blobstore-conf.xml
 .DS_Store
 .factorypath
-patchprocess
 standalone-metastore/src/gen/version
 standalone-metastore/metastore-common/src/gen/version
 standalone-metastore/metastore-server/src/gen/version
-launch.json
-settings.json
 kafka-handler/src/test/gen
 **/.vscode/
 /.recommenders/
diff --git a/standalone-metastore/metastore-server/pom.xml b/standalone-metastore/metastore-server/pom.xml
index 367bb79..45322e0 100644
--- a/standalone-metastore/metastore-server/pom.xml
+++ b/standalone-metastore/metastore-server/pom.xml
@@ -573,6 +573,8 @@
 ${test.tmp.dir}
 ${test.tmp.dir}
 true
+${derby.version}
+${test.tmp.dir}/derby.log
   
   
 
${log4j.conf.dir}


[hive] branch master updated: HIVE-26070: Remove the generated files from the source tarball (Peter Vary reviewed by Stamatis Zampetakis) (#3141)

2022-03-25 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 43c41d0  HIVE-26070: Remove the generated files from the source tarball (Peter Vary reviewed by Stamatis Zampetakis) (#3141)
43c41d0 is described below

commit 43c41d0505089f780557e2abf965ed67101b2467
Author: pvary 
AuthorDate: Sat Mar 26 06:27:59 2022 +0100

HIVE-26070: Remove the generated files from the source tarball (Peter Vary reviewed by Stamatis Zampetakis) (#3141)
---
 packaging/src/main/assembly/src.xml | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/packaging/src/main/assembly/src.xml b/packaging/src/main/assembly/src.xml
index 8fc5bdc..d300816 100644
--- a/packaging/src/main/assembly/src.xml
+++ b/packaging/src/main/assembly/src.xml
@@ -41,6 +41,12 @@
 **/.project
 **/.settings/**
 **/thirdparty/**
+standalone-metastore/metastore-common/src/gen/version/**
+standalone-metastore/metastore-server/src/gen/**
+common/src/gen/**
+kafka-handler/src/test/gen/**
+conf/hive-default.xml.template
+**/dependency-reduced-pom.xml
   
 
   


[hive] branch master updated: Disable flaky test

2022-03-25 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 5d8dae8  Disable flaky test
5d8dae8 is described below

commit 5d8dae8c37d7e21514db8b74e7ebf74a99ef1df1
Author: Peter Vary 
AuthorDate: Fri Mar 25 07:46:43 2022 +0100

Disable flaky test
---
 .../java/org/apache/hadoop/hive/ql/parse/TestReplicationScenarios.java   | 1 +
 1 file changed, 1 insertion(+)

diff --git a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenarios.java b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenarios.java
index b630f7d..c43cd45 100644
--- a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenarios.java
+++ b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenarios.java
@@ -4799,6 +4799,7 @@ public class TestReplicationScenarios {
 appender.removeFromLogger(logger.getName());
   }
 
+  @org.junit.Ignore("HIVE-26073")
   @Test
   public void testIncrementalStatisticsMetrics() throws Throwable {
 isMetricsEnabledForTests(true);


svn commit: r53278 - /release/hive/KEYS

2022-03-22 Thread pvary
Author: pvary
Date: Tue Mar 22 11:47:32 2022
New Revision: 53278

Log:
Put Peter's key to the bottom of the file

Modified:
release/hive/KEYS

Modified: release/hive/KEYS
==
--- release/hive/KEYS (original)
+++ release/hive/KEYS Tue Mar 22 11:47:32 2022
@@ -1,61 +1,3 @@
-pub   rsa4096 2022-03-16 [SC]
-  907F7B758997F60BBA6CCDE6A146731FD6B3C5FA
-uid   [ultimate] Peter Vary (CODE SIGNING KEY) 
-sub   rsa4096 2022-03-16 [E]
-
--BEGIN PGP PUBLIC KEY BLOCK-
-
-mQINBGIx1t4BEADHTFKi7KFNadGQuTEBjZj9pljASz4DjruO7Ix2R9KnA2brbN4A
-BeYVm4phJ+G/nUF5wbtf44X6PJjT+hpkyTse0pRKyyDGEz5f9bUfyH5X6HAzroHq
-penBLxmuJcjQzeQDozznR+jN9i+87LnMAwDZP6CKmqKghObkdh52MOkiBEvkrk1n
-/okQDoXxuJJlrMPxQaKL303JBOb+RYa8DSmuIWFZK5+vW+yHGF081Y1HJylZ4sdd
-IwAjRXU91BSf52j/4hiR0tJXZiWE6ATwWvRVErSO79bjBaBqWqzNjVs+rccJ1Jhz
-ZqpGzSgpfzbJoapU5ku+tYKqVJH30wi+JiSr8zqXT+mbj+hZdMu8yZ3ck1rvi2Q7
-tuhuC6Kzk9V4n7N0RiegwH3jDi7T1aKuDGo1SgDyQAFZXiKHMEjRtDJ8UoYnfFM6
-P1NK5sqNreXs0L3OdsqfjtHR14l88ThhTLxM418h9tEWKc0tOEspOQbw7y5GJegr
-973NFyWRybV9fGHRRX8UF3BGgbYee6Lh09LlBN1EW+N5m0+SGNEjuJIcu8fX/EKu
-bGOXmqtm55fFZHcAaatbtkQhwYomk9tJKy9d+MZ5WKXuRhp27ih5w4o/qWTH1peg
-d9+DbDfEetAXX3pVT4rqA7oiGZj0VFJ+jyo7wK4gzP93cRP7lpAIYgtl3wARAQAB
-tDBQZXRlciBWYXJ5IChDT0RFIFNJR05JTkcgS0VZKSA8cHZhcnlAYXBhY2hlLm9y
-Zz6JAlIEEwEIADwWIQSQf3t1iZf2C7pszeahRnMf1rPF+gUCYjHW3gIbAwULCQgH
-AgMiAgEGFQoJCAsCBBYCAwECHgcCF4AACgkQoUZzH9azxfqq5BAAqfZjsxaQRe/Z
-SirCbtguAGy5w+ZtluqIIfudW99qWIIERinn6hZv0lHktCni20AU96h8H9guVhfT
-U8T3uCdH1wjkV67f29hnqCobWlCCrKhfIRtSBs+eJx+GBJD2sC8ExTrVX7X0SpJA
-C8EyUCNM/zyM62SxGq/8zyfJyUhgJ7lY8nTXDnuN0KP/C7plLpHq9C0jeX6P0ioo
-PdSCf2NXR3iu2dsEDsJI3NjA4HD4gSr1gsDThjQncZmWOViyLfNYA0CeGvWjFw2d
-j9OnoNXtLOtq8OIUBha/cjMkTe0DZUfMOCBFKMcclhP3FAwtTa6vsa/gwy3kVdbG
-uNV79tRwITTuk4yagbyczhZgQpfWkwzmCMLN46dN7zTGH+eUFKPuMp79zpm+2KVt
-PpyM8YoHhqypEU7SuoJURXK+b+tL5iScF6bu4dh+WIjFo8KZwyU+r+L30XOGAmIz
-s2jvfnEQDSb4Y8B8bwTt66rx3hJPTyP7C2BPbAJtRldPuYcWeBISCh46+zYD+Txi
-bMV8oS2mthZPuC7E/dTMcFLIECJOIgDajNKhwXRND7JxRn7JSm1qiwZzkbOj0BfU
-EsM8dHkT2DOP3FkSSBYPnElyHaHLPqpg5Tcqk3W+ZCUD+XCZgobVtcoOZrb3VYRL
-hx7KZNQyADe4jzE+Zm3J0bOJ6ky6fzS5Ag0EYjHW3gEQAM8/8N6usYb/0iqj1VCS
-pGTJ73jybPoQShTBAreg3RKzIOYqDaoa3LmfMOpFA8uMzYYMJwIK16jxiuhCZkJO
-tvk0WVsbrsc6BuLwII4aMLpIGKi8QUZoUA8dWeq1YjmqBZKjMgJ+5mr8bYuqeEEE
-cd5Tfo1CEgjmbx+UseSUQB3ediQpbr6v7vL4QcIjOQ7KGU5PgZENjdeZty35lbSh
-6N4iM8k4LXRCjQwpvqsR2lLW7qNg/uqrKETOFxJdgEpYB4Pr3QKmkNkvwso1YENi
-zlfWmJ6key9rlQxtEAWDf46I5k7hcmHnk07UH/AgBkCZ9ss/wpjpTCGEw5rGO4zi
-KaX35IxJiV4sqB5tJQumHduXR58Rv4tgQ2DPG8r47KnaaNN25E4u0n6Rs006i6/s
-JkALbH6qV99seTXTvGFtuXqlL7UcO17qpLXQLMLKAPpHPcTy7vHYwDzANL6/Prex
-9JikrmcAw9RxTWipIespzzEIgRUvBnGTCMLBpMlTBrRc+ims/ZuszusyXYN4J2Ij
-36+0/LZTJB/3lRDLjkyD8eVgXJGwWrDRLjHSPe6g1tj0SG1CeLbH2eh7KwBXLasJ
-BIEGcWFfh3oWnEDRz7ACnGy5RvCeuEGPtwbivUvOj3L+5fzAQbwHBi17ZuOs2dgH
-P7aXcytcfX1cSODfCh3qPyOHABEBAAGJAjYEGAEIACAWIQSQf3t1iZf2C7pszeah
-RnMf1rPF+gUCYjHW3gIbDAAKCRChRnMf1rPF+rPTD/49hVbvmW3w4nHAmuQA8nHU
-fMJ5gOdWB0iyxDbZi544piVYSL9K6ZCZraioj3jNUAdtp78sOIZvr9382iN2duB3
-CLdmKBoLWWUavNXffDN/OkNujJGVm28kYD45HgUH0dmm3LgWJxllp73TJZ1vH3ri
-Efzl4DdZIoi/NdkyDPJ2jA6MIxUYwcIpb8PefwlkB2DepZJiYtVKdJtN2MK7Pxx2
-GtIlSQm4lkn7go1hKNJhHgF5HkWktjqIERDwfTjfgf8dpMY7/yzVGDlY8YLRa+D9
-CjFpuqrRyeG8PueChUugVf/JkIQ600vqaiUDZXeRoQdqVutZxFUqQXbDgfcBfH8R
-D/QNaIpC+Yh/TQ1C0CmswollqFCUcplP3Crv5jcMDiVZnZMrDqZrrvHYp8tKHPTC
-hgFudaebFJ7siKQs5Njo5JZCM4BdAxdTeO9Z8CjZuJwkEE35RPAlomYvzAhpBo5w
-DBRWGaE208OzbGjP+d9e5QJ7lnovcXqXiAx/nXh4ZffAoJFDhvrsOETjUCfpfluY
-XZf6azJ3k09hHqobWj5HZkfqkmBWH2Jtm2pbLSoPeOXGsdvED4DVPDR98Zy8PdYn
-MgEklFbB+PFCLvug8lFQLWOboCHihzWmS+tRN28ejMiMh1uWiqo/mcZH54eQBi3c
-0XIgCzp8AJBOSU4oto2Dtw==
-=XgjI
--END PGP PUBLIC KEY BLOCK-
-
 pub   1024D/2E765AE1 2009-05-06
 uid  Ashish Thusoo (CODE SIGNING KEY) 
 sig 32E765AE1 2009-05-06  Ashish Thusoo (CODE SIGNING KEY) 

@@ -1583,3 +1525,62 @@ snhufW8+VCMO+w/ZjoFCNpU5cff2bPFDrj7oyLGQ
 tUvO4BP/JuwHpWR4tFv6ZyCKLtEq13k/mkCFjMrtTx5b/Q==
 =2yVM
 -END PGP PUBLIC KEY BLOCK-
+pub   rsa4096 2022-03-16 [SC]
+  907F7B758997F60BBA6CCDE6A146731FD6B3C5FA
+uid   [ultimate] Peter Vary (CODE SIGNING KEY) 
+sig 3A146731FD6B3C5FA 2022-03-16  Peter Vary (CODE SIGNING KEY) 

+sub   rsa4096 2022-03-16 [E]
+sig  A146731FD6B3C5FA 2022-03-16  Peter Vary (CODE SIGNING KEY) 

+
+-BEGIN PGP PUBLIC KEY BLOCK-
+
+mQINBGIx1t4BEADHTFKi7KFNadGQuTEBjZj9pljASz4DjruO7Ix2R9KnA2brbN4A
+BeYVm4phJ+G/nUF5wbtf44X6PJjT+hpkyTse0pRKyyDGEz5f9bUfyH5X6HAzroHq
+penBLxmuJcjQzeQDozznR+jN9i+87LnMAwDZP6CKmqKghObkdh52MOkiBEvkrk1n
+/okQDoXxuJJlrMPxQaKL303JBOb+RYa8DSmuIWFZK5+vW+yHGF081Y1HJylZ4sdd
+IwAjRXU91BSf52j/4hiR0tJXZiWE6ATwWvRVErSO79bjBaBqWqzNjVs+rccJ1Jhz
+ZqpGzSgpfzbJoapU5ku+tYKqVJH30wi+JiSr8zqXT+mbj+hZdMu8yZ3ck1rvi2Q7
+tuhuC6Kzk9V4n7N0RiegwH3jDi7T1aKuDGo1SgDyQAFZXiKHMEjRtDJ8UoYnfFM6
+P1NK5sqNreXs0L3OdsqfjtHR14l88ThhTLxM418h9tEWKc0tOEspOQbw7y5GJegr
+973NFyWRybV9fGHRRX8UF3BGgbYee6Lh09LlBN1EW+N5m0+SGNEjuJIcu8fX

svn commit: r53277 - /release/hive/KEYS

2022-03-22 Thread pvary
Author: pvary
Date: Tue Mar 22 11:41:16 2022
New Revision: 53277

Log:
Add Peter Vary's key

Modified:
release/hive/KEYS

Modified: release/hive/KEYS
==
--- release/hive/KEYS (original)
+++ release/hive/KEYS Tue Mar 22 11:41:16 2022
@@ -1,3 +1,61 @@
+pub   rsa4096 2022-03-16 [SC]
+  907F7B758997F60BBA6CCDE6A146731FD6B3C5FA
+uid   [ultimate] Peter Vary (CODE SIGNING KEY) 
+sub   rsa4096 2022-03-16 [E]
+
+-BEGIN PGP PUBLIC KEY BLOCK-
+
+mQINBGIx1t4BEADHTFKi7KFNadGQuTEBjZj9pljASz4DjruO7Ix2R9KnA2brbN4A
+BeYVm4phJ+G/nUF5wbtf44X6PJjT+hpkyTse0pRKyyDGEz5f9bUfyH5X6HAzroHq
+penBLxmuJcjQzeQDozznR+jN9i+87LnMAwDZP6CKmqKghObkdh52MOkiBEvkrk1n
+/okQDoXxuJJlrMPxQaKL303JBOb+RYa8DSmuIWFZK5+vW+yHGF081Y1HJylZ4sdd
+IwAjRXU91BSf52j/4hiR0tJXZiWE6ATwWvRVErSO79bjBaBqWqzNjVs+rccJ1Jhz
+ZqpGzSgpfzbJoapU5ku+tYKqVJH30wi+JiSr8zqXT+mbj+hZdMu8yZ3ck1rvi2Q7
+tuhuC6Kzk9V4n7N0RiegwH3jDi7T1aKuDGo1SgDyQAFZXiKHMEjRtDJ8UoYnfFM6
+P1NK5sqNreXs0L3OdsqfjtHR14l88ThhTLxM418h9tEWKc0tOEspOQbw7y5GJegr
+973NFyWRybV9fGHRRX8UF3BGgbYee6Lh09LlBN1EW+N5m0+SGNEjuJIcu8fX/EKu
+bGOXmqtm55fFZHcAaatbtkQhwYomk9tJKy9d+MZ5WKXuRhp27ih5w4o/qWTH1peg
+d9+DbDfEetAXX3pVT4rqA7oiGZj0VFJ+jyo7wK4gzP93cRP7lpAIYgtl3wARAQAB
+tDBQZXRlciBWYXJ5IChDT0RFIFNJR05JTkcgS0VZKSA8cHZhcnlAYXBhY2hlLm9y
+Zz6JAlIEEwEIADwWIQSQf3t1iZf2C7pszeahRnMf1rPF+gUCYjHW3gIbAwULCQgH
+AgMiAgEGFQoJCAsCBBYCAwECHgcCF4AACgkQoUZzH9azxfqq5BAAqfZjsxaQRe/Z
+SirCbtguAGy5w+ZtluqIIfudW99qWIIERinn6hZv0lHktCni20AU96h8H9guVhfT
+U8T3uCdH1wjkV67f29hnqCobWlCCrKhfIRtSBs+eJx+GBJD2sC8ExTrVX7X0SpJA
+C8EyUCNM/zyM62SxGq/8zyfJyUhgJ7lY8nTXDnuN0KP/C7plLpHq9C0jeX6P0ioo
+PdSCf2NXR3iu2dsEDsJI3NjA4HD4gSr1gsDThjQncZmWOViyLfNYA0CeGvWjFw2d
+j9OnoNXtLOtq8OIUBha/cjMkTe0DZUfMOCBFKMcclhP3FAwtTa6vsa/gwy3kVdbG
+uNV79tRwITTuk4yagbyczhZgQpfWkwzmCMLN46dN7zTGH+eUFKPuMp79zpm+2KVt
+PpyM8YoHhqypEU7SuoJURXK+b+tL5iScF6bu4dh+WIjFo8KZwyU+r+L30XOGAmIz
+s2jvfnEQDSb4Y8B8bwTt66rx3hJPTyP7C2BPbAJtRldPuYcWeBISCh46+zYD+Txi
+bMV8oS2mthZPuC7E/dTMcFLIECJOIgDajNKhwXRND7JxRn7JSm1qiwZzkbOj0BfU
+EsM8dHkT2DOP3FkSSBYPnElyHaHLPqpg5Tcqk3W+ZCUD+XCZgobVtcoOZrb3VYRL
+hx7KZNQyADe4jzE+Zm3J0bOJ6ky6fzS5Ag0EYjHW3gEQAM8/8N6usYb/0iqj1VCS
+pGTJ73jybPoQShTBAreg3RKzIOYqDaoa3LmfMOpFA8uMzYYMJwIK16jxiuhCZkJO
+tvk0WVsbrsc6BuLwII4aMLpIGKi8QUZoUA8dWeq1YjmqBZKjMgJ+5mr8bYuqeEEE
+cd5Tfo1CEgjmbx+UseSUQB3ediQpbr6v7vL4QcIjOQ7KGU5PgZENjdeZty35lbSh
+6N4iM8k4LXRCjQwpvqsR2lLW7qNg/uqrKETOFxJdgEpYB4Pr3QKmkNkvwso1YENi
+zlfWmJ6key9rlQxtEAWDf46I5k7hcmHnk07UH/AgBkCZ9ss/wpjpTCGEw5rGO4zi
+KaX35IxJiV4sqB5tJQumHduXR58Rv4tgQ2DPG8r47KnaaNN25E4u0n6Rs006i6/s
+JkALbH6qV99seTXTvGFtuXqlL7UcO17qpLXQLMLKAPpHPcTy7vHYwDzANL6/Prex
+9JikrmcAw9RxTWipIespzzEIgRUvBnGTCMLBpMlTBrRc+ims/ZuszusyXYN4J2Ij
+36+0/LZTJB/3lRDLjkyD8eVgXJGwWrDRLjHSPe6g1tj0SG1CeLbH2eh7KwBXLasJ
+BIEGcWFfh3oWnEDRz7ACnGy5RvCeuEGPtwbivUvOj3L+5fzAQbwHBi17ZuOs2dgH
+P7aXcytcfX1cSODfCh3qPyOHABEBAAGJAjYEGAEIACAWIQSQf3t1iZf2C7pszeah
+RnMf1rPF+gUCYjHW3gIbDAAKCRChRnMf1rPF+rPTD/49hVbvmW3w4nHAmuQA8nHU
+fMJ5gOdWB0iyxDbZi544piVYSL9K6ZCZraioj3jNUAdtp78sOIZvr9382iN2duB3
+CLdmKBoLWWUavNXffDN/OkNujJGVm28kYD45HgUH0dmm3LgWJxllp73TJZ1vH3ri
+Efzl4DdZIoi/NdkyDPJ2jA6MIxUYwcIpb8PefwlkB2DepZJiYtVKdJtN2MK7Pxx2
+GtIlSQm4lkn7go1hKNJhHgF5HkWktjqIERDwfTjfgf8dpMY7/yzVGDlY8YLRa+D9
+CjFpuqrRyeG8PueChUugVf/JkIQ600vqaiUDZXeRoQdqVutZxFUqQXbDgfcBfH8R
+D/QNaIpC+Yh/TQ1C0CmswollqFCUcplP3Crv5jcMDiVZnZMrDqZrrvHYp8tKHPTC
+hgFudaebFJ7siKQs5Njo5JZCM4BdAxdTeO9Z8CjZuJwkEE35RPAlomYvzAhpBo5w
+DBRWGaE208OzbGjP+d9e5QJ7lnovcXqXiAx/nXh4ZffAoJFDhvrsOETjUCfpfluY
+XZf6azJ3k09hHqobWj5HZkfqkmBWH2Jtm2pbLSoPeOXGsdvED4DVPDR98Zy8PdYn
+MgEklFbB+PFCLvug8lFQLWOboCHihzWmS+tRN28ejMiMh1uWiqo/mcZH54eQBi3c
+0XIgCzp8AJBOSU4oto2Dtw==
+=XgjI
+-END PGP PUBLIC KEY BLOCK-
+
 pub   1024D/2E765AE1 2009-05-06
 uid  Ashish Thusoo (CODE SIGNING KEY) 
 sig 32E765AE1 2009-05-06  Ashish Thusoo (CODE SIGNING KEY) 





[hive] branch master updated: HIVE-26044: Remove hardcoded version references from the tests (Peter Vary reviewed by Marton Bod and Stamatis Zampetakis) (#3115)

2022-03-22 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new f2fbe72  HIVE-26044: Remove hardcoded version references from the tests (Peter Vary reviewed by Marton Bod and Stamatis Zampetakis) (#3115)
f2fbe72 is described below

commit f2fbe72ab524f603d89e867b2831087c93805256
Author: pvary 
AuthorDate: Tue Mar 22 12:20:52 2022 +0100

HIVE-26044: Remove hardcoded version references from the tests (Peter Vary reviewed by Marton Bod and Stamatis Zampetakis) (#3115)
---
 .../InformationSchemaWithPrivilegeTestBase.java|  6 -
 .../hadoop/hive/ql/qoption/QTestSysDbHandler.java  |  5 -
 .../metastore/txn/TestCompactionTxnHandler.java|  3 ++-
 .../hive/ql/txn/compactor/CompactorTest.java   |  3 ++-
 .../hadoop/hive/ql/txn/compactor/TestCleaner.java  |  2 +-
 .../ql/txn/compactor/TestCompactionMetrics.java| 26 +++---
 .../hive/ql/txn/compactor/TestInitiator.java   |  8 +++
 .../hadoop/hive/ql/txn/compactor/TestWorker.java   |  4 ++--
 .../metastore/dbinstall/rules/PostgresTPCDS.java   |  5 -
 9 files changed, 37 insertions(+), 25 deletions(-)

diff --git a/itests/hive-unit/src/test/java/org/apache/hive/service/server/InformationSchemaWithPrivilegeTestBase.java b/itests/hive-unit/src/test/java/org/apache/hive/service/server/InformationSchemaWithPrivilegeTestBase.java
index e0f8947..9573e50 100644
--- a/itests/hive-unit/src/test/java/org/apache/hive/service/server/InformationSchemaWithPrivilegeTestBase.java
+++ b/itests/hive-unit/src/test/java/org/apache/hive/service/server/InformationSchemaWithPrivilegeTestBase.java
@@ -26,6 +26,7 @@ import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 
+import org.apache.hadoop.hive.metastore.MetaStoreSchemaInfoFactory;
 import org.apache.hive.testutils.MiniZooKeeperCluster;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hive.conf.HiveConf;
@@ -183,6 +184,7 @@ public abstract class InformationSchemaWithPrivilegeTestBase {
   private static MiniHS2 miniHS2 = null;
   private static MiniZooKeeperCluster zkCluster = null;
   private static Map confOverlay;
+  private static String hiveSchemaVer;
 
 
  public static void setupInternal(boolean zookeeperSSLEnabled) throws Exception {
@@ -228,6 +230,8 @@ public abstract class InformationSchemaWithPrivilegeTestBase {
   confOverlay.put(ConfVars.HIVE_ZOOKEEPER_SSL_ENABLE.varname, "true");
 }
 miniHS2.start(confOverlay);
+
+    hiveSchemaVer = MetaStoreSchemaInfoFactory.get(miniHS2.getServerConf()).getHiveSchemaVersion();
   }
 
   @AfterClass
@@ -287,7 +291,7 @@ public abstract class InformationSchemaWithPrivilegeTestBase {
 
 List args = new ArrayList(baseArgs);
 args.add("-f");
-    args.add("../../metastore/scripts/upgrade/hive/hive-schema-4.0.0-alpha-1.hive.sql");
+    args.add("../../metastore/scripts/upgrade/hive/hive-schema-" + hiveSchemaVer + ".hive.sql");
 BeeLine beeLine = new BeeLine();
 int result = beeLine.begin(args.toArray(new String[] {}), null);
 beeLine.close();
diff --git a/itests/util/src/main/java/org/apache/hadoop/hive/ql/qoption/QTestSysDbHandler.java b/itests/util/src/main/java/org/apache/hadoop/hive/ql/qoption/QTestSysDbHandler.java
index 4834ad2..1ffacb6 100644
--- a/itests/util/src/main/java/org/apache/hadoop/hive/ql/qoption/QTestSysDbHandler.java
+++ b/itests/util/src/main/java/org/apache/hadoop/hive/ql/qoption/QTestSysDbHandler.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.hive.ql.qoption;
 
+import org.apache.hadoop.hive.metastore.MetaStoreSchemaInfoFactory;
 import org.apache.hadoop.hive.ql.QTestUtil;
 import org.apache.hive.testutils.HiveTestEnvSetup;
 import org.slf4j.Logger;
@@ -44,7 +45,9 @@ public class QTestSysDbHandler implements QTestOptionHandler {
   @Override
   public void beforeTest(QTestUtil qt) throws Exception {
 if (enabled) {
-      String stsdbPath = HiveTestEnvSetup.HIVE_ROOT + "/metastore/scripts/upgrade/hive/hive-schema-4.0.0-alpha-1.hive.sql";
+      String schemaVersion = MetaStoreSchemaInfoFactory.get(qt.getConf()).getHiveSchemaVersion();
+      String stsdbPath =
+          HiveTestEnvSetup.HIVE_ROOT + "/metastore/scripts/upgrade/hive/hive-schema-" + schemaVersion + ".hive.sql";
   qt.getCliDriver().processLine("source " + stsdbPath);
   qt.getCliDriver().processLine("use default");
 }
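The two hunks above replace a hardcoded `hive-schema-4.0.0-alpha-1.hive.sql` path with one derived from the detected schema version, so version bumps no longer require touching the tests. A minimal sketch of the path construction follows; `schemaScriptPath` is an illustrative helper, not a method in the Hive code base:

```java
// Sketch of deriving the schema-script path from a version string
// instead of hardcoding the release number in every test.
// schemaScriptPath() is an illustrative helper, not Hive code.
public class SchemaPathSketch {
    static String schemaScriptPath(String hiveRoot, String schemaVersion) {
        return hiveRoot + "/metastore/scripts/upgrade/hive/hive-schema-"
            + schemaVersion + ".hive.sql";
    }

    public static void main(String[] args) {
        // In the real tests the version comes from something like
        // MetaStoreSchemaInfoFactory.get(conf).getHiveSchemaVersion().
        String path = schemaScriptPath("/opt/hive", "4.0.0-alpha-1");
        System.out.println(path);
        // prints /opt/hive/metastore/scripts/upgrade/hive/hive-schema-4.0.0-alpha-1.hive.sql
    }
}
```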
diff --git a/ql/src/test/org/apache/hadoop/hive/metastore/txn/TestCompactionTxnHandler.java b/ql/src/test/org/apache/hadoop/hive/metastore/txn/TestCompactionTxnHandler.java
index b9d34d5..46879b0 100644
--- a/ql/src/test/org/apache/hadoop/hive/metastore/txn/TestCompactionTxnHandler.java
+++ b/ql/

[hive] annotated tag release-4.0.0-alpha-1-rc1 created (now 68e6d3e)

2022-03-22 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a change to annotated tag release-4.0.0-alpha-1-rc1
in repository https://gitbox.apache.org/repos/asf/hive.git.


  at 68e6d3e  (tag)
 tagging 357d4906f5c806d585fd84db57cf296e12e6049b (commit)
  by Peter Vary
  on Tue Mar 22 11:16:08 2022 +0100

- Log -
Hive 4.0.0-alpha-1-rc1 release.
---

No new revisions were added by this update.


[hive] annotated tag release-4.0.0-alpha-1-rc1 deleted (was e82926b)

2022-03-22 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a change to annotated tag release-4.0.0-alpha-1-rc1
in repository https://gitbox.apache.org/repos/asf/hive.git.


*** WARNING: tag release-4.0.0-alpha-1-rc1 was deleted! ***

   tag was  e82926b

The revisions that were on this annotated tag are still contained in
other references; therefore, this change does not discard any commits
from the repository.


[hive] branch branch-4.0.0-alpha-1 updated (d3df69e -> 357d490)

2022-03-22 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a change to branch branch-4.0.0-alpha-1
in repository https://gitbox.apache.org/repos/asf/hive.git.


from d3df69e  Updating release notes
 add 357d490  Update README.md

No new revisions were added by this update.

Summary of changes:
 README.md | 1 +
 1 file changed, 1 insertion(+)


[hive] annotated tag release-4.0.0-alpha-1-rc1 created (now e82926b)

2022-03-22 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a change to annotated tag release-4.0.0-alpha-1-rc1
in repository https://gitbox.apache.org/repos/asf/hive.git.


  at e82926b  (tag)
 tagging 357d4906f5c806d585fd84db57cf296e12e6049b (commit)
  by Peter Vary
  on Tue Mar 22 11:14:57 2022 +0100

- Log -
Hive 4.0.0-alpha-1-rc1 release.
---

This annotated tag includes the following new commits:

 new 357d490  Update README.md

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



[hive] 01/01: Update README.md

2022-03-22 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to annotated tag release-4.0.0-alpha-1-rc1
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 357d4906f5c806d585fd84db57cf296e12e6049b
Author: Peter Vary 
AuthorDate: Tue Mar 22 11:02:22 2022 +0100

Update README.md
---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index fe5c456..94af6ed 100644
--- a/README.md
+++ b/README.md
@@ -93,6 +93,7 @@ Hadoop
 
 - Hadoop 1.x, 2.x
 - Hadoop 3.x (Hive 3.x)
+- Hadoop 3.1 (Hive 4.x)
 
 
 Upgrading from older versions of Hive


[hive] 01/02: Preparing for 4.0.0-alpha-1 release

2022-03-22 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch branch-4.0.0-alpha-1
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 10148c1404bfb6271013611adc300cd63e1b941d
Author: Peter Vary 
AuthorDate: Tue Mar 22 09:48:08 2022 +0100

Preparing for 4.0.0-alpha-1 release
---
 accumulo-handler/pom.xml  | 2 +-
 beeline/pom.xml   | 2 +-
 classification/pom.xml| 2 +-
 cli/pom.xml   | 2 +-
 common/pom.xml| 2 +-
 contrib/pom.xml   | 2 +-
 druid-handler/pom.xml | 2 +-
 hbase-handler/pom.xml | 2 +-
 hcatalog/core/pom.xml | 2 +-
 hcatalog/hcatalog-pig-adapter/pom.xml | 4 ++--
 hcatalog/pom.xml  | 4 ++--
 hcatalog/server-extensions/pom.xml| 2 +-
 hcatalog/webhcat/java-client/pom.xml  | 2 +-
 hcatalog/webhcat/svr/pom.xml  | 2 +-
 hplsql/pom.xml| 2 +-
 iceberg/iceberg-catalog/pom.xml   | 2 +-
 iceberg/iceberg-handler/pom.xml   | 2 +-
 iceberg/iceberg-shading/pom.xml   | 2 +-
 iceberg/patched-iceberg-api/pom.xml   | 2 +-
 iceberg/patched-iceberg-core/pom.xml  | 2 +-
 iceberg/pom.xml   | 4 ++--
 itests/custom-serde/pom.xml   | 2 +-
 itests/custom-udfs/pom.xml| 2 +-
 itests/custom-udfs/udf-classloader-udf1/pom.xml   | 2 +-
 itests/custom-udfs/udf-classloader-udf2/pom.xml   | 2 +-
 itests/custom-udfs/udf-classloader-util/pom.xml   | 2 +-
 itests/custom-udfs/udf-vectorized-badexample/pom.xml  | 2 +-
 itests/hcatalog-unit/pom.xml  | 2 +-
 itests/hive-blobstore/pom.xml | 2 +-
 itests/hive-jmh/pom.xml   | 2 +-
 itests/hive-minikdc/pom.xml   | 2 +-
 itests/hive-unit-hadoop2/pom.xml  | 2 +-
 itests/hive-unit/pom.xml  | 2 +-
 itests/pom.xml| 2 +-
 itests/qtest-accumulo/pom.xml | 2 +-
 itests/qtest-druid/pom.xml| 2 +-
 itests/qtest-iceberg/pom.xml  | 2 +-
 itests/qtest-kudu/pom.xml | 2 +-
 itests/qtest-spark/pom.xml| 2 +-
 itests/qtest/pom.xml  | 2 +-
 itests/test-serde/pom.xml | 2 +-
 itests/util/pom.xml   | 2 +-
 jdbc-handler/pom.xml  | 2 +-
 jdbc/pom.xml  | 2 +-
 kafka-handler/pom.xml | 2 +-
 kryo-registrator/pom.xml  | 2 +-
 kudu-handler/pom.xml  | 2 +-
 llap-client/pom.xml   | 2 +-
 llap-common/pom.xml   | 2 +-
 llap-ext-client/pom.xml   | 2 +-
 llap-server/pom.xml   | 2 +-
 llap-tez/pom.xml  | 2 +-
 metastore/pom.xml | 2 +-
 packaging/pom.xml | 2 +-
 parser/pom.xml| 2 +-
 pom.xml   | 6 +++---
 ql/pom.xml| 2 +-
 serde/pom.xml | 2 +-
 service-rpc/pom.xml   | 2 +-
 service/pom.xml   | 2 +-
 shims/0.23/pom.xml| 2 +-
 shims/aggregator/pom.xml  | 2 +-
 shims/common/pom.xml

[hive] branch branch-4.0.0-alpha-1 created (now d3df69e)

2022-03-22 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a change to branch branch-4.0.0-alpha-1
in repository https://gitbox.apache.org/repos/asf/hive.git.


  at d3df69e  Updating release notes

This branch includes the following new commits:

 new 10148c1  Preparing for 4.0.0-alpha-1 release
 new d3df69e  Updating release notes

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



[hive] branch master updated: HIVE-26049: Inconsistent TBL_NAME lengths in HMS schema (Janos Kovacs reviewed by Peter Vary) (#3119)

2022-03-21 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 1f49cb4  HIVE-26049: Inconsistent TBL_NAME lengths in HMS schema (Janos Kovacs reviewed by Peter Vary) (#3119)
1f49cb4 is described below

commit 1f49cb433151d60508ba0dd5042652a24c1b778b
Author: Janos Kovacs 
AuthorDate: Mon Mar 21 15:54:47 2022 +0100

HIVE-26049: Inconsistent TBL_NAME lengths in HMS schema (Janos Kovacs reviewed by Peter Vary) (#3119)
---
 .../main/sql/derby/hive-schema-4.0.0-alpha-1.derby.sql   | 14 +++---
 .../sql/derby/upgrade-3.2.0-to-4.0.0-alpha-1.derby.sql   |  9 +
 .../main/sql/mssql/hive-schema-4.0.0-alpha-1.mssql.sql   | 16 
 .../sql/mssql/upgrade-3.2.0-to-4.0.0-alpha-1.mssql.sql   | 12 
 .../main/sql/mysql/hive-schema-4.0.0-alpha-1.mysql.sql   | 14 +++---
 .../sql/mysql/upgrade-3.2.0-to-4.0.0-alpha-1.mysql.sql   |  9 +
 .../main/sql/oracle/hive-schema-4.0.0-alpha-1.oracle.sql | 14 +++---
 .../sql/oracle/upgrade-3.2.0-to-4.0.0-alpha-1.oracle.sql |  9 +
 .../sql/postgres/hive-schema-4.0.0-alpha-1.postgres.sql  | 14 +++---
 .../postgres/upgrade-3.2.0-to-4.0.0-alpha-1.postgres.sql |  9 +
 10 files changed, 84 insertions(+), 36 deletions(-)

diff --git a/standalone-metastore/metastore-server/src/main/sql/derby/hive-schema-4.0.0-alpha-1.derby.sql b/standalone-metastore/metastore-server/src/main/sql/derby/hive-schema-4.0.0-alpha-1.derby.sql
index a92ccf7..147983e 100644
--- a/standalone-metastore/metastore-server/src/main/sql/derby/hive-schema-4.0.0-alpha-1.derby.sql
+++ b/standalone-metastore/metastore-server/src/main/sql/derby/hive-schema-4.0.0-alpha-1.derby.sql
@@ -555,7 +555,7 @@ INSERT INTO TXNS (TXN_ID, TXN_STATE, TXN_STARTED, TXN_LAST_HEARTBEAT, TXN_USER,
 CREATE TABLE TXN_COMPONENTS (
   TC_TXNID bigint NOT NULL REFERENCES TXNS (TXN_ID),
   TC_DATABASE varchar(128) NOT NULL,
-  TC_TABLE varchar(128),
+  TC_TABLE varchar(256),
   TC_PARTITION varchar(767),
   TC_OPERATION_TYPE char(1) NOT NULL,
   TC_WRITEID bigint
@@ -585,7 +585,7 @@ CREATE TABLE HIVE_LOCKS (
   HL_LOCK_INT_ID bigint NOT NULL,
   HL_TXNID bigint NOT NULL,
   HL_DB varchar(128) NOT NULL,
-  HL_TABLE varchar(128),
+  HL_TABLE varchar(256),
   HL_PARTITION varchar(767),
   HL_LOCK_STATE char(1) NOT NULL,
   HL_LOCK_TYPE char(1) NOT NULL,
@@ -610,7 +610,7 @@ INSERT INTO NEXT_LOCK_ID VALUES(1);
 CREATE TABLE COMPACTION_QUEUE (
   CQ_ID bigint PRIMARY KEY,
   CQ_DATABASE varchar(128) NOT NULL,
-  CQ_TABLE varchar(128) NOT NULL,
+  CQ_TABLE varchar(256) NOT NULL,
   CQ_PARTITION varchar(767),
   CQ_STATE char(1) NOT NULL,
   CQ_TYPE char(1) NOT NULL,
@@ -641,7 +641,7 @@ INSERT INTO NEXT_COMPACTION_QUEUE_ID VALUES(1);
 CREATE TABLE COMPLETED_COMPACTIONS (
   CC_ID bigint PRIMARY KEY,
   CC_DATABASE varchar(128) NOT NULL,
-  CC_TABLE varchar(128) NOT NULL,
+  CC_TABLE varchar(256) NOT NULL,
   CC_PARTITION varchar(767),
   CC_STATE char(1) NOT NULL,
   CC_TYPE char(1) NOT NULL,
@@ -665,7 +665,7 @@ CREATE INDEX COMPLETED_COMPACTIONS_RES ON COMPLETED_COMPACTIONS (CC_DATABASE,CC_
 -- HIVE-25842
 CREATE TABLE COMPACTION_METRICS_CACHE (
   CMC_DATABASE varchar(128) NOT NULL,
-  CMC_TABLE varchar(128) NOT NULL,
+  CMC_TABLE varchar(256) NOT NULL,
   CMC_PARTITION varchar(767),
   CMC_METRIC_TYPE varchar(128) NOT NULL,
   CMC_METRIC_VALUE integer NOT NULL,
@@ -683,7 +683,7 @@ CREATE TABLE AUX_TABLE (
 --This is a good candidate for Index orgainzed table
 CREATE TABLE WRITE_SET (
   WS_DATABASE varchar(128) NOT NULL,
-  WS_TABLE varchar(128) NOT NULL,
+  WS_TABLE varchar(256) NOT NULL,
   WS_PARTITION varchar(767),
   WS_TXNID bigint NOT NULL,
   WS_COMMIT_ID bigint NOT NULL,
@@ -773,7 +773,7 @@ CREATE TABLE TXN_WRITE_NOTIFICATION_LOG (
   WNL_TXNID bigint NOT NULL,
   WNL_WRITEID bigint NOT NULL,
   WNL_DATABASE varchar(128) NOT NULL,
-  WNL_TABLE varchar(128) NOT NULL,
+  WNL_TABLE varchar(256) NOT NULL,
   WNL_PARTITION varchar(767) NOT NULL,
   WNL_TABLE_OBJ clob NOT NULL,
   WNL_PARTITION_OBJ clob,
diff --git a/standalone-metastore/metastore-server/src/main/sql/derby/upgrade-3.2.0-to-4.0.0-alpha-1.derby.sql b/standalone-metastore/metastore-server/src/main/sql/derby/upgrade-3.2.0-to-4.0.0-alpha-1.derby.sql
index d38e20e..ed9b822 100644
--- a/standalone-metastore/metastore-server/src/main/sql/derby/upgrade-3.2.0-to-4.0.0-alpha-1.derby.sql
+++ b/standalone-metastore/metastore-server/src/main/sql/derby/upgrade-3.2.0-to-4.0.0-alpha-1.derby.sql
@@ -198,5 +198,14 @@ CREATE TABLE COMPACTION_METRICS_CACHE (
 -- HIVE-25993
 ALTER TABLE "APP"."COMPACTION_QUEUE" ADD COLUMN "CQ_RETRY_RETENTION" bigint NOT NULL DEFAULT 0;
 
+-- HIVE-26049
+ALTER TABLE "APP"."TXN_COMPONENTS" ALTER "TC_TABLE" SET DATA TYPE VARCHAR(256);
+A

[hive] branch master updated: HIVE-26016: Remove duplicate table exists check in create_table_core api of HMSHandler (Wechar Yu, reviewed by Rajesh Balamohan and Peter Vary) (#3085)

2022-03-21 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new cf06939  HIVE-26016: Remove duplicate table exists check in create_table_core api of HMSHandler (Wechar Yu, reviewed by Rajesh Balamohan and Peter Vary) (#3085)
cf06939 is described below

commit cf06939bc0b79d1f04f777f80de1f729d45456f0
Author: Wechar Yu <65108011+wecha...@users.noreply.github.com>
AuthorDate: Mon Mar 21 18:18:48 2022 +0800

HIVE-26016: Remove duplicate table exists check in create_table_core api of HMSHandler (Wechar Yu, reviewed by Rajesh Balamohan and Peter Vary) (#3085)
---
 .../src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java   | 5 -
 1 file changed, 5 deletions(-)

diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
index 6e8b85d..e458ae2 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
@@ -2335,11 +2335,6 @@ public class HMSHandler extends FacebookBase implements IHMSHandler {
   isReplicated = isDbReplicationTarget(db);
 
   firePreEvent(new PreCreateTableEvent(tbl, db, this));
-  // get_table checks whether database exists, it should be moved here
-      if (is_table_exists(ms, tbl.getCatName(), tbl.getDbName(), tbl.getTableName())) {
-        throw new AlreadyExistsException("Table " + getCatalogQualifiedTableName(tbl)
-            + " already exists");
-  }
 
   if (!TableType.VIRTUAL_VIEW.toString().equals(tbl.getTableType())) {
 if (tbl.getSd().getLocation() == null

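The block removed above was a check-then-act duplicate: `create_table_core` looked the table up before inserting it, even though the insert path itself already reports a duplicate. Besides being redundant, such a pre-check cannot close the race with a concurrent creator. The point can be illustrated with a self-contained sketch; the map-based "store" below is an illustrative stand-in, not the HMSHandler implementation:

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustration of why a separate exists-check before an atomic insert is
// redundant: the insert itself detects the duplicate. The map stands in
// for the metastore backend; this is NOT the HMSHandler code.
public class CreateTableSketch {
    static final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

    static void createTable(String name, String definition) {
        // putIfAbsent is atomic: no prior containsKey() check is needed,
        // and a prior check could not prevent a concurrent duplicate anyway.
        if (store.putIfAbsent(name, definition) != null) {
            throw new IllegalStateException("Table " + name + " already exists");
        }
    }

    public static void main(String[] args) {
        createTable("default.t1", "...");
        try {
            createTable("default.t1", "...");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // prints "Table default.t1 already exists"
        }
    }
}
```

In the real fix, only the commit's description applies: the duplicated lookup is dropped and the remaining create path surfaces `AlreadyExistsException` on its own.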

[hive] branch master updated: HIVE-26042: Fix flaky streaming tests (Peter Vary, reviewed by Marton Bod) (#3114)

2022-03-21 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 168272e  HIVE-26042: Fix flaky streaming tests (Peter Vary, reviewed by Marton Bod) (#3114)
168272e is described below

commit 168272eaa0898db79f2444c92a75d1fd9c4cb6ec
Author: pvary 
AuthorDate: Mon Mar 21 09:36:44 2022 +0100

HIVE-26042: Fix flaky streaming tests (Peter Vary, reviewed by Marton Bod) (#3114)
---
 streaming/src/test/org/apache/hive/streaming/TestStreaming.java| 3 ++-
 .../org/apache/hive/streaming/TestStreamingDynamicPartitioning.java| 2 ++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/streaming/src/test/org/apache/hive/streaming/TestStreaming.java b/streaming/src/test/org/apache/hive/streaming/TestStreaming.java
index ce0b370..d8b5e0e 100644
--- a/streaming/src/test/org/apache/hive/streaming/TestStreaming.java
+++ b/streaming/src/test/org/apache/hive/streaming/TestStreaming.java
@@ -72,6 +72,7 @@ import org.apache.hadoop.hive.metastore.api.TableValidWriteIds;
 import org.apache.hadoop.hive.metastore.api.TxnAbortedException;
 import org.apache.hadoop.hive.metastore.api.TxnInfo;
 import org.apache.hadoop.hive.metastore.api.hive_metastoreConstants;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
 import org.apache.hadoop.hive.metastore.txn.AcidHouseKeeperService;
 import org.apache.hadoop.hive.metastore.txn.TxnCommonUtils;
 import org.apache.hadoop.hive.metastore.utils.TestTxnDbUtil;
@@ -216,7 +217,7 @@ public class TestStreaming {
 conf.setBoolVar(HiveConf.ConfVars.METASTORE_EXECUTE_SET_UGI, true);
 conf.setBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY, true);
 dbFolder.create();
-
+MetastoreConf.setVar(conf, MetastoreConf.ConfVars.WAREHOUSE, "raw://" + dbFolder.newFolder("warehouse"));
 
 //1) Start from a clean slate (metastore)
 TestTxnDbUtil.cleanDb(conf);
diff --git a/streaming/src/test/org/apache/hive/streaming/TestStreamingDynamicPartitioning.java b/streaming/src/test/org/apache/hive/streaming/TestStreamingDynamicPartitioning.java
index 8dd632e..c548ea7 100644
--- a/streaming/src/test/org/apache/hive/streaming/TestStreamingDynamicPartitioning.java
+++ b/streaming/src/test/org/apache/hive/streaming/TestStreamingDynamicPartitioning.java
@@ -39,6 +39,7 @@ import org.apache.hadoop.hive.cli.CliSessionState;
 import org.apache.hadoop.hive.conf.HiveConf;
 import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
 import org.apache.hadoop.hive.metastore.IMetaStoreClient;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
 import org.apache.hadoop.hive.metastore.utils.TestTxnDbUtil;
 import org.apache.hadoop.hive.ql.DriverFactory;
 import org.apache.hadoop.hive.ql.IDriver;
@@ -134,6 +135,7 @@ public class TestStreamingDynamicPartitioning {
 conf.setBoolVar(HiveConf.ConfVars.METASTORE_EXECUTE_SET_UGI, true);
 conf.setBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY, true);
 dbFolder.create();
+MetastoreConf.setVar(conf, MetastoreConf.ConfVars.WAREHOUSE, "raw://" + dbFolder.newFolder("warehouse"));
 loc1 = dbFolder.newFolder(dbName + ".db").toString();
 
 //1) Start from a clean slate (metastore)


[hive] branch master updated: HIVE-26002: Preparing for 4.0.0-alpha-1 development (Peter Vary reviewed by Stamatis Zampetakis) (#3081)

2022-03-17 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 2524c21  HIVE-26002: Preparing for 4.0.0-alpha-1 development (Peter Vary reviewed by Stamatis Zampetakis) (#3081)
2524c21 is described below

commit 2524c2137bc2e70bd32e43a49a15fbfc0fa93159
Author: pvary 
AuthorDate: Thu Mar 17 17:25:12 2022 +0100

HIVE-26002: Preparing for 4.0.0-alpha-1 development (Peter Vary reviewed by Stamatis Zampetakis) (#3081)
---
 accumulo-handler/pom.xml   |  2 +-
 beeline/pom.xml|  2 +-
 classification/pom.xml |  2 +-
 cli/pom.xml|  2 +-
 common/pom.xml |  2 +-
 contrib/pom.xml|  2 +-
 druid-handler/pom.xml  |  2 +-
 hbase-handler/pom.xml  |  2 +-
 hcatalog/core/pom.xml  |  2 +-
 hcatalog/hcatalog-pig-adapter/pom.xml  |  4 ++--
 hcatalog/pom.xml   |  4 ++--
 hcatalog/server-extensions/pom.xml |  2 +-
 hcatalog/webhcat/java-client/pom.xml   |  2 +-
 hcatalog/webhcat/svr/pom.xml   |  2 +-
 hplsql/pom.xml |  2 +-
 iceberg/iceberg-catalog/pom.xml|  2 +-
 iceberg/iceberg-handler/pom.xml|  2 +-
 iceberg/iceberg-shading/pom.xml|  2 +-
 iceberg/patched-iceberg-api/pom.xml|  2 +-
 iceberg/patched-iceberg-core/pom.xml   |  2 +-
 iceberg/pom.xml|  4 ++--
 itests/custom-serde/pom.xml|  2 +-
 itests/custom-udfs/pom.xml |  2 +-
 itests/custom-udfs/udf-classloader-udf1/pom.xml|  2 +-
 itests/custom-udfs/udf-classloader-udf2/pom.xml|  2 +-
 itests/custom-udfs/udf-classloader-util/pom.xml|  2 +-
 .../custom-udfs/udf-vectorized-badexample/pom.xml  |  2 +-
 itests/hcatalog-unit/pom.xml   |  2 +-
 itests/hive-blobstore/pom.xml  |  2 +-
 itests/hive-jmh/pom.xml|  2 +-
 itests/hive-minikdc/pom.xml|  2 +-
 itests/hive-unit-hadoop2/pom.xml   |  2 +-
 itests/hive-unit/pom.xml   |  2 +-
 .../InformationSchemaWithPrivilegeTestBase.java|  2 +-
 itests/pom.xml |  2 +-
 itests/qtest-accumulo/pom.xml  |  2 +-
 itests/qtest-druid/pom.xml |  2 +-
 itests/qtest-iceberg/pom.xml   |  2 +-
 itests/qtest-kudu/pom.xml  |  2 +-
 itests/qtest-spark/pom.xml |  2 +-
 itests/qtest/pom.xml   |  2 +-
 itests/test-serde/pom.xml  |  2 +-
 itests/util/pom.xml|  2 +-
 .../hadoop/hive/ql/qoption/QTestSysDbHandler.java  |  2 +-
 jdbc-handler/pom.xml   |  2 +-
 jdbc/pom.xml   |  2 +-
 kafka-handler/pom.xml  |  2 +-
 kryo-registrator/pom.xml   |  2 +-
 kudu-handler/pom.xml   |  2 +-
 llap-client/pom.xml|  2 +-
 llap-common/pom.xml|  2 +-
 llap-ext-client/pom.xml|  2 +-
 llap-server/pom.xml|  2 +-
 llap-tez/pom.xml   |  2 +-
 metastore/pom.xml  |  2 +-
 ...hive.sql => hive-schema-4.0.0-alpha-1.hive.sql} |  4 ++--
 ...sql => upgrade-3.1.0-to-4.0.0-alpha-1.hive.sql} |  8 +++
 metastore/scripts/upgrade/hive/upgrade.order.hive  |  1 +
 packaging/pom.xml  |  2 +-
 parser/pom.xml |  2 +-
 pom.xml|  8 +++
 ql/pom.xml |  2 +-
 .../metastore/txn/TestCompactionTxnHandler.java|  2 +-
 .../hive/ql/txn/compactor/CompactorTest.java   |  2 +-
 .../hadoop/hive/ql/txn/compactor/TestCleaner.java  |  2 +-
 .../ql/txn/compactor/TestCompactionMetrics.java| 26 +++---
 .../hive/ql/txn/compactor/TestInitiator.java   |  8 +++
 .../hadoop/hive/ql/txn/compactor/TestWorker.java   |  4 ++--
 .../clientpositive/materialized_view_parquet.q |  2 ++
 ql/src/test/queries/clientpositive/parquet_stats.q |  2 ++
 .../llap/materialized_view_parquet.q.out   |  2 +-
 .../clientpositive/llap/parquet_stats.q.out|  2 +-
 .../test/r

[hive] branch master updated: HIVE-26040: Fix DirectSqlUpdateStat.getNextCSIdForMPartitionColumnStatistics for mssql (Peter Vary reviewed by Marton Bod) (#3112)

2022-03-17 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 1213ad3  HIVE-26040: Fix DirectSqlUpdateStat.getNextCSIdForMPartitionColumnStatistics for mssql (Peter Vary reviewed by Marton Bod) (#3112)
1213ad3 is described below

commit 1213ad3f0ae0e21e7519dc28b8b6d1401cdd1441
Author: pvary 
AuthorDate: Thu Mar 17 12:25:13 2022 +0100

HIVE-26040: Fix DirectSqlUpdateStat.getNextCSIdForMPartitionColumnStatistics for mssql (Peter Vary reviewed by Marton Bod) (#3112)
---
 .../java/org/apache/hadoop/hive/metastore/DirectSqlUpdateStat.java  | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/DirectSqlUpdateStat.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/DirectSqlUpdateStat.java
index 6bd1c3d..648fb0d 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/DirectSqlUpdateStat.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/DirectSqlUpdateStat.java
@@ -668,9 +668,9 @@ class DirectSqlUpdateStat {
   // the caller gets a reserved range for CSId not used by any other thread.
   boolean insertDone = false;
   while (maxCsId == 0) {
-String query = "SELECT \"NEXT_VAL\" FROM \"SEQUENCE_TABLE\" WHERE \"SEQUENCE_NAME\"= "
-+ quoteString("org.apache.hadoop.hive.metastore.model.MPartitionColumnStatistics")
-+ " FOR UPDATE";
+String query = sqlGenerator.addForUpdateClause("SELECT \"NEXT_VAL\" FROM \"SEQUENCE_TABLE\" "
++ "WHERE \"SEQUENCE_NAME\"= "
++ quoteString("org.apache.hadoop.hive.metastore.model.MPartitionColumnStatistics"));
 LOG.debug("Going to execute query " + query);
 statement = dbConn.createStatement();
 rs = statement.executeQuery(query);

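The fix above replaces a hard-coded trailing `FOR UPDATE` with `sqlGenerator.addForUpdateClause(...)`, because SQL Server does not accept the `SELECT ... FOR UPDATE` suffix and expresses row locking through table hints instead. A minimal, hypothetical sketch of such a dialect-aware helper — the class, enum, and rewrite rule are illustrative assumptions, not Hive's actual `SQLGenerator` API:

```java
public class ForUpdateSketch {
  enum Dialect { DERBY, MYSQL, POSTGRES, ORACLE, SQLSERVER }

  // Append the dialect-appropriate locking syntax to a SELECT statement.
  static String addForUpdateClause(Dialect dialect, String selectStatement) {
    switch (dialect) {
      case SQLSERVER:
        // SQL Server takes a locking hint after the table name rather than
        // a trailing FOR UPDATE clause.
        return selectStatement.replaceAll("(?i)\\bFROM\\s+(\\S+)", "FROM $1 WITH (UPDLOCK)");
      default:
        // Most other dialects accept the ANSI-style trailing clause.
        return selectStatement + " FOR UPDATE";
    }
  }

  public static void main(String[] args) {
    String base = "SELECT \"NEXT_VAL\" FROM \"SEQUENCE_TABLE\" WHERE \"SEQUENCE_NAME\"= 'x'";
    System.out.println(addForUpdateClause(Dialect.MYSQL, base));
    System.out.println(addForUpdateClause(Dialect.SQLSERVER, base));
  }
}
```

Centralizing the clause in one helper, as the patch does, keeps every direct-SQL call site portable instead of scattering per-database conditionals.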

[hive] branch master updated (16cda89 -> 80131a9)

2022-03-16 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 16cda89  HIVE-25994: Analyze table runs into ClassNotFoundException-s (Alessandro Solimando reviewed by Laszlo Bodor, Peter Vary) (#3095)
 add 80131a9  HIVE-26025: Remove IMetaStoreClient#listPartitionNames which is not used (Zhihua Deng reviewed by Peter Vary) (#3093)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/hive/metastore/HiveMetaStoreClient.java | 25 --
 .../hadoop/hive/metastore/IMetaStoreClient.java| 18 
 .../metastore/HiveMetaStoreClientPreCatalog.java   |  8 ---
 3 files changed, 51 deletions(-)


[hive] branch master updated: HIVE-25994: Analyze table runs into ClassNotFoundException-s (Alessandro Solimando reviewed by Laszlo Bodor, Peter Vary) (#3095)

2022-03-16 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 16cda89  HIVE-25994: Analyze table runs into ClassNotFoundException-s (Alessandro Solimando reviewed by Laszlo Bodor, Peter Vary) (#3095)
16cda89 is described below

commit 16cda89b2a6655e59e2f0098c84ce431fd98ae1d
Author: Alessandro Solimando 
AuthorDate: Wed Mar 16 14:32:02 2022 +0100

HIVE-25994: Analyze table runs into ClassNotFoundException-s (Alessandro Solimando reviewed by Laszlo Bodor, Peter Vary) (#3095)
---
 ql/pom.xml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/ql/pom.xml b/ql/pom.xml
index 50d8ec9..8ecc7dc 100644
--- a/ql/pom.xml
+++ b/ql/pom.xml
@@ -1072,6 +1072,7 @@
   
 
   
+  org.antlr:antlr-runtime
   org.apache.hive:hive-common
   org.apache.hive:hive-udf
   org.apache.hive:hive-parser

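The one-line fix above adds `org.antlr:antlr-runtime` to the artifact list of ql's shading configuration, so the ANTLR runtime classes end up inside the packaged jar and the analyze-time `ClassNotFoundException` disappears. For context, a sketch of the standard maven-shade-plugin shape such an include lives in — the plugin wrapper below is the generic shade-plugin layout, not a copy of Hive's actual `ql/pom.xml`:

```xml
<!-- Sketch only: generic maven-shade-plugin artifactSet layout. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <artifactSet>
      <includes>
        <include>org.antlr:antlr-runtime</include>
        <include>org.apache.hive:hive-common</include>
        <include>org.apache.hive:hive-udf</include>
        <include>org.apache.hive:hive-parser</include>
      </includes>
    </artifactSet>
  </configuration>
</plugin>
```

Any dependency omitted from `includes` is left out of the shaded jar, which is exactly how a runtime-only dependency like antlr-runtime can go missing until a code path that needs it is exercised.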

[hive] branch master updated: Disable flaky test

2022-03-16 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new f81d781  Disable flaky test
f81d781 is described below

commit f81d781c5579bdf3847c3dea3a1f1519dda4bdd6
Author: Peter Vary 
AuthorDate: Wed Mar 16 09:16:36 2022 +0100

Disable flaky test
---
 .../org/apache/hive/service/cli/session/TestSessionManagerMetrics.java   | 1 +
 1 file changed, 1 insertion(+)

diff --git a/service/src/test/org/apache/hive/service/cli/session/TestSessionManagerMetrics.java b/service/src/test/org/apache/hive/service/cli/session/TestSessionManagerMetrics.java
index 9658471..a335665 100644
--- a/service/src/test/org/apache/hive/service/cli/session/TestSessionManagerMetrics.java
+++ b/service/src/test/org/apache/hive/service/cli/session/TestSessionManagerMetrics.java
@@ -260,6 +260,7 @@ public class TestSessionManagerMetrics {
 
   }
 
+  @org.junit.Ignore("HIVE-26039")
   @Test
   public void testActiveSessionMetrics() throws Exception {
 


[hive] branch master updated: Revert "HIVE-25845: Support ColumnIndexes for Parq files (#3094)"

2022-03-16 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new efddf73  Revert "HIVE-25845: Support ColumnIndexes for Parq files (#3094)"
efddf73 is described below

commit efddf739ff1ff2962fc25c7e3edd660e67b8963f
Author: Peter Vary 
AuthorDate: Mon Mar 14 21:48:58 2022 +0100

Revert "HIVE-25845: Support ColumnIndexes for Parq files (#3094)"

This reverts commit 88054ce553604a6d939149faf4d1c3b037706aba.
---
 pom.xml  |  2 +-
 .../hive/ql/io/parquet/ParquetRecordReaderBase.java  |  3 ---
 .../vector/VectorizedParquetRecordReader.java| 20 +---
 3 files changed, 10 insertions(+), 15 deletions(-)

diff --git a/pom.xml b/pom.xml
index db34aa9..804aadd 100644
--- a/pom.xml
+++ b/pom.xml
@@ -189,7 +189,7 @@
 
 4.0.3
 2.8
-1.11.2
+1.11.1
 0.16.0
 1.5.6
 2.5.0
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ParquetRecordReaderBase.java b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ParquetRecordReaderBase.java
index 1227d52..5235edc 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ParquetRecordReaderBase.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ParquetRecordReaderBase.java
@@ -107,19 +107,16 @@ public class ParquetRecordReaderBase {
   final List<BlockMetaData> splitGroup = new ArrayList<BlockMetaData>();
   final long splitStart = ((FileSplit) oldSplit).getStart();
   final long splitLength = ((FileSplit) oldSplit).getLength();
-  long blockRowCount = 0;
   for (final BlockMetaData block : blocks) {
 final long firstDataPage = block.getColumns().get(0).getFirstDataPageOffset();
 if (firstDataPage >= splitStart && firstDataPage < splitStart + splitLength) {
   splitGroup.add(block);
-  blockRowCount += block.getRowCount();
 }
   }
   if (splitGroup.isEmpty()) {
 LOG.warn("Skipping split, could not find row group in: " + oldSplit);
 return null;
   }
-  LOG.debug("split group size: {}, row count: {}", splitGroup.size(), blockRowCount);
 
   FilterCompat.Filter filter = setFilter(jobConf, fileMetaData.getSchema());
   if (filter != null) {
   if (filter != null) {
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedParquetRecordReader.java b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedParquetRecordReader.java
index 26fb96c..d17ddd5 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedParquetRecordReader.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedParquetRecordReader.java
@@ -157,13 +157,12 @@ public class VectorizedParquetRecordReader extends ParquetRecordReaderBase
   jobConf = conf;
   isReadCacheOnly = HiveConf.getBoolVar(jobConf, ConfVars.LLAP_IO_CACHE_ONLY);
   rbCtx = Utilities.getVectorizedRowBatchCtx(jobConf);
-  ParquetInputSplit inputSplit = getSplit(oldInputSplit, jobConf);
-  // use jobConf consistently throughout, as getSplit clones it & adds filter to it.
+  ParquetInputSplit inputSplit = getSplit(oldInputSplit, conf);
   if (inputSplit != null) {
-initialize(inputSplit, jobConf);
+initialize(inputSplit, conf);
   }
   FileSplit fileSplit = (FileSplit) oldInputSplit;
-  initPartitionValues(fileSplit, jobConf);
+  initPartitionValues(fileSplit, conf);
   bucketIdentifier = BucketIdentifier.from(conf, fileSplit.getPath());
 } catch (Throwable e) {
   LOG.error("Failed to create the vectorized reader due to exception " + e);
@@ -271,6 +270,9 @@ public class VectorizedParquetRecordReader extends ParquetRecordReaderBase
   }
 }
 
+for (BlockMetaData block : blocks) {
+  this.totalRowCount += block.getRowCount();
+}
 this.fileSchema = footer.getFileMetaData().getSchema();
 this.writerTimezone = DataWritableReadSupport
 .getWriterTimeZoneId(footer.getFileMetaData().getKeyValueMetaData());
@@ -279,12 +281,9 @@ public class VectorizedParquetRecordReader extends ParquetRecordReaderBase
 requestedSchema = DataWritableReadSupport
   .getRequestedSchema(indexAccess, columnNamesList, columnTypesList, fileSchema, configuration);
 
-//TODO: For data cache this needs to be fixed and passed to reader.
-//Path path = wrapPathForCache(file, cacheKey, configuration, blocks, cacheTag);
+Path path = wrapPathForCache(file, cacheKey, configuration, blocks, cacheTag);
 this.reader = new ParquetFileReader(
-  configuration, footer.getFileMetaData(), file, blocks, requestedSchema.getColumns());
-this.totalRowCount = this.reader.getFilteredRecordCount();
-LOG.debug("totalRowCount: {}"

[hive] branch master updated (1139c4b -> e87967a)

2022-03-10 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 1139c4b  HIVE-25688: Addendum (Denys Kuzmenko, reviewed by Karen Coppage)
 add e87967a  HIVE-25935: Cleanup IMetaStoreClient#getPartitionsByNames APIs (Peter Vary reviewed by Zhihua Deng) (#3072)

No new revisions were added by this update.

Summary of changes:
 .../metastore/TestHiveMetastoreTransformer.java|  20 ++--
 .../org/apache/hadoop/hive/ql/metadata/Hive.java   |  21 ++--
 .../ql/metadata/SessionHiveMetaStoreClient.java|  17 ++--
 .../ql/txn/compactor/RemoteCompactorThread.java|   6 +-
 ...nHiveMetastoreClientGetPartitionsTempTable.java |   6 --
 .../hadoop/hive/metastore/HiveMetaStoreClient.java |  92 +
 .../hadoop/hive/metastore/IMetaStoreClient.java| 110 ++---
 .../hive/metastore/utils/MetaStoreUtils.java   |  24 -
 .../hadoop/hive/metastore/PartitionIterable.java   |   9 +-
 .../metastore/HiveMetaStoreClientPreCatalog.java   |  49 +
 .../apache/hadoop/hive/metastore/TestStats.java|  26 ++---
 .../hive/metastore/client/TestGetPartitions.java   |   2 +-
 12 files changed, 99 insertions(+), 283 deletions(-)


[hive] branch master updated: HIVE-25894: Table migration to Iceberg doesn't remove HMS partitions (Peter Vary reviewed by Marton Bod and Laszlo Pinter) (#3061)

2022-03-04 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 274a21d  HIVE-25894: Table migration to Iceberg doesn't remove HMS partitions (Peter Vary reviewed by Marton Bod and Laszlo Pinter) (#3061)
274a21d is described below

commit 274a21da875f9a37e717e282580d2c2dfc7174bb
Author: pvary 
AuthorDate: Fri Mar 4 14:37:47 2022 +0100

HIVE-25894: Table migration to Iceberg doesn't remove HMS partitions (Peter Vary reviewed by Marton Bod and Laszlo Pinter) (#3061)
---
 .../org/apache/iceberg/hive/TestHiveMetastore.java |  6 
 .../iceberg/mr/hive/HiveIcebergMetaHook.java   | 42 --
 .../iceberg/mr/hive/TestHiveIcebergMigration.java  |  7 
 .../hive/TestHiveIcebergStorageHandlerNoScan.java  |  2 +-
 .../apache/hadoop/hive/metastore/HiveMetaHook.java |  4 +--
 .../hadoop/hive/metastore/HiveMetaStoreClient.java |  6 ++--
 .../apache/hadoop/hive/metastore/ObjectStore.java  |  4 +--
 7 files changed, 51 insertions(+), 20 deletions(-)

diff --git a/iceberg/iceberg-catalog/src/test/java/org/apache/iceberg/hive/TestHiveMetastore.java b/iceberg/iceberg-catalog/src/test/java/org/apache/iceberg/hive/TestHiveMetastore.java
index c359277..0d86d6f 100644
--- a/iceberg/iceberg-catalog/src/test/java/org/apache/iceberg/hive/TestHiveMetastore.java
+++ b/iceberg/iceberg-catalog/src/test/java/org/apache/iceberg/hive/TestHiveMetastore.java
@@ -29,11 +29,13 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hive.conf.HiveConf;
 import org.apache.hadoop.hive.metastore.HMSHandler;
 import org.apache.hadoop.hive.metastore.IHMSHandler;
+import org.apache.hadoop.hive.metastore.IMetaStoreClient;
 import org.apache.hadoop.hive.metastore.RetryingHMSHandler;
 import org.apache.hadoop.hive.metastore.TSetIpAddressProcessor;
 import org.apache.hadoop.hive.metastore.api.Table;
 import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
 import org.apache.hadoop.hive.metastore.utils.TestTxnDbUtil;
+import org.apache.iceberg.ClientPool;
 import org.apache.iceberg.catalog.TableIdentifier;
 import org.apache.iceberg.common.DynConstructors;
 import org.apache.iceberg.common.DynMethods;
@@ -201,6 +203,10 @@ public class TestHiveMetastore {
 return getTable(identifier.namespace().toString(), identifier.name());
   }
 
+  public  R run(ClientPool.Action action) throws InterruptedException, TException {
+return clientPool.run(action, false);
+  }
+
   private TServer newThriftServer(TServerSocket socket, int poolSize, HiveConf conf) throws Exception {
 HiveConf serverConf = new HiveConf(conf);
 serverConf.set(HiveConf.ConfVars.METASTORECONNECTURLKEY.varname, "jdbc:derby:" + getDerbyPath() + ";create=true");
diff --git a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergMetaHook.java b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergMetaHook.java
index 21b45d7..cb036dd 100644
--- a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergMetaHook.java
+++ b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergMetaHook.java
@@ -34,6 +34,7 @@ import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hive.common.TableName;
 import org.apache.hadoop.hive.metastore.HiveMetaHook;
+import org.apache.hadoop.hive.metastore.PartitionDropOptions;
 import org.apache.hadoop.hive.metastore.api.EnvironmentContext;
 import org.apache.hadoop.hive.metastore.api.FieldSchema;
 import org.apache.hadoop.hive.metastore.api.MetaException;
@@ -44,10 +45,14 @@ import org.apache.hadoop.hive.metastore.partition.spec.PartitionSpecProxy;
 import org.apache.hadoop.hive.metastore.utils.MetaStoreUtils;
 import org.apache.hadoop.hive.ql.ddl.table.AlterTableType;
 import org.apache.hadoop.hive.ql.io.AcidUtils;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
 import org.apache.hadoop.hive.ql.parse.PartitionTransform;
 import org.apache.hadoop.hive.ql.parse.PartitionTransformSpec;
+import org.apache.hadoop.hive.ql.session.SessionState;
 import org.apache.hadoop.hive.ql.session.SessionStateUtil;
 import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.iceberg.BaseMetastoreTableOperations;
 import org.apache.iceberg.BaseTable;
 import org.apache.iceberg.CatalogUtil;
@@ -84,6 +89,7 @@ import org.apache.iceberg.relocated.com.google.common.collect.ImmutableSet;
 import org.apache.iceberg.relocated.com.google.common.collect.Lists;
 import org.apache.iceberg.types.Type;
 import org.apache.iceberg.util.Pair;
+import org.apache.thrift.TException;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -108,6 +114,9 @@ public class HiveIceber

[hive] branch master updated: HIVE-25997: Fix release source packaging (Zoltan Haindrich and Peter Vary, reviewed by Stamatis Zampetakis) (#3067)

2022-03-03 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 95c6155  HIVE-25997: Fix release source packaging (Zoltan Haindrich and Peter Vary, reviewed by Stamatis Zampetakis) (#3067)
95c6155 is described below

commit 95c6155677b5a288b6bc571b11caf2c8eb80825f
Author: pvary 
AuthorDate: Thu Mar 3 15:48:43 2022 +0100

HIVE-25997: Fix release source packaging (Zoltan Haindrich and Peter Vary, reviewed by Stamatis Zampetakis) (#3067)
---
 Jenkinsfile |  7 +++
 packaging/src/main/assembly/src.xml | 14 ++
 2 files changed, 21 insertions(+)

diff --git a/Jenkinsfile b/Jenkinsfile
index b079d51..70d5b91 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -304,6 +304,13 @@ dev-support/nightly
 '''
 buildHive("install -Dtest=noMatches -Pdist -pl packaging -am")
 }
+stage('Verify') {
+sh '''#!/bin/bash
+set -e
+tar -xzf packaging/target/apache-hive-*-nightly-*-src.tar.gz
+'''
+buildHive("install -Dtest=noMatches -Pdist,iceberg -f apache-hive-*-nightly-*/pom.xml")
+}
   }
   }
   for (int i = 0; i < splits.size(); i++) {
diff --git a/packaging/src/main/assembly/src.xml b/packaging/src/main/assembly/src.xml
index 625fcf2..8fc5bdc 100644
--- a/packaging/src/main/assembly/src.xml
+++ b/packaging/src/main/assembly/src.xml
@@ -100,12 +100,26 @@
 storage-api/**/*
 standalone-metastore/metastore-common/**/*
 standalone-metastore/metastore-server/**/*
+standalone-metastore/metastore-tools/**/*
+standalone-metastore/src/assembly/src.xml
+standalone-metastore/pom.xml
 streaming/**/*
 testutils/**/*
 upgrade-acid/**/*
 vector-code-gen/**/*
 kryo-registrator/**/*
 kudu-handler/**/*
+parser/**/*
+udf/**/*
+iceberg/iceberg-catalog/**/*
+iceberg/iceberg-handler/**/*
+iceberg/iceberg-shading/**/*
+iceberg/patched-iceberg-api/**/*
+iceberg/patched-iceberg-core/**/*
+iceberg/patched-iceberg-data/**/*
+iceberg/patched-iceberg-orc/**/*
+iceberg/checkstyle/**/*
+iceberg/pom.xml
   
   /
 


[hive] branch master updated: HIVE-25665: Checkstyle LGPL files must not be in the release sources/binaries (Peter Vary reviewed by Zoltan Haindrich) (#3063)

2022-03-01 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 4aa4c89  HIVE-25665: Checkstyle LGPL files must not be in the release sources/binaries (Peter Vary reviewed by Zoltan Haindrich) (#3063)
4aa4c89 is described below

commit 4aa4c892f35185b1e2159f72d43d9a55abb34c18
Author: pvary 
AuthorDate: Tue Mar 1 23:16:17 2022 +0100

HIVE-25665: Checkstyle LGPL files must not be in the release sources/binaries (Peter Vary reviewed by Zoltan Haindrich) (#3063)
---
 checkstyle/checkstyle-noframes-sorted.xsl  | 195 -
 .../checkstyle/checkstyle-noframes-sorted.xsl  | 195 -
 .../checkstyle/checkstyle-noframes-sorted.xsl  | 195 -
 3 files changed, 585 deletions(-)

diff --git a/checkstyle/checkstyle-noframes-sorted.xsl b/checkstyle/checkstyle-noframes-sorted.xsl
deleted file mode 100644
index 9c0ac30..000
--- a/checkstyle/checkstyle-noframes-sorted.xsl
+++ /dev/null
@@ -1,195 +0,0 @@
-<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
-[... remaining deleted stylesheet lines omitted: the XSLT/HTML markup was stripped by the list archive ...]
diff --git a/standalone-metastore/checkstyle/checkstyle-noframes-sorted.xsl b/standalone-metastore/checkstyle/checkstyle-noframes-sorted.xsl
deleted file mode 100644
index 9c0ac30..000
--- a/standalone-metastore/checkstyle/checkstyle-noframes-sorted.xsl
+++ /dev/null
@@ -1,195 +0,0 @@
-<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
-[... remaining deleted stylesheet lines omitted: the XSLT/HTML markup was stripped by the list archive ...]

[hive] branch master updated: HIVE-25961: Altering partition specification parameters for Iceberg tables are not working (Peter Vary reviewed by Laszlo Pinter) (#3035)

2022-02-16 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 970c41c  HIVE-25961: Altering partition specification parameters for Iceberg tables are not working (Peter Vary reviewed by Laszlo Pinter) (#3035)
970c41c is described below

commit 970c41c302719cdca9bb9d004c6beca19451188b
Author: pvary 
AuthorDate: Thu Feb 17 07:28:08 2022 +0100

HIVE-25961: Altering partition specification parameters for Iceberg tables are not working (Peter Vary reviewed by Laszlo Pinter) (#3035)
---
 .../apache/iceberg/mr/hive/IcebergTableUtil.java   |  73 ++
 .../hive/TestHiveIcebergStorageHandlerNoScan.java  | 111 +
 .../hadoop/hive/ql/parse/PartitionTransform.java   |   5 +-
 3 files changed, 143 insertions(+), 46 deletions(-)

diff --git a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/IcebergTableUtil.java b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/IcebergTableUtil.java
index 9a1f316..63ddfc3 100644
--- a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/IcebergTableUtil.java
+++ b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/IcebergTableUtil.java
@@ -21,14 +21,11 @@ package org.apache.iceberg.mr.hive;
 
 import java.util.List;
 import java.util.Properties;
-import java.util.stream.Collectors;
-import java.util.stream.IntStream;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hive.metastore.api.hive_metastoreConstants;
 import org.apache.hadoop.hive.ql.QueryState;
 import org.apache.hadoop.hive.ql.parse.PartitionTransformSpec;
 import org.apache.hadoop.hive.ql.session.SessionStateUtil;
-import org.apache.iceberg.PartitionField;
 import org.apache.iceberg.PartitionSpec;
 import org.apache.iceberg.Schema;
 import org.apache.iceberg.Table;
@@ -146,51 +143,39 @@ public class IcebergTableUtil {
   return;
 }
 
-List newPartitionNames =
-newPartitionSpec.fields().stream().map(PartitionField::name).collect(Collectors.toList());
-List currentPartitionNames = table.spec().fields().stream().map(PartitionField::name)
-.collect(Collectors.toList());
-List intersectingPartitionNames =
-currentPartitionNames.stream().filter(newPartitionNames::contains).collect(Collectors.toList());
+// delete every field from the old partition spec
+UpdatePartitionSpec updatePartitionSpec = table.updateSpec().caseSensitive(false);
+table.spec().fields().forEach(field -> updatePartitionSpec.removeField(field.name()));
 
-// delete those partitions which are not present among the new partion spec
-UpdatePartitionSpec updatePartitionSpec = table.updateSpec();
-currentPartitionNames.stream().filter(p -> !intersectingPartitionNames.contains(p))
-.forEach(updatePartitionSpec::removeField);
-updatePartitionSpec.apply();
-
-// add new partitions which are not yet present
 List partitionTransformSpecList = SessionStateUtil
 .getResource(configuration, hive_metastoreConstants.PARTITION_TRANSFORM_SPEC)
 .map(o -> (List) o).orElseGet(() -> null);
-IntStream.range(0, partitionTransformSpecList.size())
-.filter(i -> !intersectingPartitionNames.contains(newPartitionSpec.fields().get(i).name()))
-.forEach(i -> {
-  PartitionTransformSpec spec = partitionTransformSpecList.get(i);
-  switch (spec.getTransformType()) {
-case IDENTITY:
-  updatePartitionSpec.addField(spec.getColumnName());
-  break;
-case YEAR:
-  
updatePartitionSpec.addField(Expressions.year(spec.getColumnName()));
-  break;
-case MONTH:
-  
updatePartitionSpec.addField(Expressions.month(spec.getColumnName()));
-  break;
-case DAY:
-  
updatePartitionSpec.addField(Expressions.day(spec.getColumnName()));
-  break;
-case HOUR:
-  
updatePartitionSpec.addField(Expressions.hour(spec.getColumnName()));
-  break;
-case TRUNCATE:
-  
updatePartitionSpec.addField(Expressions.truncate(spec.getColumnName(), 
spec.getTransformParam().get()));
-  break;
-case BUCKET:
-  
updatePartitionSpec.addField(Expressions.bucket(spec.getColumnName(), 
spec.getTransformParam().get()));
-  break;
-  }
-});
+
+partitionTransformSpecList.forEach(spec -> {
+  switch (spec.getTransformType()) {
+case IDENTITY:
+  updatePartitionSpec.addField(spec.getColumnName());
+  break;
+case YEAR:
+  updatePartitionSpec.addField(Expressions.year(spec.getColumnName()));
+  break;
+case MONTH:
+  
updateP

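The diff above (truncated by the mail archiver) replaces the old intersection-based spec diffing with a simpler two-step rebuild: unconditionally remove every existing partition field, then re-add one field per requested transform. A minimal, self-contained sketch of that flow — plain Java, with a hypothetical `TransformSpec` record standing in for Hive's `PartitionTransformSpec` and a plain list standing in for Iceberg's `UpdatePartitionSpec`:

```java
import java.util.ArrayList;
import java.util.List;

public class SpecRebuildSketch {
    // Hypothetical stand-in for PartitionTransformSpec; not Iceberg's API.
    record TransformSpec(String column, String type, Integer param) {}

    static List<String> rebuild(List<String> currentFields, List<TransformSpec> wanted) {
        List<String> fields = new ArrayList<>(currentFields);
        fields.clear(); // step 1: drop every old field, no intersection bookkeeping
        for (TransformSpec spec : wanted) { // step 2: re-add from the new spec
            switch (spec.type()) {
                case "IDENTITY" -> fields.add(spec.column());
                case "TRUNCATE", "BUCKET" ->
                    // parameterized transforms carry a width / bucket count
                    fields.add(spec.column() + "_" + spec.type().toLowerCase() + "_" + spec.param());
                default -> // YEAR, MONTH, DAY, HOUR
                    fields.add(spec.column() + "_" + spec.type().toLowerCase());
            }
        }
        return fields;
    }
}
```

The field-name shapes here are illustrative only; the real code hands each transform to `UpdatePartitionSpec.addField` via Iceberg's `Expressions` helpers.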
[hive] branch master updated: HIVE-25912: Drop external table throw NPE if the location set to ROOT (Fachuan Bai reviewed by Peter Vary) (#3009)

2022-02-13 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 108f151  HIVE-25912: Drop external table throw NPE if the location set 
to ROOT (Fachuan Bai reviewed by Peter Vary) (#3009)
108f151 is described below

commit 108f1513dd353a019f9182dee0a0eba35b0d6f9f
Author: 白发川(惊帆) 
AuthorDate: Mon Feb 14 14:48:47 2022 +0800

HIVE-25912: Drop external table throw NPE if the location set to ROOT 
(Fachuan Bai reviewed by Peter Vary) (#3009)

Closes #3009
---
 .../hive/metastore/utils/MetaStoreUtils.java   |  8 
 .../apache/hadoop/hive/metastore/HMSHandler.java   |  5 +
 .../client/TestTablesCreateDropAlterTruncate.java  | 21 +
 .../metastore/utils/TestMetaStoreServerUtils.java  | 22 ++
 4 files changed, 56 insertions(+)

diff --git a/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/utils/MetaStoreUtils.java b/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/utils/MetaStoreUtils.java
index 93c1000..07be990 100644
--- a/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/utils/MetaStoreUtils.java
+++ b/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/utils/MetaStoreUtils.java
@@ -186,6 +186,14 @@ public class MetaStoreUtils {
 return colNames;
   }
 
+  /*
+   * Check the table storage location must not be root path.
+   */
+  public static boolean validateTblStorage(StorageDescriptor sd) {
+    return !(StringUtils.isNotBlank(sd.getLocation())
+        && new Path(sd.getLocation()).getParent() == null);
+  }
+
   /**
* validateName
*
diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
index bd74b3e..2d57e58 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
@@ -2282,6 +2282,11 @@ public class HMSHandler extends FacebookBase implements 
IHMSHandler {
   + " is not a valid object name");
 }
 
+    if (!MetaStoreUtils.validateTblStorage(tbl.getSd())) {
+      throw new InvalidObjectException(tbl.getTableName()
+          + " location must not be root path");
+    }
+
 if (!tbl.isSetCatName()) {
   tbl.setCatName(getDefaultCatalog(conf));
 }
diff --git a/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/client/TestTablesCreateDropAlterTruncate.java b/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/client/TestTablesCreateDropAlterTruncate.java
index a6ea8cd..b92790a 100644
--- a/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/client/TestTablesCreateDropAlterTruncate.java
+++ b/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/client/TestTablesCreateDropAlterTruncate.java
@@ -78,6 +78,7 @@ import java.util.Set;
 import static org.apache.hadoop.hive.metastore.TestHiveMetaStore.createSourceTable;
 import static org.apache.hadoop.hive.metastore.Warehouse.DEFAULT_CATALOG_NAME;
 import static org.apache.hadoop.hive.metastore.Warehouse.DEFAULT_DATABASE_NAME;
+import static org.junit.Assert.assertThrows;
 
 /**
  * Test class for IMetaStoreClient API. Testing the Table related functions for metadata
@@ -357,6 +358,26 @@ public class TestTablesCreateDropAlterTruncate extends MetaStoreClientTest {
 createdTable.getSd().getLocation());
   }
 
+
+  @Test
+  public void testCreateTableRooPathLocationInSpecificDatabase() {
+    Table table = new Table();
+    StorageDescriptor sd = new StorageDescriptor();
+    List<FieldSchema> cols = new ArrayList<>();
+    sd.setLocation("hdfs://localhost:8020");
+    table.setDbName(DEFAULT_DATABASE);
+    table.setTableName("test_table_2_with_root_path");
+    cols.add(new FieldSchema("column_name", "int", null));
+    sd.setCols(cols);
+    sd.setSerdeInfo(new SerDeInfo());
+    table.setSd(sd);
+
+    Exception exception = assertThrows(InvalidObjectException.class, () -> client.createTable(table));
+    Assert.assertEquals("Storage descriptor location",
+        table.getTableName() + " location must not be root path",
+        exception.getMessage());
+  }
+
   @Test
   public void testCreateTableDefaultValuesView() throws Exception {
 Table table = new Table();
diff --git a/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/utils

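The new `validateTblStorage` check reduces to: a non-blank location whose path has no parent component is a filesystem root, and roots are rejected. A sketch of the same logic using `java.net.URI` in place of Hadoop's `Path` (an assumption: locations are URI-style, like the `hdfs://localhost:8020` in the test above):

```java
import java.net.URI;

public class RootPathCheck {
    // Mirrors the intent of the new validateTblStorage check: returns true when
    // the location resolves to a filesystem root (empty or "/" path component),
    // which is the case Hadoop's Path reports as getParent() == null.
    static boolean isRootLocation(String location) {
        if (location == null || location.isBlank()) {
            return false; // blank locations are validated elsewhere
        }
        String path = URI.create(location).getPath();
        return path == null || path.isEmpty() || path.equals("/");
    }
}
```

Note the sketch answers the inverse question ("is it a root?"); the real method returns true when the storage descriptor is valid.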
[hive] branch master updated: HIVE-25841: Improve performance of deleteColumnStatsState (Peter Vary reviewed by Zoltan Haindrich) (#2914)

2022-01-05 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 3c9ea5d  HIVE-25841: Improve performance of deleteColumnStatsState 
(Peter Vary reviewed by Zoltan Haindrich) (#2914)
3c9ea5d is described below

commit 3c9ea5d8d6ecc0d3749e22b28bfb7690dd3f1be1
Author: pvary 
AuthorDate: Wed Jan 5 13:49:21 2022 +0100

HIVE-25841: Improve performance of deleteColumnStatsState (Peter Vary 
reviewed by Zoltan Haindrich) (#2914)
---
 .../hadoop/hive/metastore/MetaStoreDirectSql.java  | 24 +-
 1 file changed, 19 insertions(+), 5 deletions(-)

diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
index b200608..d24128c 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
@@ -2952,14 +2952,28 @@ class MetaStoreDirectSql {
   }
 
   public void deleteColumnStatsState(long tbl_id) throws MetaException {
-    // @formatter:off
-    String queryText = ""
-        + "delete from " + PARTITION_PARAMS + " "
-        + " where "
+    String queryText;
+    switch (dbType.dbType) {
+      case MYSQL:
+        // @formatter:off
+        queryText = ""
+            + "delete pp from " + PARTITION_PARAMS + " pp, " + PARTITIONS + " p"
+            + " where"
+            + "   p.\"PART_ID\" = pp.\"PART_ID\" AND"
+            + "   p.\"TBL_ID\" = " + tbl_id
+            + "  and \"PARAM_KEY\" = '"+StatsSetupConst.COLUMN_STATS_ACCURATE + "'";
+        // @formatter:on
+        break;
+      default:
+        // @formatter:off
+        queryText = ""
+            + "delete from " + PARTITION_PARAMS
+            + " where"
         + "   \"PART_ID\" in (select p.\"PART_ID\"  from " + PARTITIONS + " p where"
         + "   p.\"TBL_ID\" =  " + tbl_id + ")"
         + "  and \"PARAM_KEY\" = '"+StatsSetupConst.COLUMN_STATS_ACCURATE + "'";
-    // @formatter:on
+        // @formatter:on
+    }
 
 try {
   executeNoResult(queryText);

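The fix above is a dialect switch: on MySQL the delete becomes a multi-table `DELETE … FROM a, b WHERE …` join, while every other backend keeps the portable `DELETE … WHERE id IN (subquery)` form. A simplified sketch of the query selection (identifiers are inlined and unquoted here; the real code builds them from constants and quotes column names):

```java
public class DeleteStatsSql {
    // Simplified illustration of the dialect switch in deleteColumnStatsState.
    static String deleteColumnStatsQuery(boolean isMySql, long tblId) {
        if (isMySql) {
            // MySQL: a multi-table DELETE join avoids the IN-subquery, which
            // MySQL tends to execute poorly on large PARTITION_PARAMS tables
            return "delete pp from PARTITION_PARAMS pp, PARTITIONS p"
                + " where p.PART_ID = pp.PART_ID and p.TBL_ID = " + tblId
                + " and pp.PARAM_KEY = 'COLUMN_STATS_ACCURATE'";
        }
        // Portable fallback: DELETE with an IN-subquery over PARTITIONS
        return "delete from PARTITION_PARAMS"
            + " where PART_ID in (select p.PART_ID from PARTITIONS p where p.TBL_ID = " + tblId + ")"
            + " and PARAM_KEY = 'COLUMN_STATS_ACCURATE'";
    }
}
```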

[hive] branch master updated (3a12055 -> bb26fac)

2021-12-21 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 3a12055  HIVE-25786: Auto-close browser window/tab after successful 
auth with SSO (Saihemanth Gantasala via Naveen Gangam)
 add bb26fac  HIVE-25792: Recompile the query if CBO has failed (Peter Vary 
reviewed by Stamatis Zampetakis and Zoltan Haindrich)(#2865)

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/hadoop/hive/conf/HiveConf.java |   7 +-
 .../TestReExecuteKilledTezAMQueryPlugin.java   |   2 +-
 .../java/org/apache/hadoop/hive/ql/Compiler.java   |  19 +-
 ql/src/java/org/apache/hadoop/hive/ql/Driver.java  |   9 +-
 .../org/apache/hadoop/hive/ql/DriverFactory.java   |   9 +-
 .../java/org/apache/hadoop/hive/ql/Executor.java   |   6 +-
 .../java/org/apache/hadoop/hive/ql/HookRunner.java |  27 ++-
 .../apache/hadoop/hive/ql/hooks/HookContext.java   |   6 +-
 .../hadoop/hive/ql/hooks/PrivateHookContext.java   |  12 ++
 .../hadoop/hive/ql/parse/CBOFallbackStrategy.java  |  24 ++-
 .../hadoop/hive/ql/parse/CalcitePlanner.java   |  26 +--
 .../hive/ql/parse/ExplainSemanticAnalyzer.java |   7 +-
 .../hadoop/hive/ql/reexec/IReExecutionPlugin.java  |  62 ++-
 .../hadoop/hive/ql/reexec/ReCompileException.java  |  18 +-
 .../hive/ql/reexec/ReCompileWithoutCBOPlugin.java  |  85 +
 .../apache/hadoop/hive/ql/reexec/ReExecDriver.java | 196 -
 .../hive/ql/reexec/ReExecuteLostAMQueryPlugin.java |  14 +-
 .../hive/ql/reexec/ReExecutionDagSubmitPlugin.java |  14 +-
 .../hive/ql/reexec/ReExecutionOverlayPlugin.java   |  11 +-
 .../hadoop/hive/ql/reexec/ReOptimizePlugin.java|   2 +-
 .../cbo_fallback_wrong_configuration_exception.q   |   6 +
 ql/src/test/queries/clientpositive/retry_failure.q |   2 +-
 .../queries/clientpositive/retry_failure_oom.q |   2 +-
 .../queries/clientpositive/retry_failure_reorder.q |   2 +-
 .../clientpositive/retry_failure_stat_changes.q|   2 +-
 .../queries/clientpositive/runtime_stats_hs2.q |   2 +-
 .../queries/clientpositive/vector_retry_failure.q  |   2 +-
 ...bo_fallback_wrong_configuration_exception.q.out |   1 +
 28 files changed, 374 insertions(+), 201 deletions(-)
 copy itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/security/TestZookeeperTokenStorePlain.java => ql/src/java/org/apache/hadoop/hive/ql/reexec/ReCompileException.java (64%)
 create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/reexec/ReCompileWithoutCBOPlugin.java
 create mode 100644 ql/src/test/queries/clientnegative/cbo_fallback_wrong_configuration_exception.q
 create mode 100644 ql/src/test/results/clientnegative/cbo_fallback_wrong_configuration_exception.q.out


[hive] branch master updated: HIVE-25764: Add reason for the compaction failure message (Peter Vary reviewed by Denys Kuzmenko) (#2836)

2021-12-12 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new bc9cb14  HIVE-25764: Add reason for the compaction failure message 
(Peter Vary reviewed by Denys Kuzmenko) (#2836)
bc9cb14 is described below

commit bc9cb14cf6e38b18022ccda0dd1f812d93e383a9
Author: pvary 
AuthorDate: Sun Dec 12 21:27:32 2021 +0100

HIVE-25764: Add reason for the compaction failure message (Peter Vary 
reviewed by Denys Kuzmenko) (#2836)
---
 ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Worker.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Worker.java b/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Worker.java
index bcd4833..a90e307 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Worker.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Worker.java
@@ -722,7 +722,8 @@ public class Worker extends RemoteCompactorThread implements MetaStoreThread {
   LockRequest lockRequest = createLockRequest(ci, txnId);
   LockResponse res = msc.lock(lockRequest);
   if (res.getState() != LockState.ACQUIRED) {
-        throw new TException("Unable to acquire lock(S) on " + ci.getFullPartitionName());
+        throw new TException("Unable to acquire lock(s) on {" + ci.getFullPartitionName()
+            + "}, status {" + res.getState() + "}, reason {" + res.getErrorMessage() + "}");
   }
   lockId = res.getLockid();
 


[hive] branch master updated: HIVE-25773: Column descriptors might not be deleted via direct sql (Yu-Wen Lai reviewed by Peter Vary) (#2843)

2021-12-10 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 2bdf0cc  HIVE-25773: Column descriptors might not be deleted via 
direct sql (Yu-Wen Lai reviewed by Peter Vary) (#2843)
2bdf0cc is described below

commit 2bdf0ccb94f9555dd0c06131a7fb5defcf8010ed
Author: hsnusonic 
AuthorDate: Fri Dec 10 06:16:04 2021 -0800

HIVE-25773: Column descriptors might not be deleted via direct sql (Yu-Wen 
Lai reviewed by Peter Vary) (#2843)
---
 .../hadoop/hive/metastore/MetaStoreDirectSql.java  | 27 +++
 .../hadoop/hive/metastore/TestObjectStore.java | 38 --
 2 files changed, 49 insertions(+), 16 deletions(-)

diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
index d28e630..b200608 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
@@ -29,12 +29,15 @@ import java.sql.Statement;
 import java.text.ParseException;
 import java.util.ArrayList;
 import java.util.Arrays;
+import java.util.Collection;
 import java.util.Collections;
 import java.util.HashMap;
+import java.util.HashSet;
 import java.util.Iterator;
 import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
+import java.util.Set;
 import java.util.TreeMap;
 import java.util.stream.Collectors;
 
@@ -151,7 +154,7 @@ class MetaStoreDirectSql {
* @return The concatenated list
* @throws MetaException If the list contains wrong data
*/
-  public static <T> String getIdListForIn(List<T> objectIds) throws MetaException {
+  public static <T> String getIdListForIn(Collection<T> objectIds) throws MetaException {
     return objectIds.stream()
         .map(i -> i.toString())
         .collect(Collectors.joining(","));
@@ -2622,7 +2625,7 @@ class MetaStoreDirectSql {
 + "WHERE " + PARTITIONS + ".\"PART_ID\" in (" + partitionIds + ")";
 
     List<Long> sdIdList = new ArrayList<>(partitionIdList.size());
-    List<Object> columnDescriptorIdList = new ArrayList<>(1);
+    List<Long> columnDescriptorIdList = new ArrayList<>(1);
     List<Long> serdeIdList = new ArrayList<>(partitionIdList.size());
     try (QueryWrapper query = new QueryWrapper(pm.newQuery("javax.jdo.query.SQL", queryText))) {
       List<Object[]> sqlResult = MetastoreDirectSqlUtils
@@ -2808,7 +2811,7 @@ class MetaStoreDirectSql {
* @throws MetaException If there is an SQL exception during the execution 
it converted to
* MetaException
*/
-  private void dropDanglingColumnDescriptors(List<Object> columnDescriptorIdList)
+  private void dropDanglingColumnDescriptors(List<Long> columnDescriptorIdList)
       throws MetaException {
     if (columnDescriptorIdList.isEmpty()) {
       return;
@@ -2818,26 +2821,24 @@ class MetaStoreDirectSql {
 
 // Drop column descriptor, if no relation left
     queryText =
-        "SELECT " + SDS + ".\"CD_ID\", count(1) "
+        "SELECT " + SDS + ".\"CD_ID\" "
             + "from " + SDS + " "
             + "WHERE " + SDS + ".\"CD_ID\" in (" + colIds + ") "
             + "GROUP BY " + SDS + ".\"CD_ID\"";
-    List<Long> danglingColumnDescriptorIdList = new ArrayList<>(columnDescriptorIdList.size());
+    Set<Long> danglingColumnDescriptorIdSet = new HashSet<>(columnDescriptorIdList);
     try (QueryWrapper query = new QueryWrapper(pm.newQuery("javax.jdo.query.SQL", queryText))) {
-      List<Object[]> sqlResult = MetastoreDirectSqlUtils
-          .ensureList(executeWithArray(query, null, queryText));
+      List<Long> sqlResult = executeWithArray(query, null, queryText);
 
       if (!sqlResult.isEmpty()) {
-        for (Object[] fields : sqlResult) {
-          if (MetastoreDirectSqlUtils.extractSqlInt(fields[1]) == 0) {
-            danglingColumnDescriptorIdList.add(MetastoreDirectSqlUtils.extractSqlLong(fields[0]));
-          }
+        for (Long cdId : sqlResult) {
+          // the returned CD is not dangling, so remove it from the list
+          danglingColumnDescriptorIdSet.remove(cdId);
         }
       }
     }
-    if (!danglingColumnDescriptorIdList.isEmpty()) {
+    if (!danglingColumnDescriptorIdSet.isEmpty()) {
       try {
-        String danglingCDIds = getIdListForIn(danglingColumnDescriptorIdList);
+        String danglingCDIds = getIdListForIn(danglingColumnDescriptorIdSet);
 
 // Drop the columns_v2
 
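The reworked pruning above swaps per-row reference counting (`count(1)` plus `extractSqlInt`) for a set difference: seed a `HashSet` with every candidate `CD_ID`, remove each id the `SDS` query still returns, and whatever remains is dangling. A minimal sketch of that inversion:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DanglingIds {
    // candidates: CD_IDs just detached from dropped partitions;
    // stillReferenced: CD_IDs the SDS table still points at.
    static Set<Long> dangling(List<Long> candidates, List<Long> stillReferenced) {
        Set<Long> dangling = new HashSet<>(candidates); // assume all dangling...
        stillReferenced.forEach(dangling::remove);      // ...until proven referenced
        return dangling;
    }
}
```

The design win is that the SQL no longer needs a per-id count; one pass over the result set suffices.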

[hive] branch master updated: HIVE-25774: Add ASF license for newly created files in standalone-metastore (Zhihua Deng reviewed by Peter Vary) (#2844)

2021-12-07 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new d50cf2b  HIVE-25774: Add ASF license for newly created files in 
standalone-metastore (Zhihua Deng reviewed by Peter Vary) (#2844)
d50cf2b is described below

commit d50cf2b126804c3ddad73ab64262dbedbafd779f
Author: dengzh 
AuthorDate: Tue Dec 7 21:18:09 2021 +0800

HIVE-25774: Add ASF license for newly created files in standalone-metastore 
(Zhihua Deng reviewed by Peter Vary) (#2844)
---
 standalone-metastore/metastore-server/pom.xml  |  1 +
 .../hadoop/hive/metastore/cache/TableCacheObjects.java | 18 ++
 .../dataconnector/AbstractDataConnectorProvider.java   | 18 ++
 .../dataconnector/DataConnectorProviderFactory.java| 18 ++
 .../dataconnector/IDataConnectorProvider.java  | 18 ++
 .../dataconnector/JDBCConnectorProviderFactory.java| 18 ++
 .../jdbc/AbstractJDBCConnectorProvider.java| 18 ++
 .../dataconnector/jdbc/DerbySQLConnectorProvider.java  | 18 ++
 .../dataconnector/jdbc/MySQLConnectorProvider.java | 18 ++
 .../jdbc/PostgreSQLConnectorProvider.java  | 18 ++
 .../hive/metastore/events/DropDataConnectorEvent.java  | 18 ++
 .../metastore/events/PreCreateDataConnectorEvent.java  | 18 ++
 .../messaging/AddDefaultConstraintMessage.java | 18 ++
 .../hadoop/hive/metastore/dbinstall/rules/Mariadb.java | 18 ++
 .../hadoop/hive/metastore/tools/ACIDBenchmarks.java| 18 ++
 .../hadoop/hive/metastore/tools/BenchmarkUtils.java| 18 ++
 .../apache/hadoop/hive/metastore/tools/HMSConfig.java  | 18 ++
 17 files changed, 289 insertions(+)

diff --git a/standalone-metastore/metastore-server/pom.xml b/standalone-metastore/metastore-server/pom.xml
index 4b17c2b..45906e0 100644
--- a/standalone-metastore/metastore-server/pom.xml
+++ b/standalone-metastore/metastore-server/pom.xml
@@ -658,6 +658,7 @@
 **/patchprocess/**
 **/metastore_db/**
 **/test/resources/**/*.ldif
+**/test/resources/sql/**
   
 
   
diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/TableCacheObjects.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/TableCacheObjects.java
index 943c8ce..87f15ee 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/TableCacheObjects.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/TableCacheObjects.java
@@ -1,3 +1,21 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
 package org.apache.hadoop.hive.metastore.cache;
 
 import org.apache.hadoop.hive.metastore.api.AggrStats;
diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/AbstractDataConnectorProvider.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/AbstractDataConnectorProvider.java
index 9a87eb0..295643a 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/AbstractDataConnectorProvider.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/AbstractDataConnectorProvider.java
@@ -1,3 +1,21 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of t

[hive] branch master updated (63bb7b9 -> b5fafcd)

2021-11-29 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 63bb7b9  HIVE-25561: Killed task should not commit file. (#2674) 
(zhengchenyu reviewed by Zoltan Haindrich)
 add b5fafcd  HIVE-25736: Close ORC readers (Peter Vary reviewed by 
Panagiotis Garefalakis, Zhihua Deng and Marton Bod) (#2813)

No new revisions were added by this update.

Summary of changes:
 .../org/apache/hadoop/hive/ql/TestAcidOnTez.java   |   9 +-
 .../hive/ql/txn/compactor/TestCompactor.java   |  14 +-
 .../hadoop/hive/ql/exec/OrcFileMergeOperator.java  |  12 +
 .../hadoop/hive/ql/hooks/PostExecOrcFileDump.java  |   6 +-
 .../hadoop/hive/ql/io/orc/FixAcidKeyIndex.java | 151 +
 .../hadoop/hive/ql/io/orc/OrcFileFormatProxy.java  |  12 +-
 .../hadoop/hive/ql/io/orc/OrcInputFormat.java  |  30 +-
 .../hadoop/hive/ql/io/orc/OrcRawRecordMerger.java  |   6 +-
 .../ql/io/orc/VectorizedOrcAcidRowBatchReader.java | 377 +++--
 .../hive/ql/io/orc/VectorizedOrcInputFormat.java   |   5 +-
 .../hive/ql/io/orc/TestInputOutputFormat.java  |  26 +-
 .../hive/ql/io/orc/TestNewInputOutputFormat.java   |  12 +-
 .../apache/hadoop/hive/ql/io/orc/TestOrcFile.java  |  11 +
 .../hive/ql/io/orc/TestOrcRecordUpdater.java   |   4 +
 .../hadoop/hive/ql/io/orc/TestOrcSerDeStats.java   |   6 +
 .../ql/txn/compactor/CompactorTestUtilities.java   |  15 +-
 .../org/apache/hive/streaming/TestStreaming.java   |  23 +-
 17 files changed, 389 insertions(+), 330 deletions(-)


[hive] branch master updated: HIVE-25683: Close reader in AcidUtils.isRawFormatFile (Peter Vary reviewed by Adam Szita) (#2774)

2021-11-09 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new d85e24c  HIVE-25683: Close reader in AcidUtils.isRawFormatFile (Peter 
Vary reviewed by Adam Szita) (#2774)
d85e24c is described below

commit d85e24c038ba4de996e39e158fe4ea3bfcf7ff9f
Author: pvary 
AuthorDate: Tue Nov 9 22:02:09 2021 +0100

HIVE-25683: Close reader in AcidUtils.isRawFormatFile (Peter Vary reviewed 
by Adam Szita) (#2774)
---
 ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java b/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
index 1870f98..0034328 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
@@ -2496,8 +2496,7 @@ public class AcidUtils {
 }
 
    public static boolean isRawFormatFile(Path dataFile, FileSystem fs) throws IOException {
-      try {
-        Reader reader = OrcFile.createReader(dataFile, OrcFile.readerOptions(fs.getConf()));
+      try (Reader reader = OrcFile.createReader(dataFile, OrcFile.readerOptions(fs.getConf()))) {
 /*
   acid file would have schema like > so could
   check it this way once/if OrcRecordUpdater.ACID_KEY_INDEX_NAME is 
removed

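The one-line fix applies the standard try-with-resources pattern: declaring the `Reader` in the `try` header guarantees `close()` on every exit path, including exceptions, where the old form leaked the reader. A generic illustration with `java.io` types (not the ORC API):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class ReaderLeakFix {
    // Declaring the reader in the try header makes close() run on both the
    // normal and the exceptional path; the pre-fix shape never closed it.
    static String firstLine(String content) throws IOException {
        try (BufferedReader reader = new BufferedReader(new StringReader(content))) {
            return reader.readLine();
        } // reader.close() is invoked here automatically
    }
}
```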

[hive] branch master updated: HIVE-25673: Column pruning fix for MR tasks (Peter Vary reviewed by Marton Bod) (#2765)

2021-11-08 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 85b98a0  HIVE-25673: Column pruning fix for MR tasks (Peter Vary 
reviewed by Marton Bod) (#2765)
85b98a0 is described below

commit 85b98a011f98952f5c5755c7a0c036b48b2bd17a
Author: pvary 
AuthorDate: Mon Nov 8 10:56:06 2021 +0100

HIVE-25673: Column pruning fix for MR tasks (Peter Vary reviewed by Marton 
Bod) (#2765)
---
 .../iceberg/mr/hive/TestHiveIcebergSelects.java| 26 ++
 .../org/apache/iceberg/mr/hive/TestTables.java |  2 +-
 .../apache/hadoop/hive/ql/exec/MapOperator.java| 16 +++--
 3 files changed, 36 insertions(+), 8 deletions(-)

diff --git a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergSelects.java b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergSelects.java
index 84e8b57..96e5dc2 100644
--- a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergSelects.java
+++ b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergSelects.java
@@ -33,6 +33,7 @@ import org.apache.iceberg.types.Types;
 import org.junit.Assert;
 import org.junit.Test;
 
+import static org.apache.iceberg.types.Types.NestedField.optional;
 import static org.apache.iceberg.types.Types.NestedField.required;
 
 /**
@@ -203,4 +204,29 @@ public class TestHiveIcebergSelects extends HiveIcebergStorageHandlerWithEngineB
 Assert.assertArrayEquals(new Object[] {0L, "Alice", "Brown"}, rows.get(0));
 Assert.assertArrayEquals(new Object[] {1L, "Bob", "Green"}, rows.get(1));
   }
+
+  /**
+   * Column pruning could become problematic when a single Map Task contains multiple TableScan operators where
+   * different columns are pruned. This only occurs on MR, as Tez initializes a single Map task for every TableScan
+   * operator.
+   */
+  @Test
+  public void testMultiColumnPruning() throws IOException {
+    shell.setHiveSessionValue("hive.cbo.enable", true);
+
+    Schema schema1 = new Schema(optional(1, "fk", Types.StringType.get()));
+    List<Record> records1 = TestHelper.RecordsBuilder.newInstance(schema1).add("fk1").build();
+    testTables.createTable(shell, "table1", schema1, fileFormat, records1);
+
+    Schema schema2 = new Schema(optional(1, "fk", Types.StringType.get()), optional(2, "val", Types.StringType.get()));
+    List<Record> records2 = TestHelper.RecordsBuilder.newInstance(schema2).add("fk1", "val").build();
+    testTables.createTable(shell, "table2", schema2, fileFormat, records2);
+
+    // MR is needed for the reproduction
+    shell.setHiveSessionValue("hive.execution.engine", "mr");
+    String query = "SELECT t2.val FROM table1 t1 JOIN table2 t2 ON t1.fk = t2.fk";
+    List<Object[]> result = shell.executeStatement(query);
+    Assert.assertEquals(1, result.size());
+    Assert.assertArrayEquals(new Object[]{"val"}, result.get(0));
+  }
 }
diff --git a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestTables.java b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestTables.java
index 5a6f38c..85ba748 100644
--- a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestTables.java
+++ b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestTables.java
@@ -437,7 +437,7 @@ abstract class TestTables {
   }
 
   Assert.assertTrue(location.delete());
-  return location.toString();
+  return "file://" + location;
 }
 
 @Override
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/MapOperator.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/MapOperator.java
index ea8e634..358dbbb 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/MapOperator.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/MapOperator.java
@@ -37,7 +37,6 @@ import org.apache.hadoop.hive.ql.CompilationOpContext;
 import org.apache.hadoop.hive.ql.exec.mr.ExecMapperContext;
 import org.apache.hadoop.hive.ql.io.AcidUtils;
 import org.apache.hadoop.hive.ql.io.RecordIdentifier;
-import org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
 import org.apache.hadoop.hive.ql.metadata.VirtualColumn;
 import org.apache.hadoop.hive.ql.plan.MapWork;
@@ -340,10 +339,10 @@ public class MapOperator extends AbstractMapOperator {
   /**
    * For each source table, combine the nested column pruning information from all its
    * table scan descriptors and set it in a configuration copy. This is necessary since
-   * the configuration property "READ_NESTED_COLUMN_PATH_CONF_STR" is set on a per-table
-   * basis, so we ca

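The (truncated) javadoc above states the crux of the fix: the nested-column-pruning property is stored once per table, so a Map task hosting several TableScan operators over the same table must read the union of every operator's needed columns. A schematic sketch of that union step (names are illustrative, not Hive's API):

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class PrunedColumnMerge {
    // When one Map task hosts several TableScan operators over the same source
    // table, the per-table pruning config must cover the union of every
    // operator's needed columns, or the reader drops columns a scan still needs.
    static Set<String> combineNeededColumns(List<Set<String>> perScanNeeded) {
        Set<String> combined = new TreeSet<>(); // sorted for a stable config value
        perScanNeeded.forEach(combined::addAll);
        return combined;
    }
}
```

This matches the repro in the test above: one scan needs only `fk`, the other needs `fk` and `val`, and the merged set must contain both.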
[hive] branch master updated: HIVE-25613: Port Iceberg Hive fixes to the iceberg module (#2721)

2021-10-22 Thread pvary
This is an automated email from the ASF dual-hosted git repository.

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 3088b77  HIVE-25613: Port Iceberg Hive fixes to the iceberg module 
(#2721)
3088b77 is described below

commit 3088b77ab1d0bd9fd69addf5f859b3c175e28dba
Author: pvary 
AuthorDate: Fri Oct 22 14:13:18 2021 +0200

HIVE-25613: Port Iceberg Hive fixes to the iceberg module (#2721)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Hive: Add table-level JVM lock on commits (#2547)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Core: Move ClientPool and ClientPoolImpl to core (#2491)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Hive: Improve code style (#2641)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Style: Delete blank line of CachedClientPool.java (#2787)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Hive: Fix some message typos in HiveCatalog: Matastore => Metastore (#2950)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Hive: Fix toString NPE with recommended constructor (#3021)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Build: Upgrade to Gradle 7.x (#2826)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Hive: Ensure tableLevelMutex is unlocked when uncommitted metadata delete fails (#3264)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Doc: refactor Hive documentation with catalog loading examples (#2544)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Hive: unify catalog experience across engines (#2565)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Core: Add HadoopConfigurable interface to serialize custom FileIO (#2678)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Move Assert.assertTrue(..) instance checks to AssertJ assertions (#2756)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Core: Support multiple specs in OutputFileFactory (#2858)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - MR: Use SerializableTable in IcebergSplit (#2988)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Fix exception exception message in IcebergInputFormat (#3153)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Data: Fix equality deletes with date/time types (#3135)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Build: Fix ErrorProne UnnecessarilyQualified warnings (#3262)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Build: Fix ErrorProne NewHashMapInt warnings (#3260)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Core: Fail if both Catalog type and catalog-impl are configured (#3162)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - MR: Support imported data in InputFormat using name mapping (#3312)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Hive: Fix Catalog initialization without Configuration (#3252)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Hive: Switch to RetryingHMSClient (allows configuration of retryDelays and retries) (#3099)

* HIVE-25613: Port Iceberg Hive fixes to the iceberg module
Source Iceberg PR - Hive: Fix Catalogs.hiveCatalog method for default catalogs (#3338)
---
 iceberg/iceberg-catalog/pom.xml|   6 +-
 .../org/apache/iceberg/hive/CachedClientPool.java  |  16 +-
 .../java/org/apache/iceberg/hive/HiveCatalog.java  |  38 +--
 .../org/apache/iceberg/hive/HiveClientPool.java|  32 +--
 .../org/apache/iceberg/hive/HiveSchemaUtil.java|   6 +-
 .../apache/iceberg/hive/HiveTableOperations.java   |  40 ++-
 .../org/apache/iceberg/hive/TestHiveCatalog.java   |  30 +++
 .../apache/iceberg/hive/TestHiveClientPool.java|  98 
 .../apache/iceberg/hive/TestHiveCommitLocks.java   |  48 +++-
 iceberg/iceberg-handler/pom.xml|  13 +-
 .../main/java/org/apache/iceberg/mr/Catalogs.java  |  89 ---
 .../org/apache/iceberg/mr/InputFormatConfig.java   |  33 +++
 .../org/apache/iceberg/mr/hive/Deserializer.java   |   3 +-
 .../iceberg/mr/hive/HiveIcebergInputFormat.java| 

[hive] branch master updated: HIVE-25610: Handle partition field comments for Iceberg tables (Peter Vary reviewed by Marton Bod)(#2715)

2021-10-13 Thread pvary

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 65d03fc  HIVE-25610: Handle partition field comments for Iceberg tables (Peter Vary reviewed by Marton Bod)(#2715)
65d03fc is described below

commit 65d03fc3a1e40709645ea22a728c8a88468994d1
Author: pvary 
AuthorDate: Wed Oct 13 12:11:15 2021 +0200

HIVE-25610: Handle partition field comments for Iceberg tables (Peter Vary reviewed by Marton Bod)(#2715)
---
 .../apache/iceberg/mr/hive/HiveIcebergSerDe.java   | 78 --
 .../hive/TestHiveIcebergStorageHandlerNoScan.java  | 28 +++-
 .../org/apache/hadoop/hive/ql/plan/PlanUtils.java  |  2 +
 .../apache/hadoop/hive/serde/serdeConstants.java   |  4 ++
 .../apache/hadoop/hive/serde2/AbstractSerDe.java   | 45 -
 .../hive/metastore/utils/MetaStoreUtils.java   |  2 +-
 6 files changed, 104 insertions(+), 55 deletions(-)

diff --git a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergSerDe.java b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergSerDe.java
index b260a2b..6bd4214 100644
--- a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergSerDe.java
+++ b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergSerDe.java
@@ -19,10 +19,8 @@
 
 package org.apache.iceberg.mr.hive;
 
-import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
-import java.util.Collections;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
@@ -31,19 +29,16 @@ import java.util.stream.Collectors;
 import java.util.stream.IntStream;
 import javax.annotation.Nullable;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hive.metastore.ColumnType;
 import org.apache.hadoop.hive.metastore.api.FieldSchema;
 import org.apache.hadoop.hive.metastore.api.hive_metastoreConstants;
 import org.apache.hadoop.hive.ql.session.SessionStateUtil;
-import org.apache.hadoop.hive.serde.serdeConstants;
 import org.apache.hadoop.hive.serde2.AbstractSerDe;
 import org.apache.hadoop.hive.serde2.ColumnProjectionUtils;
 import org.apache.hadoop.hive.serde2.SerDeException;
 import org.apache.hadoop.hive.serde2.SerDeStats;
-import org.apache.hadoop.hive.serde2.SerDeUtils;
 import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
 import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
-import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils;
+import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;
 import org.apache.hadoop.io.Writable;
 import org.apache.iceberg.PartitionField;
 import org.apache.iceberg.PartitionSpec;
@@ -60,6 +55,7 @@ import org.apache.iceberg.mr.InputFormatConfig;
 import org.apache.iceberg.mr.hive.serde.objectinspector.IcebergObjectInspector;
 import org.apache.iceberg.mr.mapred.Container;
 import org.apache.iceberg.relocated.com.google.common.collect.ImmutableList;
+import org.apache.iceberg.relocated.com.google.common.collect.Lists;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -70,7 +66,6 @@ public class HiveIcebergSerDe extends AbstractSerDe {
   " queryable from Hive, since HMS does not know about it.";
 
  private static final Logger LOG = LoggerFactory.getLogger(HiveIcebergSerDe.class);
-  private static final String LIST_COLUMN_COMMENT = "columns.comments";
 
   private ObjectInspector inspector;
   private Schema tableSchema;
@@ -81,6 +76,8 @@ public class HiveIcebergSerDe extends AbstractSerDe {
   @Override
  public void initialize(@Nullable Configuration configuration, Properties serDeProperties,
                         Properties partitionProperties) throws SerDeException {
+super.initialize(configuration, serDeProperties, partitionProperties);
+
 // HiveIcebergSerDe.initialize is called multiple places in Hive code:
 // - When we are trying to create a table - HiveDDL data is stored at the serDeProperties, but no Iceberg table
 // is created yet.
@@ -113,7 +110,7 @@ public class HiveIcebergSerDe extends AbstractSerDe {
 // provided in the CREATE TABLE query.
 boolean autoConversion = configuration.getBoolean(InputFormatConfig.SCHEMA_AUTO_CONVERSION, false);
 // If we can not load the table try the provided hive schema
-this.tableSchema = hiveSchemaOrThrow(serDeProperties, e, autoConversion);
+this.tableSchema = hiveSchemaOrThrow(e, autoConversion);
 // This is only for table creation, it is ok to have an empty partition column list
 this.partitionColumns = ImmutableList.of();
 // create table for CTAS
@@ -160,15 +157,10 @@ public class HiveIcebergSerDe extends AbstractSerDe {
 serDeProperties.setProperty(InputFormatConfig.TABLE_SCHEMA, SchemaParse

[hive] branch master updated: HIVE-25580: Increase the performance of getTableColumnStatistics and getPartitionColumnStatistics (Peter Vary reviewed by David Mollitor and Zoltan Haindrich) (#2692)

2021-10-11 Thread pvary

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 2d923cb  HIVE-25580: Increase the performance of getTableColumnStatistics and getPartitionColumnStatistics (Peter Vary reviewed by David Mollitor and Zoltan Haindrich) (#2692)
2d923cb is described below

commit 2d923cbd38fff830cde31d7b643a8c28d775379f
Author: pvary 
AuthorDate: Mon Oct 11 13:23:09 2021 +0200

HIVE-25580: Increase the performance of getTableColumnStatistics and getPartitionColumnStatistics (Peter Vary reviewed by David Mollitor and Zoltan Haindrich) (#2692)
---
 .../apache/hadoop/hive/metastore/ObjectStore.java  |  8 +++--
 .../hadoop/hive/metastore/TestObjectStore.java | 35 ++
 2 files changed, 35 insertions(+), 8 deletions(-)

diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
index 164cd5b..590884c 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
@@ -9848,8 +9848,10 @@ public class ObjectStore implements RawStore, Configurable {
 try {
   openTransaction();
   query = pm.newQuery(MTableColumnStatistics.class);
+  query.setFilter("tableName == t1 && dbName == t2 && catName == t3");
+  query.declareParameters("java.lang.String t1, java.lang.String t2, java.lang.String t3");
   query.setResult("DISTINCT engine");
-  Collection names = (Collection) query.execute();
+  Collection names = (Collection) query.execute(tableName, dbName, catName);
   List engines = new ArrayList<>();
   for (Iterator i = names.iterator(); i.hasNext();) {
 engines.add((String) i.next());
@@ -9954,8 +9956,10 @@ public class ObjectStore implements RawStore, Configurable {
 try {
   openTransaction();
   query = pm.newQuery(MPartitionColumnStatistics.class);
+  query.setFilter("tableName == t1 && dbName == t2 && catName == t3");
+  query.declareParameters("java.lang.String t1, java.lang.String t2, java.lang.String t3");
   query.setResult("DISTINCT engine");
-  Collection names = (Collection) query.execute();
+  Collection names = (Collection) query.execute(tableName, dbName, catName);
   List engines = new ArrayList<>();
   for (Iterator i = names.iterator(); i.hasNext();) {
 engines.add((String) i.next());
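The change above attaches a filter and declared parameters to the JDO query, so `DISTINCT engine` is computed only for the requested catalog/database/table instead of over every row of the statistics table. A minimal sketch of the same idea, written here in Python against an in-memory SQLite table (table and column names are hypothetical stand-ins), contrasts the unfiltered query with the parameterized one:

```python
import sqlite3

# Hypothetical stand-in for the partition column statistics table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE part_col_stats (catName TEXT, dbName TEXT, tableName TEXT, engine TEXT)")
conn.executemany(
    "INSERT INTO part_col_stats VALUES (?, ?, ?, ?)",
    [
        ("hive", "db1", "tab1", "hive"),
        ("hive", "db1", "tab1", "impala"),
        ("hive", "db1", "tab2", "spark"),  # different table: must not leak into tab1's result
    ],
)

def engines_unfiltered():
    # Pre-fix behaviour: DISTINCT engine over *all* rows, regardless of table.
    return sorted(r[0] for r in conn.execute("SELECT DISTINCT engine FROM part_col_stats"))

def engines_for_table(cat, db, table):
    # Post-fix behaviour: the filter is bound to declared parameters, analogous to
    # query.setFilter(...) + query.declareParameters(...) + query.execute(tableName, dbName, catName).
    return sorted(r[0] for r in conn.execute(
        "SELECT DISTINCT engine FROM part_col_stats"
        " WHERE catName = ? AND dbName = ? AND tableName = ?",
        (cat, db, table)))

print(engines_unfiltered())                      # ['hive', 'impala', 'spark']
print(engines_for_table("hive", "db1", "tab1"))  # ['hive', 'impala']
```

Restricting the scan to one table is where the performance gain in the commit title comes from: the datastore no longer materializes statistics rows for unrelated tables.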
diff --git a/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestObjectStore.java b/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestObjectStore.java
index bcfac9d..379dcba 100644
--- a/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestObjectStore.java
+++ b/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestObjectStore.java
@@ -24,7 +24,6 @@ import com.google.common.collect.ImmutableSet;
 import org.apache.hadoop.hive.metastore.ObjectStore.RetryingExecutor;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hive.metastore.annotation.MetastoreUnitTest;
-import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
 import org.apache.hadoop.hive.metastore.api.Catalog;
 import org.apache.hadoop.hive.metastore.api.ColumnStatistics;
 import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
@@ -44,6 +43,7 @@ import org.apache.hadoop.hive.metastore.api.InvalidInputException;
 import org.apache.hadoop.hive.metastore.api.InvalidObjectException;
 import org.apache.hadoop.hive.metastore.api.ListPackageRequest;
 import org.apache.hadoop.hive.metastore.api.ListStoredProcedureRequest;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
 import org.apache.hadoop.hive.metastore.api.MetaException;
 import org.apache.hadoop.hive.metastore.api.NoSuchObjectException;
 import org.apache.hadoop.hive.metastore.api.NotificationEvent;
@@ -602,6 +602,28 @@ public class TestObjectStore {
 checkBackendTableSize("SERDES", 1); // Table has a serde
   }
 
+  @Test
+  public void testGetPartitionStatistics() throws Exception {
+createPartitionedTable(true, true);
+
+List> stat;
+try (AutoCloseable c = deadline()) {
+  stat = objectStore.getPartitionColumnStatistics(DEFAULT_CATALOG_NAME, DB1, TABLE1,
+  Arrays.asList("test_part_col=a0", "test_part_col=a1", "test_part_col=a2"),
+  Arrays.asList("test_part_col"));
+

[hive] branch branch-3.1 updated: HIVE-24797: Disable validate default values when parsing Avro schemas (#2699)

2021-10-08 Thread pvary

pvary pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new a7d27c6  HIVE-24797: Disable validate default values when parsing Avro schemas (#2699)
a7d27c6 is described below

commit a7d27c60a4ce685bbd48abbae9a1409faa243b96
Author: Jacob Ilias Komissar 
AuthorDate: Fri Oct 8 09:25:55 2021 -0400

HIVE-24797: Disable validate default values when parsing Avro schemas (#2699)
---
 .../org/apache/hadoop/hive/serde2/avro/AvroSerdeUtils.java | 14 --
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerdeUtils.java b/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerdeUtils.java
index 3b96d30..7addf4b 100644
--- a/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerdeUtils.java
+++ b/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerdeUtils.java
@@ -276,17 +276,20 @@ public class AvroSerdeUtils {
 return dec;
   }
 
+  private static Schema.Parser getSchemaParser() {
+// HIVE-24797: Disable validate default values when parsing Avro schemas.
+return new Schema.Parser().setValidateDefaults(false);
+  }
+
   public static Schema getSchemaFor(String str) {
-Schema.Parser parser = new Schema.Parser();
-Schema schema = parser.parse(str);
+Schema schema = getSchemaParser().parse(str);
 return schema;
   }
 
   public static Schema getSchemaFor(File file) {
-Schema.Parser parser = new Schema.Parser();
 Schema schema;
 try {
-  schema = parser.parse(file);
+  schema = getSchemaParser().parse(file);
 } catch (IOException e) {
   throw new RuntimeException("Failed to parse Avro schema from " + file.getName(), e);
 }
@@ -294,10 +297,9 @@ public class AvroSerdeUtils {
   }
 
   public static Schema getSchemaFor(InputStream stream) {
-Schema.Parser parser = new Schema.Parser();
 Schema schema;
 try {
-  schema = parser.parse(stream);
+  schema = getSchemaParser().parse(stream);
 } catch (IOException e) {
   throw new RuntimeException("Failed to parse Avro schema", e);
 }
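The refactor above moves parser construction into one private `getSchemaParser()` helper, so every `getSchemaFor` overload (string, file, stream) gets the same configuration: default-value validation switched off, which lets schemas written before strict validation keep parsing. A rough sketch of the pattern, in Python with a toy `SchemaParser` stand-in (the class, its flag, and the one validation rule shown are all hypothetical simplifications, not Avro's real checks):

```python
import io
import json

class SchemaParser:
    """Hypothetical stand-in for Avro's Schema.Parser: parses a JSON schema and
    optionally rejects ill-formed default values."""
    def __init__(self):
        self.validate_defaults = True

    def set_validate_defaults(self, flag):
        self.validate_defaults = flag
        return self

    def parse(self, source):
        # Accept a string or a file-like object, like the Java overloads do.
        text = source.read() if hasattr(source, "read") else source
        schema = json.loads(text)
        if self.validate_defaults:
            for field in schema.get("fields", []):
                # Toy check: a null default on a non-null field is rejected.
                if "default" in field and field["default"] is None and field["type"] != "null":
                    raise ValueError("invalid default for field %s" % field["name"])
        return schema

def get_schema_parser():
    # Single place that configures the parser, mirroring the new private helper:
    # validation of defaults is disabled once, for every caller.
    return SchemaParser().set_validate_defaults(False)

def get_schema_for(source):
    # All entry points now share the same configured parser.
    return get_schema_parser().parse(source)

legacy = '{"type": "record", "name": "r", "fields": [{"name": "f", "type": "string", "default": null}]}'
schema = get_schema_for(io.StringIO(legacy))  # parses despite the non-conforming default
```

The design point is DRY configuration: before the fix, each `getSchemaFor` built its own `Schema.Parser`, so a behavioural change would have to be repeated in three places.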


[hive] branch branch-3 updated: HIVE-24797: Disable validate default values when parsing Avro schemas (#2670)

2021-10-06 Thread pvary

pvary pushed a commit to branch branch-3
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/branch-3 by this push:
 new 397e2f5  HIVE-24797: Disable validate default values when parsing Avro schemas (#2670)
397e2f5 is described below

commit 397e2f530fd7b778399c689ceaf65fbbd6e951f8
Author: Jacob Ilias Komissar 
AuthorDate: Wed Oct 6 10:57:39 2021 -0400

HIVE-24797: Disable validate default values when parsing Avro schemas (#2670)
---
 .../org/apache/hadoop/hive/serde2/avro/AvroSerdeUtils.java | 14 --
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerdeUtils.java b/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerdeUtils.java
index d16abdb..e0e7392 100644
--- a/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerdeUtils.java
+++ b/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerdeUtils.java
@@ -276,17 +276,20 @@ public class AvroSerdeUtils {
 return dec;
   }
 
+  private static Schema.Parser getSchemaParser() {
+// HIVE-24797: Disable validate default values when parsing Avro schemas.
+return new Schema.Parser().setValidateDefaults(false);
+  }
+
   public static Schema getSchemaFor(String str) {
-Schema.Parser parser = new Schema.Parser();
-Schema schema = parser.parse(str);
+Schema schema = getSchemaParser().parse(str);
 return schema;
   }
 
   public static Schema getSchemaFor(File file) {
-Schema.Parser parser = new Schema.Parser();
 Schema schema;
 try {
-  schema = parser.parse(file);
+  schema = getSchemaParser().parse(file);
 } catch (IOException e) {
   throw new RuntimeException("Failed to parse Avro schema from " + file.getName(), e);
 }
@@ -294,10 +297,9 @@ public class AvroSerdeUtils {
   }
 
   public static Schema getSchemaFor(InputStream stream) {
-Schema.Parser parser = new Schema.Parser();
 Schema schema;
 try {
-  schema = parser.parse(stream);
+  schema = getSchemaParser().parse(stream);
 } catch (IOException e) {
   throw new RuntimeException("Failed to parse Avro schema", e);
 }


[hive] branch master updated (94a20f8 -> 2b30ced)

2021-10-05 Thread pvary

pvary pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 94a20f8  HIVE-25577: unix_timestamp() is ignoring the time zone value (Ashish Sharma, reviewed by Sankar Hariappan)
 add 2b30ced  HIVE-23633: Close Metastore JDO query objects properly (Zhihua Deng reviewed by David Mollitor and Peter Vary) (#2344)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/hive/metastore/MetaStoreDirectSql.java  | 742 ++---
 .../apache/hadoop/hive/metastore/ObjectStore.java  | 504 +++---
 .../apache/hadoop/hive/metastore/QueryWrapper.java | 430 
 3 files changed, 1001 insertions(+), 675 deletions(-)
 create mode 100644 standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/QueryWrapper.java
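HIVE-23633 introduces a `QueryWrapper` so Metastore JDO query objects are reliably closed, releasing datastore resources even when execution fails. The same resource-safety pattern can be sketched in Python with a context manager (the `Query` class and its methods here are hypothetical stand-ins, not the Metastore API):

```python
from contextlib import contextmanager

class Query:
    """Hypothetical stand-in for a JDO Query that must be closed to free
    datastore resources, as QueryWrapper does in the Metastore."""
    def __init__(self):
        self.closed = False

    def execute(self):
        return ["row1", "row2"]

    def close_all(self):
        self.closed = True

@contextmanager
def wrapped_query(query):
    # Analogue of QueryWrapper implementing AutoCloseable: the query is
    # closed on the way out, whether or not execution raised.
    try:
        yield query
    finally:
        query.close_all()

q = Query()
with wrapped_query(q) as query:
    rows = query.execute()
print(q.closed)  # True
```

Without the wrapper, a `return` or exception between `newQuery` and `closeAll` leaks the query; wrapping makes every exit path close it, which is exactly the class of bug the commit's 1000+ changed lines chase through `ObjectStore` and `MetaStoreDirectSql`.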


[hive] branch master updated (a20ee05 -> 12e25d5)

2021-09-07 Thread pvary

pvary pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from a20ee05  HIVE-25404: Inserts inside merge statements are rewritten incorrectly for partitioned tables (Zoltan Haindrich reviewed by Krisztian Kasa)
 add 12e25d5  HIVE-25504: Fix HMS C++ Thrift client compilation (Peter Vary reviewed by Adam Szita) (#2621)

No new revisions were added by this update.

Summary of changes:
 .../gen/thrift/gen-cpp/hive_metastore_types.cpp| 3144 ++--
 .../src/gen/thrift/gen-cpp/hive_metastore_types.h  |   88 +-
 .../hadoop/hive/metastore/api/AggrStats.java   |   36 +-
 .../hive/metastore/api/ColumnStatistics.java   |   36 +-
 .../hive/metastore/api/CreationMetadata.java   |   32 +-
 .../apache/hadoop/hive/metastore/api/Database.java |   44 +-
 .../hive/metastore/api/EnvironmentContext.java |   44 +-
 .../hadoop/hive/metastore/api/FileMetadata.java|   32 +-
 .../hive/metastore/api/GetCatalogsResponse.java|   32 +-
 .../metastore/api/GetPrincipalsInRoleResponse.java |   36 +-
 .../api/GetRoleGrantsForPrincipalResponse.java |   36 +-
 .../hadoop/hive/metastore/api/HiveObjectRef.java   |   32 +-
 .../hive/metastore/api/ObjectDictionary.java   |   72 +-
 .../hadoop/hive/metastore/api/Partition.java   |   76 +-
 .../metastore/api/PartitionListComposingSpec.java  |   36 +-
 .../metastore/api/PartitionSpecWithSharedSD.java   |   36 +-
 .../hive/metastore/api/PartitionWithoutSD.java |   76 +-
 .../hive/metastore/api/PrincipalPrivilegeSet.java  |  228 +-
 .../hadoop/hive/metastore/api/PrivilegeBag.java|   36 +-
 .../hive/metastore/api/SQLAllTableConstraints.java |  216 +-
 .../apache/hadoop/hive/metastore/api/Schema.java   |   80 +-
 .../hadoop/hive/metastore/api/SerDeInfo.java   |   44 +-
 .../metastore/api/SetPartitionsStatsRequest.java   |   36 +-
 .../hadoop/hive/metastore/api/SkewedInfo.java  |  164 +-
 .../hive/metastore/api/StorageDescriptor.java  |  148 +-
 .../apache/hadoop/hive/metastore/api/Table.java|  144 +-
 .../hive/metastore/api/TruncateTableRequest.java   |   37 +-
 .../org/apache/hadoop/hive/metastore/api/Type.java |   36 +-
 .../src/gen/thrift/gen-php/metastore/AggrStats.php |   20 +-
 .../thrift/gen-php/metastore/ColumnStatistics.php  |   20 +-
 .../thrift/gen-php/metastore/CreationMetadata.php  |   18 +-
 .../src/gen/thrift/gen-php/metastore/Database.php  |   26 +-
 .../gen-php/metastore/EnvironmentContext.php   |   26 +-
 .../gen/thrift/gen-php/metastore/FileMetadata.php  |   18 +-
 .../gen-php/metastore/GetCatalogsResponse.php  |   18 +-
 .../metastore/GetPrincipalsInRoleResponse.php  |   20 +-
 .../GetRoleGrantsForPrincipalResponse.php  |   20 +-
 .../gen/thrift/gen-php/metastore/HiveObjectRef.php |   18 +-
 .../thrift/gen-php/metastore/ObjectDictionary.php  |   44 +-
 .../src/gen/thrift/gen-php/metastore/Partition.php |   44 +-
 .../metastore/PartitionListComposingSpec.php   |   20 +-
 .../metastore/PartitionSpecWithSharedSD.php|   20 +-
 .../gen-php/metastore/PartitionWithoutSD.php   |   44 +-
 .../gen-php/metastore/PrincipalPrivilegeSet.php|  138 +-
 .../gen/thrift/gen-php/metastore/PrivilegeBag.php  |   20 +-
 .../gen-php/metastore/SQLAllTableConstraints.php   |  120 +-
 .../src/gen/thrift/gen-php/metastore/Schema.php|   46 +-
 .../src/gen/thrift/gen-php/metastore/SerDeInfo.php |   26 +-
 .../metastore/SetPartitionsStatsRequest.php|   20 +-
 .../gen/thrift/gen-php/metastore/SkewedInfo.php|   98 +-
 .../thrift/gen-php/metastore/StorageDescriptor.php |   84 +-
 .../src/gen/thrift/gen-php/metastore/Table.php |   82 +-
 .../gen-php/metastore/TruncateTableRequest.php |   18 +-
 .../src/gen/thrift/gen-php/metastore/Type.php  |   20 +-
 .../src/gen/thrift/gen-py/hive_metastore/ttypes.py |  818 ++---
 .../src/gen/thrift/gen-rb/hive_metastore_types.rb  |   36 +-
 .../src/main/thrift/hive_metastore.thrift  |   16 +-
 57 files changed, 3489 insertions(+), 3486 deletions(-)


[hive] branch master updated: Disable flaky test

2021-08-31 Thread pvary

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 36ad119  Disable flaky test
36ad119 is described below

commit 36ad119234cf6336cb2563cbe6da60309bfb169b
Author: Peter Vary 
AuthorDate: Tue Aug 31 13:50:21 2021 +0200

Disable flaky test
---
 .../hive/ql/parse/TestReplicationScenariosIncrementalLoadAcidTables.java | 1 +
 1 file changed, 1 insertion(+)

diff --git a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosIncrementalLoadAcidTables.java b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosIncrementalLoadAcidTables.java
index 3de64bf..575ffc1 100644
--- a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosIncrementalLoadAcidTables.java
+++ b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosIncrementalLoadAcidTables.java
@@ -130,6 +130,7 @@ public class TestReplicationScenariosIncrementalLoadAcidTables {
   }
 
   @Test
+  @org.junit.Ignore("HIVE-25491")
   public void testAcidTableIncrementalReplication() throws Throwable {
 WarehouseInstance.Tuple bootStrapDump = primary.dump(primaryDbName);
 replica.load(replicatedDbName, primaryDbName)


[hive] branch master updated: HIVE-25480: Fix Time Travel with CBO (#2602) (Peter Vary reviewed by Adam Szita)

2021-08-31 Thread pvary

pvary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 5b30cd8  HIVE-25480: Fix Time Travel with CBO (#2602) (Peter Vary reviewed by Adam Szita)
5b30cd8 is described below

commit 5b30cd879b8ef5a4aecbfcaf4366cb8608222909
Author: pvary 
AuthorDate: Tue Aug 31 10:42:36 2021 +0200

HIVE-25480: Fix Time Travel with CBO (#2602) (Peter Vary reviewed by Adam Szita)
---
 .../mr/hive/TestHiveIcebergSchemaEvolution.java|  7 +++---
 .../org/apache/iceberg/mr/hive/TestHiveShell.java  |  2 +-
 .../apache/hadoop/hive/ql/io/HiveInputFormat.java  | 16 +---
 .../org/apache/hadoop/hive/ql/metadata/Table.java  | 22 ++--
 .../optimizer/calcite/translator/ASTBuilder.java   | 13 ++
 .../hadoop/hive/ql/parse/CalcitePlanner.java   |  3 ++-
 .../hadoop/hive/ql/parse/SemanticAnalyzer.java | 11 ++--
 .../apache/hadoop/hive/ql/plan/TableScanDesc.java  | 29 +-
 8 files changed, 55 insertions(+), 48 deletions(-)

diff --git a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergSchemaEvolution.java b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergSchemaEvolution.java
index 96d7850..9b3c941 100644
--- a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergSchemaEvolution.java
+++ b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergSchemaEvolution.java
@@ -127,7 +127,7 @@ public class TestHiveIcebergSchemaEvolution extends HiveIcebergStorageHandlerWit
 shell.executeStatement("ALTER TABLE orders CHANGE COLUMN " +
 "item fruit string");
 List result = shell.executeStatement("SELECT customer_first_name, customer_last_name, SUM(quantity) " +
-"FROM orders where price >= 3 group by customer_first_name, customer_last_name");
+"FROM orders where price >= 3 group by customer_first_name, customer_last_name order by customer_first_name");
 
 assertQueryResult(result, 4,
 "Doctor", "Strange", 900L,
@@ -140,7 +140,8 @@ public class TestHiveIcebergSchemaEvolution extends HiveIcebergStorageHandlerWit
 shell.executeStatement("ALTER TABLE orders ADD COLUMNS (nickname string)");
 shell.executeStatement("INSERT INTO orders VALUES (7, 'Romanoff', 'Natasha', 3, 250, 'apple', 'Black Widow')");
 result = shell.executeStatement("SELECT customer_first_name, customer_last_name, nickname, SUM(quantity) " +
-" FROM orders where price >= 3 group by customer_first_name, customer_last_name, nickname");
+" FROM orders where price >= 3 group by customer_first_name, customer_last_name, nickname " +
+" order by customer_first_name");
 assertQueryResult(result, 5,
 "Doctor", "Strange", null, 900L,
 "Natasha", "Romanoff", "Black Widow", 250L,
@@ -152,7 +153,7 @@ public class TestHiveIcebergSchemaEvolution extends HiveIcebergStorageHandlerWit
 shell.executeStatement("ALTER TABLE orders CHANGE COLUMN fruit fruit string AFTER nickname");
 result = shell.executeStatement("SELECT customer_first_name, customer_last_name, nickname, fruit, SUM(quantity) " +
 " FROM orders where price >= 3 and fruit < 'o' group by customer_first_name, customer_last_name, nickname, " +
-"fruit");
+"fruit order by customer_first_name");
 assertQueryResult(result, 4,
 "Doctor", "Strange", null, "apple", 100L,
 "Natasha", "Romanoff", "Black Widow", "apple", 250L,
diff --git a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveShell.java b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveShell.java
index b3c9440..3d39889 100644
--- a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveShell.java
+++ b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveShell.java
@@ -185,7 +185,7 @@ public class TestHiveShell {
 hiveConf.setIntVar(HiveConf.ConfVars.HIVE_SERVER2_WEBUI_PORT, -1);
 
 // Switch off optimizers in order to contain the map reduction within this JVM
-hiveConf.setBoolVar(HiveConf.ConfVars.HIVE_CBO_ENABLED, false);
+hiveConf.setBoolVar(HiveConf.ConfVars.HIVE_CBO_ENABLED, true);
 hiveConf.setBoolVar(HiveConf.ConfVars.HIVE_INFER_BUCKET_SORT, false);
 hiveConf.setBoolVar(HiveConf.ConfVars.HIVEMETADATAONLYQUERIES, false);
 hiveConf.setBoolVar(HiveConf.ConfVars.HIVEOPTINDEXFILTER, false);
diff --git a

[hive] branch master updated (920f373 -> 32b539f)

2021-08-27 Thread pvary

pvary pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 920f373  HIVE-25407: Advance Write ID during ALTER TABLE (NOT SKEWED, SKEWED BY, SET SKEWED LOCATION, UNSET SERDEPROPERTIES) (Kishen Das via Peter Vary)
 add 32b539f  HIVE-25461: Add a test case to ensure truncate table advances the write id (Kishen Das via Peter Vary)

No new revisions were added by this update.

Summary of changes:
 .../org/apache/hadoop/hive/ql/TestTxnCommands.java | 23 ++
 1 file changed, 23 insertions(+)


[hive] branch master updated (5c53f92 -> 920f373)

2021-08-27 Thread pvary

pvary pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 5c53f92  HIVE-25475: TestStatsReplicationScenarios.testForParallelBootstrapLoad is unstable (Haymant Mangla, reviewed by Mahesh Kumar Behera)
 add 920f373  HIVE-25407: Advance Write ID during ALTER TABLE (NOT SKEWED, SKEWED BY, SET SKEWED LOCATION, UNSET SERDEPROPERTIES) (Kishen Das via Peter Vary)

No new revisions were added by this update.

Summary of changes:
 .../serde/AlterTableUnsetSerdePropsAnalyzer.java|  1 +
 .../serde/AlterTableUnsetSerdePropsDesc.java|  2 +-
 .../storage/skewed/AlterTableNotSkewedDesc.java |  2 +-
 .../skewed/AlterTableSetSkewedLocationAnalyzer.java |  1 +
 .../skewed/AlterTableSetSkewedLocationDesc.java |  2 +-
 .../storage/skewed/AlterTableSkewedByAnalyzer.java  | 10 +++---
 .../storage/skewed/AlterTableSkewedByDesc.java  |  2 +-
 .../org/apache/hadoop/hive/ql/TestTxnCommands.java  | 21 +++--
 8 files changed, 32 insertions(+), 9 deletions(-)

