[hive] branch master updated: HIVE-27271: Client connection to HS2 fails when transportMode=http, ssl=true, sslTrustStore specified without trustStorePassword in the JDBC URL

2023-04-27 Thread dengzh
This is an automated email from the ASF dual-hosted git repository.

dengzh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 36bd69ee10c HIVE-27271: Client connection to HS2 fails when transportMode=http, ssl=true, sslTrustStore specified without trustStorePassword in the JDBC URL
36bd69ee10c is described below

commit 36bd69ee10cce13ab42a750f0577f53f85f28ca7
Author: Venu Reddy <35334869+venureddy2...@users.noreply.github.com>
AuthorDate: Fri Apr 28 04:51:15 2023 +0530

HIVE-27271: Client connection to HS2 fails when transportMode=http, ssl=true, sslTrustStore specified without trustStorePassword in the JDBC URL

Closes #4262
---
 jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java b/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
index fc7542754eb..3865d7b530c 100644
--- a/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
+++ b/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
@@ -803,7 +803,7 @@ public class HiveConnection implements java.sql.Connection {
   }
   sslTrustStore = KeyStore.getInstance(trustStoreType);
   try (FileInputStream fis = new FileInputStream(sslTrustStorePath)) {
-sslTrustStore.load(fis, sslTrustStorePassword.toCharArray());
+sslTrustStore.load(fis, sslTrustStorePassword != null ? sslTrustStorePassword.toCharArray() : null);
   }
   sslContext = SSLContexts.custom().loadTrustMaterial(sslTrustStore, null).build();
   socketFactory =
@@ -1035,7 +1035,7 @@ public class HiveConnection implements java.sql.Connection {
 + " Not configured for 2 way SSL connection");
   }
   try (FileInputStream fis = new FileInputStream(trustStorePath)) {
-sslTrustStore.load(fis, trustStorePassword.toCharArray());
+sslTrustStore.load(fis, trustStorePassword != null ? trustStorePassword.toCharArray() : null);
   }
   trustManagerFactory.init(sslTrustStore);
   SSLContext context = SSLContext.getInstance("TLS");
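
For context, KeyStore.load(InputStream, char[]) accepts a null password and simply skips the keystore integrity check, which is what makes the null pass-through above safe. A minimal sketch of the connection scenario the patch fixes, assuming the hive-jdbc driver is on the classpath (host, port, and truststore path are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;

public class Hs2HttpSslExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; note there is no trustStorePassword parameter.
        // Before HIVE-27271, HiveConnection called toCharArray() on the missing
        // password and failed with a NullPointerException while loading the store.
        String url = "jdbc:hive2://hs2.example.com:10001/default;"
                + "transportMode=http;httpPath=cliservice;ssl=true;"
                + "sslTrustStore=/etc/hive/conf/truststore.jks";
        try (Connection conn = DriverManager.getConnection(url, "hive", "")) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}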



[hive] branch branch-3 updated (ecc9c9cf7b1 -> 5c8ae7bb0be)

2023-04-27 Thread sankarh
This is an automated email from the ASF dual-hosted git repository.

sankarh pushed a change to branch branch-3
in repository https://gitbox.apache.org/repos/asf/hive.git


from ecc9c9cf7b1 HIVE-24653: Race condition between compactor marker generation and get splits (Antal Sinkovits, reviewed by Laszlo Pinter) (#4219)
 add 5c8ae7bb0be HIVE-27247: Backport of HIVE-24436: Fix Avro NULL_DEFAULT_VALUE compatibility issue and HIVE-19662: Upgrade Avro to 1.8.2 (#4218)

No new revisions were added by this update.

Summary of changes:
 hbase-handler/pom.xml                                             | 4 ++--
 pom.xml                                                           | 2 +-
 .../java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java | 5 +++--
 3 files changed, 6 insertions(+), 5 deletions(-)



[hive] branch branch-3 updated (ac7631680ef -> ecc9c9cf7b1)

2023-04-27 Thread sankarh
This is an automated email from the ASF dual-hosted git repository.

sankarh pushed a change to branch branch-3
in repository https://gitbox.apache.org/repos/asf/hive.git


from ac7631680ef HIVE-27058: Backport of HIVE-24316: ORC upgrade to 1.5.8 and HIVE-24391: TestORCFile fix (#4192)
 add ecc9c9cf7b1 HIVE-24653: Race condition between compactor marker generation and get splits (Antal Sinkovits, reviewed by Laszlo Pinter) (#4219)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/hive/ql/txn/compactor/CompactorMR.java   | 21 ++---
 1 file changed, 10 insertions(+), 11 deletions(-)



[hive] branch branch-3 updated (7b2b35a4ead -> ac7631680ef)

2023-04-27 Thread sankarh
This is an automated email from the ASF dual-hosted git repository.

sankarh pushed a change to branch branch-3
in repository https://gitbox.apache.org/repos/asf/hive.git


from 7b2b35a4ead HIVE-27282 : Backport of HIVE-21717 : Rename is failing for directory in move task (Aman Raj reviewed by Vihang Karajgaonkar)
 add ac7631680ef HIVE-27058: Backport of HIVE-24316: ORC upgrade to 1.5.8 and HIVE-24391: TestORCFile fix (#4192)

No new revisions were added by this update.

Summary of changes:
 pom.xml                                            |  2 +-
 .../apache/hadoop/hive/ql/io/orc/TestOrcFile.java  | 81 --
 .../hive/ql/io/orc/TestOrcRawRecordMerger.java     | 40 ++-
 3 files changed, 36 insertions(+), 87 deletions(-)



[hive] branch master updated: HIVE-27266: Retrieve only partNames if not need drop data in HMSHandler.dropPartitionsAndGetLocations (Wechar Yu, reviewed by Attila Turoczy, Denys Kuzmenko)

2023-04-27 Thread dkuzmenko
This is an automated email from the ASF dual-hosted git repository.

dkuzmenko pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 4d9fdf211a7 HIVE-27266: Retrieve only partNames if not need drop data in HMSHandler.dropPartitionsAndGetLocations (Wechar Yu, reviewed by Attila Turoczy, Denys Kuzmenko)
4d9fdf211a7 is described below

commit 4d9fdf211a71f67457302446d78f1183a44f074d
Author: Wechar Yu 
AuthorDate: Thu Apr 27 23:05:25 2023 +0800

HIVE-27266: Retrieve only partNames if not need drop data in HMSHandler.dropPartitionsAndGetLocations (Wechar Yu, reviewed by Attila Turoczy, Denys Kuzmenko)

Closes #4238
---
 .../apache/hadoop/hive/metastore/HMSHandler.java   | 24 -
 .../hadoop/hive/metastore/tools/BenchmarkTool.java |  5 +
 .../hadoop/hive/metastore/tools/HMSBenchmarks.java | 25 ++
 .../hadoop/hive/metastore/tools/HMSClient.java |  7 +-
 4 files changed, 50 insertions(+), 11 deletions(-)

diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
index 15bb8d82245..308396ef3fe 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
@@ -3132,15 +3132,12 @@ public class HMSHandler extends FacebookBase implements IHMSHandler {
 
     List<Path> partPaths = new ArrayList<>();
     while (true) {
-      Map<String, String> partitionLocations = ms.getPartitionLocations(catName, dbName, tableName,
-          tableDnsPath, batchSize);
-      if (partitionLocations == null || partitionLocations.isEmpty()) {
-        // No more partitions left to drop. Return with the collected path list to delete.
-        return partPaths;
-      }
-
+      List<String> partNames;
       if (checkLocation) {
-        for (String partName : partitionLocations.keySet()) {
+        Map<String, String> partitionLocations = ms.getPartitionLocations(catName, dbName, tableName,
+            tableDnsPath, batchSize);
+        partNames = new ArrayList<>(partitionLocations.keySet());
+        for (String partName : partNames) {
           String pathString = partitionLocations.get(partName);
           if (pathString != null) {
             Path partPath = wh.getDnsPath(new Path(pathString));
@@ -3157,19 +3154,26 @@ public class HMSHandler extends FacebookBase implements IHMSHandler {
             }
           }
         }
+      } else {
+        partNames = ms.listPartitionNames(catName, dbName, tableName, (short) batchSize);
+      }
+
+      if (partNames == null || partNames.isEmpty()) {
+        // No more partitions left to drop. Return with the collected path list to delete.
+        return partPaths;
       }

       for (MetaStoreEventListener listener : listeners) {
         //No drop part listener events fired for public listeners historically, for drop table case.
         //Limiting to internal listeners for now, to avoid unexpected calls for public listeners.
         if (listener instanceof HMSMetricsListener) {
-          for (@SuppressWarnings("unused") String partName : partitionLocations.keySet()) {
+          for (@SuppressWarnings("unused") String partName : partNames) {
             listener.onDropPartition(null);
           }
         }
       }

-      ms.dropPartitions(catName, dbName, tableName, new ArrayList<>(partitionLocations.keySet()));
+      ms.dropPartitions(catName, dbName, tableName, partNames);
     }
   }
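
Reduced to its control flow, the reworked loop takes one of two metadata paths per batch; a simplified sketch (locals and path-collection details elided, signatures abbreviated from the diff above):

// Sketch of the batched drop loop after HIVE-27266: a metadata-only drop
// (checkLocation == false) fetches just partition names and never pays for
// the per-partition location lookup.
List<Path> partPaths = new ArrayList<>();
while (true) {
  List<String> partNames;
  if (checkLocation) {
    // Data will be deleted: locations are still needed.
    Map<String, String> partitionLocations =
        ms.getPartitionLocations(catName, dbName, tableName, tableDnsPath, batchSize);
    partNames = new ArrayList<>(partitionLocations.keySet());
    // ... collect non-default paths into partPaths (see diff above) ...
  } else {
    // Metadata-only drop: names are sufficient.
    partNames = ms.listPartitionNames(catName, dbName, tableName, (short) batchSize);
  }
  if (partNames == null || partNames.isEmpty()) {
    return partPaths;   // no partitions left to drop
  }
  ms.dropPartitions(catName, dbName, tableName, partNames);
}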
 
diff --git a/standalone-metastore/metastore-tools/metastore-benchmarks/src/main/java/org/apache/hadoop/hive/metastore/tools/BenchmarkTool.java b/standalone-metastore/metastore-tools/metastore-benchmarks/src/main/java/org/apache/hadoop/hive/metastore/tools/BenchmarkTool.java
index 943b87e9c97..025b39339bc 100644
--- a/standalone-metastore/metastore-tools/metastore-benchmarks/src/main/java/org/apache/hadoop/hive/metastore/tools/BenchmarkTool.java
+++ b/standalone-metastore/metastore-tools/metastore-benchmarks/src/main/java/org/apache/hadoop/hive/metastore/tools/BenchmarkTool.java
@@ -48,6 +48,7 @@ import static org.apache.hadoop.hive.metastore.tools.Constants.HMS_DEFAULT_PORT;
 import static org.apache.hadoop.hive.metastore.tools.HMSBenchmarks.benchmarkCreatePartition;
 import static org.apache.hadoop.hive.metastore.tools.HMSBenchmarks.benchmarkCreatePartitions;
 import static org.apache.hadoop.hive.metastore.tools.HMSBenchmarks.benchmarkDeleteCreate;
+import static org.apache.hadoop.hive.metastore.tools.HMSBenchmarks.benchmarkDeleteMetaOnlyWithPartitions;
 import static org.apache.hadoop.hive.metastore.tools.HMSBenchmarks.benchmarkDeleteWithPartitions;
 import static

[hive] branch master updated (6331e982e21 -> c6d32302f11)

2023-04-27 Thread dkuzmenko
This is an automated email from the ASF dual-hosted git repository.

dkuzmenko pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


from 6331e982e21 HIVE-27273: Iceberg: Upgrade iceberg to 1.2.1 (Butao Zhang, reviewed by Peter Vary, Zsolt Miskolczi, Denys Kuzmenko)
 add c6d32302f11 HIVE-27287: Upgrade Commons-text to 1.10.0 to fix CVE (Raghav Aggarwal, reviewed by Denys Kuzmenko, Attila Turoczy)

No new revisions were added by this update.

Summary of changes:
 pom.xml    | 2 +-
 ql/pom.xml | 1 -
 2 files changed, 1 insertion(+), 2 deletions(-)



[hive] branch master updated (466fdaa2834 -> 6331e982e21)

2023-04-27 Thread dkuzmenko
This is an automated email from the ASF dual-hosted git repository.

dkuzmenko pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


from 466fdaa2834 HIVE-27290 : Upgrade json-path to 2.8.0 (Reshma Fegade, reviewed by Laszlo Vegh)
 add 6331e982e21 HIVE-27273: Iceberg: Upgrade iceberg to 1.2.1 (Butao Zhang, reviewed by Peter Vary, Zsolt Miskolczi, Denys Kuzmenko)

No new revisions were added by this update.

Summary of changes:
 .../iceberg/mr/hive/HiveIcebergFilterFactory.java  |   6 +
 .../mr/hive/TestHiveIcebergFilterFactory.java  |  87 
 .../describe_iceberg_metadata_tables.q.out |  18 +
 .../positive/dynamic_partition_writes.q.out|  60 +--
 ...y_iceberg_metadata_of_unpartitioned_table.q.out | Bin 34884 -> 39832 bytes
 iceberg/patched-iceberg-core/pom.xml   |   1 -
 .../apache/iceberg/BaseUpdatePartitionSpec.java| 555 -
 iceberg/pom.xml|   2 +-
 8 files changed, 142 insertions(+), 587 deletions(-)
 delete mode 100644 iceberg/patched-iceberg-core/src/main/java/org/apache/iceberg/BaseUpdatePartitionSpec.java



[hive] branch master updated (d01b1860c42 -> 466fdaa2834)

2023-04-27 Thread veghlaci05
This is an automated email from the ASF dual-hosted git repository.

veghlaci05 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


from d01b1860c42 HIVE-27197: Iceberg:Support Iceberg version travel by reference name (Butao Zhang, reviewed by Denys Kuzmenko)
 add 466fdaa2834 HIVE-27290 : Upgrade json-path to 2.8.0 (Reshma Fegade, reviewed by Laszlo Vegh)

No new revisions were added by this update.

Summary of changes:
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[hive] branch master updated: HIVE-27197: Iceberg:Support Iceberg version travel by reference name (Butao Zhang, reviewed by Denys Kuzmenko)

2023-04-27 Thread dkuzmenko
This is an automated email from the ASF dual-hosted git repository.

dkuzmenko pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new d01b1860c42 HIVE-27197: Iceberg:Support Iceberg version travel by reference name (Butao Zhang, reviewed by Denys Kuzmenko)
d01b1860c42 is described below

commit d01b1860c42f3d61009e1c23d4947bc74138ad0f
Author: Butao Zhang 
AuthorDate: Thu Apr 27 22:05:41 2023 +0800

HIVE-27197: Iceberg:Support Iceberg version travel by reference name (Butao Zhang, reviewed by Denys Kuzmenko)

Closes #4173
---
 .../iceberg/mr/mapreduce/IcebergInputFormat.java   | 13 +-
 .../iceberg/mr/hive/TestHiveIcebergTimeTravel.java | 30 ++
 .../apache/hadoop/hive/ql/parse/FromClauseParser.g |  2 +-
 .../hadoop/hive/ql/parse/SemanticAnalyzer.java | 30 --
 4 files changed, 60 insertions(+), 15 deletions(-)

diff --git a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/mapreduce/IcebergInputFormat.java b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/mapreduce/IcebergInputFormat.java
index 4ad91ea6858..3e32c11e6d3 100644
--- a/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/mapreduce/IcebergInputFormat.java
+++ b/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/mapreduce/IcebergInputFormat.java
@@ -56,6 +56,7 @@ import org.apache.iceberg.Scan;
 import org.apache.iceberg.Schema;
 import org.apache.iceberg.SchemaParser;
 import org.apache.iceberg.SerializableTable;
+import org.apache.iceberg.SnapshotRef;
 import org.apache.iceberg.StructLike;
 import org.apache.iceberg.Table;
 import org.apache.iceberg.TableProperties;
@@ -114,7 +115,17 @@ public class IcebergInputFormat extends InputFormat {
   private static TableScan createTableScan(Table table, Configuration conf) {
 TableScan scan = table.newScan();
 
-long snapshotId = conf.getLong(InputFormatConfig.SNAPSHOT_ID, -1);
+long snapshotId = -1;
+try {
+  snapshotId = conf.getLong(InputFormatConfig.SNAPSHOT_ID, -1);
+} catch (NumberFormatException e) {
+  String version = conf.get(InputFormatConfig.SNAPSHOT_ID);
+  SnapshotRef ref = table.refs().get(version);
+  if (ref == null) {
+throw new RuntimeException("Cannot find matching snapshot ID or 
reference name for version " + version);
+  }
+  snapshotId = ref.snapshotId();
+}
 if (snapshotId != -1) {
   scan = scan.useSnapshot(snapshotId);
 }
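
At the SQL level, the change lets a branch or tag name appear wherever a snapshot ID was accepted in time travel. A minimal JDBC sketch (hypothetical endpoint and table; the FOR SYSTEM_VERSION AS OF syntax matches the test added below):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IcebergBranchTravelExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical HS2 endpoint; 'customers' is assumed to be an Iceberg
        // table with a branch named 'main_branch', e.g. created via
        // table.manageSnapshots().createBranch("main_branch", snapshotId).commit().
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hs2.example.com:10000/default", "hive", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT * FROM customers FOR SYSTEM_VERSION AS OF 'main_branch'")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}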
diff --git a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergTimeTravel.java b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergTimeTravel.java
index be865817d13..7c64eb68c93 100644
--- a/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergTimeTravel.java
+++ b/iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergTimeTravel.java
@@ -87,6 +87,36 @@ public class TestHiveIcebergTimeTravel extends HiveIcebergStorageHandlerWithEngi
 }
   }
 
+  @Test
+  public void testSelectAsOfBranchReference() throws IOException, InterruptedException {
+    Table table = testTables.createTableWithVersions(shell, "customers",
+        HiveIcebergStorageHandlerTestUtils.CUSTOMER_SCHEMA,
+        fileFormat, HiveIcebergStorageHandlerTestUtils.CUSTOMER_RECORDS, 2);
+
+    long firstSnapshotId = table.history().get(0).snapshotId();
+    table.manageSnapshots().createBranch("main_branch", firstSnapshotId).commit();
+    List<Object[]> rows =
+        shell.executeStatement("SELECT * FROM customers FOR SYSTEM_VERSION AS OF 'main_branch'");
+
+    Assert.assertEquals(3, rows.size());
+
+    long secondSnapshotId = table.history().get(1).snapshotId();
+    table.manageSnapshots().createBranch("test_branch", secondSnapshotId).commit();
+    rows = shell.executeStatement("SELECT * FROM customers FOR SYSTEM_VERSION AS OF 'test_branch'");
+
+    Assert.assertEquals(4, rows.size());
+
+    try {
+      shell.executeStatement("SELECT * FROM customers FOR SYSTEM_VERSION AS OF 'unknown_branch'");
+    } catch (Throwable e) {
+      while (e.getCause() != null) {
+        e = e.getCause();
+      }
+      Assert.assertTrue(e.getMessage().contains("Cannot find matching snapshot ID or reference name for " +
+          "version unknown_branch"));
+    }
+  }
+
   @Test
   public void testCTASAsOfVersionAndTimestamp() throws IOException, InterruptedException {
 Table table = testTables.createTableWithVersions(shell, "customers",
diff --git a/parser/src/java/org/apache/hadoop/hive/ql/parse/FromClauseParser.g b/parser/src/java/org/apache/hadoop/hive/ql/parse/FromClauseParser.g
index abeb38305dc..c1dc2224274 100644
--- a/parser/src/java/org/apache/hadoop/hive/ql/parse/FromClauseParser.g
+++ b/parser/src/java/org/apache/hadoop/hive/ql/parse/FromClauseParser.g
@@ -220,7 +220,7 @@ 

[hive] branch master updated: HIVE-26779: UNION ALL throws SemanticException when trying to remove partition predicates: fail to find child from parent (Krisztian Kasa, reviewed by Denys Kuzmenko, Att

2023-04-27 Thread krisztiankasa
This is an automated email from the ASF dual-hosted git repository.

krisztiankasa pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new e9372da6e66 HIVE-26779: UNION ALL throws SemanticException when trying to remove partition predicates: fail to find child from parent (Krisztian Kasa, reviewed by Denys Kuzmenko, Attila Turoczy)
e9372da6e66 is described below

commit e9372da6e666b73540b592a9c1c6dbdf7db83e43
Author: Krisztian Kasa 
AuthorDate: Thu Apr 27 13:49:41 2023 +0200

HIVE-26779: UNION ALL throws SemanticException when trying to remove partition predicates: fail to find child from parent (Krisztian Kasa, reviewed by Denys Kuzmenko, Attila Turoczy)
---
 .../apache/hadoop/hive/ql/parse/GenTezUtils.java   |   1 +
 .../queries/clientpositive/lateral_view_unionall.q |  29 +++
 .../llap/lateral_view_unionall.q.out   | 225 +
 3 files changed, 255 insertions(+)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java b/ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java
index ddf0be3e0a4..8da598705d4 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java
@@ -445,6 +445,7 @@ public class GenTezUtils {
   replacementMap.put(current, current.getChildOperators().get(0));
 } else {
   parent.removeChildAndAdoptItsChildren(current);
+  operators.remove(current);
 }
   }
 
diff --git a/ql/src/test/queries/clientpositive/lateral_view_unionall.q b/ql/src/test/queries/clientpositive/lateral_view_unionall.q
new file mode 100644
index 000..d4591e8fdcd
--- /dev/null
+++ b/ql/src/test/queries/clientpositive/lateral_view_unionall.q
@@ -0,0 +1,29 @@
+create table tez_test_t1(md_exper string);
+insert into tez_test_t1 values('tez_test_t1-md_expr');
+
+create table tez_test_t5(md_exper string, did string);
+insert into tez_test_t5 values('tez_test_t5-md_expr','tez_test_t5-did');
+
+create table tez_test_t2(did string);
+insert into tez_test_t2 values('tez_test_t2-did');
+
+explain
+SELECT NULL AS first_login_did
+   FROM tez_test_t5
+   LATERAL VIEW explode(split('0,6', ',')) gaps AS ads_h5_gap
+UNION ALL
+SELECT  null as first_login_did
+FROM tez_test_t1
+UNION ALL
+   SELECT did AS first_login_did
+   FROM tez_test_t2;
+
+SELECT NULL AS first_login_did
+   FROM tez_test_t5
+   LATERAL VIEW explode(split('0,6', ',')) gaps AS ads_h5_gap
+UNION ALL
+SELECT  null as first_login_did
+FROM tez_test_t1
+UNION ALL
+   SELECT did AS first_login_did
+   FROM tez_test_t2;
diff --git a/ql/src/test/results/clientpositive/llap/lateral_view_unionall.q.out b/ql/src/test/results/clientpositive/llap/lateral_view_unionall.q.out
new file mode 100644
index 000..04f260d86be
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/lateral_view_unionall.q.out
@@ -0,0 +1,225 @@
+PREHOOK: query: create table tez_test_t1(md_exper string)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@tez_test_t1
+POSTHOOK: query: create table tez_test_t1(md_exper string)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@tez_test_t1
+PREHOOK: query: insert into tez_test_t1 values('tez_test_t1-md_expr')
+PREHOOK: type: QUERY
+PREHOOK: Input: _dummy_database@_dummy_table
+PREHOOK: Output: default@tez_test_t1
+POSTHOOK: query: insert into tez_test_t1 values('tez_test_t1-md_expr')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: _dummy_database@_dummy_table
+POSTHOOK: Output: default@tez_test_t1
+POSTHOOK: Lineage: tez_test_t1.md_exper SCRIPT []
+PREHOOK: query: create table tez_test_t5(md_exper string, did string)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@tez_test_t5
+POSTHOOK: query: create table tez_test_t5(md_exper string, did string)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@tez_test_t5
+PREHOOK: query: insert into tez_test_t5 values('tez_test_t5-md_expr','tez_test_t5-did')
+PREHOOK: type: QUERY
+PREHOOK: Input: _dummy_database@_dummy_table
+PREHOOK: Output: default@tez_test_t5
+POSTHOOK: query: insert into tez_test_t5 values('tez_test_t5-md_expr','tez_test_t5-did')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: _dummy_database@_dummy_table
+POSTHOOK: Output: default@tez_test_t5
+POSTHOOK: Lineage: tez_test_t5.did SCRIPT []
+POSTHOOK: Lineage: tez_test_t5.md_exper SCRIPT []
+PREHOOK: query: create table tez_test_t2(did string)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@tez_test_t2
+POSTHOOK: query: create table tez_test_t2(did string)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@tez_test_t2
+PREHOOK: query: insert into tez_test_t2 values('tez_test_t2-did')
+PREHOOK: 

[hive] branch master updated (d2ce078f2d8 -> 46e3ae1b7fe)

2023-04-27 Thread dkuzmenko
This is an automated email from the ASF dual-hosted git repository.

dkuzmenko pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


from d2ce078f2d8 HIVE-27295: Improve docker logging in AbstractExternalDB and DatabaseRule (#4268) (Laszlo Bodor reviewed by Stamatis Zampetakis)
 add 46e3ae1b7fe HIVE-27300: Upgrade Parquet to 1.13.0 (Fokko Driesprong, reviewed by Denys Kuzmenko)

No new revisions were added by this update.

Summary of changes:
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[hive] branch master updated: HIVE-27295: Improve docker logging in AbstractExternalDB and DatabaseRule (#4268) (Laszlo Bodor reviewed by Stamatis Zampetakis)

2023-04-27 Thread abstractdog
This is an automated email from the ASF dual-hosted git repository.

abstractdog pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new d2ce078f2d8 HIVE-27295: Improve docker logging in AbstractExternalDB and DatabaseRule (#4268) (Laszlo Bodor reviewed by Stamatis Zampetakis)
d2ce078f2d8 is described below

commit d2ce078f2d8584f39fbf8329c88c544f019464f8
Author: Bodor Laszlo 
AuthorDate: Thu Apr 27 09:08:09 2023 +0200

HIVE-27295: Improve docker logging in AbstractExternalDB and DatabaseRule (#4268) (Laszlo Bodor reviewed by Stamatis Zampetakis)
---
 .../hive/ql/externalDB/AbstractExternalDB.java  | 39 ++
 .../metastore/dbinstall/rules/DatabaseRule.java | 34 +--
 2 files changed, 48 insertions(+), 25 deletions(-)

diff --git a/itests/util/src/main/java/org/apache/hadoop/hive/ql/externalDB/AbstractExternalDB.java b/itests/util/src/main/java/org/apache/hadoop/hive/ql/externalDB/AbstractExternalDB.java
index f328bfc4bc6..48da3344277 100644
--- a/itests/util/src/main/java/org/apache/hadoop/hive/ql/externalDB/AbstractExternalDB.java
+++ b/itests/util/src/main/java/org/apache/hadoop/hive/ql/externalDB/AbstractExternalDB.java
@@ -83,14 +83,12 @@ public abstract class AbstractExternalDB {
 return new String[] { "docker", "logs", getDockerContainerName() };
 }
 
-
 private ProcessResults runCmd(String[] cmd, long secondsToWait)
 throws IOException, InterruptedException {
 LOG.info("Going to run: " + String.join(" ", cmd));
 Process proc = Runtime.getRuntime().exec(cmd);
-if (!proc.waitFor(secondsToWait, TimeUnit.SECONDS)) {
-throw new RuntimeException(
-"Process " + cmd[0] + " failed to run in " + secondsToWait + " seconds");
+if (!proc.waitFor(Math.abs(secondsToWait), TimeUnit.SECONDS)) {
+  throw new RuntimeException("Process " + cmd[0] + " failed to run in " + secondsToWait + " seconds");
 }
 BufferedReader reader = new BufferedReader(new InputStreamReader(proc.getInputStream()));
 final StringBuilder lines = new StringBuilder();
@@ -99,41 +97,54 @@ public abstract class AbstractExternalDB {
 reader = new BufferedReader(new InputStreamReader(proc.getErrorStream()));
 final StringBuilder errLines = new StringBuilder();
 reader.lines().forEach(s -> errLines.append(s).append('\n'));
-LOG.info("Result size: " + lines.length() + ";" + errLines.length());
+LOG.info("Result lines#: {}(stdout);{}(stderr)",lines.length(), 
errLines.length());
 return new ProcessResults(lines.toString(), errLines.toString(), proc.exitValue());
 }
 
-private int runCmdAndPrintStreams(String[] cmd, long secondsToWait)
+private ProcessResults runCmdAndPrintStreams(String[] cmd, long secondsToWait)
 throws InterruptedException, IOException {
 ProcessResults results = runCmd(cmd, secondsToWait);
 LOG.info("Stdout from proc: " + results.stdout);
 LOG.info("Stderr from proc: " + results.stderr);
-return results.rc;
+return results;
 }
 
 
 public void launchDockerContainer() throws Exception {
 runCmdAndPrintStreams(buildRmCmd(), 600);
-if (runCmdAndPrintStreams(buildRunCmd(), 600) != 0) {
-throw new RuntimeException("Unable to start docker container");
+if (runCmdAndPrintStreams(buildRunCmd(), 600).rc != 0) {
+  printDockerEvents();
+  throw new RuntimeException("Unable to start docker container");
 }
 long startTime = System.currentTimeMillis();
 ProcessResults pr;
 do {
 Thread.sleep(1000);
-pr = runCmd(buildLogCmd(), 30);
+pr = runCmdAndPrintStreams(buildLogCmd(), 30);
 if (pr.rc != 0) {
-throw new RuntimeException("Failed to get docker logs");
+  printDockerEvents();
+  throw new RuntimeException("Failed to get docker logs");
 }
 } while (startTime + MAX_STARTUP_WAIT >= System.currentTimeMillis() && !isContainerReady(pr));
 if (startTime + MAX_STARTUP_WAIT < System.currentTimeMillis()) {
-throw new RuntimeException("Container failed to be ready in " + 
MAX_STARTUP_WAIT/1000 +
-" seconds");
+  printDockerEvents();
+  throw new RuntimeException(
+  String.format("Container initialization failed within %d 
seconds. Please check the hive logs.",
+  MAX_STARTUP_WAIT / 1000));
 }
+  }
+
+protected void printDockerEvents() {
+  try {
+runCmdAndPrintStreams(new String[] { "docker", "events", "--since", "24h", "--until", "0s" }, 3);
+  } catch (Exception e) {
+LOG.warn("A problem was encountered while attempting to retrieve