[hive] branch master updated: Revert "HIVE-23689: Bump Tez version to 0.9.2 (#1108)" (#1148)

2020-06-19 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 7cdf362  Revert "HIVE-23689: Bump Tez version to 0.9.2 (#1108)" (#1148)
7cdf362 is described below

commit 7cdf362f9f1a53025b371a8d5122336ded71d692
Author: Zoltan Haindrich 
AuthorDate: Fri Jun 19 08:59:30 2020 +0200

Revert "HIVE-23689: Bump Tez version to 0.9.2 (#1108)" (#1148)

This reverts commit 10c658419ba23dbfbf282b313dc668e48b1f1c77.
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index bb93b52..2a0c328 100644
--- a/pom.xml
+++ b/pom.xml
@@ -194,7 +194,7 @@
 1.7.30
 4.0.4
 2.7.0-SNAPSHOT
-0.9.2
+0.9.1
 2.2.0
 2.4.5
 2.12
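The revert above flips a single Maven version property back from 0.9.2 to 0.9.1. The archived diff has lost its XML tags, so the property name is not visible; assuming the conventional name `tez.version` (an assumption, not confirmed by the stripped diff), the post-revert state of `pom.xml` would look like:

```xml
<!-- Sketch only: the property name tez.version is an assumption, since the
     archive stripped the XML tags from the diff above. -->
<properties>
  <tez.version>0.9.1</tez.version> <!-- was 0.9.2 before the revert -->
</properties>
```

Modules then reference the property as `${tez.version}` in their Tez dependency declarations, so one edit moves the whole build.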



[hive] branch master updated (9ec1470 -> e3f2dfd)

2020-06-17 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 9ec1470  HIVE-23585: Retrieve replication instance metrics details 
(Aasha Medhi, reviewed by Pravin Kumar Sinha)
 add e3f2dfd  HIVE-23627: Review of GroupByOperator (#1067)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/hive/ql/exec/GroupByOperator.java   | 98 ++
 1 file changed, 25 insertions(+), 73 deletions(-)



[hive] branch master updated: HIVE-23711: Some IDE generated files should not be checked for license header by rat plugin (#1136)

2020-06-17 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 9c5edfd  HIVE-23711: Some IDE generated files should not be checked 
for license header by rat plugin (#1136)
9c5edfd is described below

commit 9c5edfda8349f5868ed926a03aa5d9abbdb0c8df
Author: Bodor Laszlo 
AuthorDate: Wed Jun 17 14:46:21 2020 +0200

HIVE-23711: Some IDE generated files should not be checked for license 
header by rat plugin (#1136)
---
 pom.xml | 4 
 1 file changed, 4 insertions(+)

diff --git a/pom.xml b/pom.xml
index 062ffe8..44fff7d 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1451,6 +1451,10 @@
   **/*.iml
**/*.txt
**/*.log
+   **/.factorypath
+   **/.classpath
+   **/.project
+   **/.settings/**
**/*.arcconfig
**/package-info.java
**/*.properties
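The diff above adds four IDE-generated paths to the license-check exclusion list. In the real `pom.xml` these entries sit inside the Apache Rat plugin configuration; a hedged sketch of the surrounding structure (the wrapper element names are assumed, since the archive stripped the XML tags):

```xml
<!-- Sketch only: the exclude wrapper elements are assumed, not taken from
     the stripped diff above. -->
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- IDE-generated files that carry no license header -->
      <exclude>**/.factorypath</exclude>
      <exclude>**/.classpath</exclude>
      <exclude>**/.project</exclude>
      <exclude>**/.settings/**</exclude>
    </excludes>
  </configuration>
</plugin>
```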



[hive] branch master updated: HIVE-23689: Bump Tez version to 0.9.2 (#1108)

2020-06-17 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 10c6584  HIVE-23689: Bump Tez version to 0.9.2 (#1108)
10c6584 is described below

commit 10c658419ba23dbfbf282b313dc668e48b1f1c77
Author: Jagat Singh 
AuthorDate: Wed Jun 17 22:42:55 2020 +1000

HIVE-23689: Bump Tez version to 0.9.2 (#1108)
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index c6a5417..062ffe8 100644
--- a/pom.xml
+++ b/pom.xml
@@ -193,7 +193,7 @@
 1.7.30
 4.0.4
 2.7.0-SNAPSHOT
-0.9.1
+0.9.2
 2.2.0
 2.4.5
 2.12



[hive] branch master updated: disable flaky tests

2020-06-17 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 1730b9d  disable flaky tests
1730b9d is described below

commit 1730b9da5ac22bed2951a9fddbd5f529694092e1
Author: Zoltan Haindrich 
AuthorDate: Wed Jun 17 07:36:08 2020 +

disable flaky tests
---
 .../test/java/org/apache/hive/jdbc/TestNewGetSplitsFormatReturnPath.java | 1 +
 ql/src/test/queries/clientpositive/jdbc_handler.q| 1 +
 2 files changed, 2 insertions(+)

diff --git a/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestNewGetSplitsFormatReturnPath.java b/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestNewGetSplitsFormatReturnPath.java
index 83abffb..398362a 100644
--- a/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestNewGetSplitsFormatReturnPath.java
+++ b/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestNewGetSplitsFormatReturnPath.java
@@ -26,6 +26,7 @@ import org.junit.Test;
 /**
  * TestNewGetSplitsFormatReturnPath.
  */
+@Ignore("flaky HIVE-23524")
 public class TestNewGetSplitsFormatReturnPath extends TestNewGetSplitsFormat {
 
   @BeforeClass public static void beforeTest() throws Exception {
diff --git a/ql/src/test/queries/clientpositive/jdbc_handler.q b/ql/src/test/queries/clientpositive/jdbc_handler.q
index f2eba04..55de3bd 100644
--- a/ql/src/test/queries/clientpositive/jdbc_handler.q
+++ b/ql/src/test/queries/clientpositive/jdbc_handler.q
@@ -1,4 +1,5 @@
 --! qt:dataset:src
+--! qt:disabled:flaky HIVE-23709
 
 set hive.strict.checks.cartesian.product= false;
 



[hive] branch master updated (50f8765 -> 6569d58)

2020-06-15 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 50f8765  HIVE-23495: AcidUtils.getAcidState cleanup (Peter Varga, 
reviewed by Karen Coppage and Marta Kuczora)
 add 6569d58  HIVE-19545: Enable TestCliDriver#fouter_join_ppr.q (#1094)

No new revisions were added by this update.

Summary of changes:
 .../test/queries/clientpositive/fouter_join_ppr.q  |  1 -
 ...louter_join_ppr.q.out => fouter_join_ppr.q.out} | 32 +++---
 2 files changed, 16 insertions(+), 17 deletions(-)
 copy ql/src/test/results/clientpositive/llap/{louter_join_ppr.q.out => 
fouter_join_ppr.q.out} (99%)



[hive] 03/05: HIVE-23678: Don't enforce ASF license headers on target files (Karen Coppage reviewed by Panagiotis Garefalakis, Zoltan Haindrich)

2020-06-15 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 1e11e354a48891956be1fee01e0dafaecbcc642e
Author: Karen Coppage 
AuthorDate: Mon Jun 15 08:03:59 2020 +

HIVE-23678: Don't enforce ASF license headers on target files (Karen 
Coppage reviewed by Panagiotis Garefalakis, Zoltan Haindrich)

Signed-off-by: Zoltan Haindrich 

Closes apache/hive#1098
---
 pom.xml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/pom.xml b/pom.xml
index 2a31dbd..c6a5417 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1459,6 +1459,7 @@
**/*.q.out_*
**/*.xml
**/gen/**
+   **/target/**
**/scripts/**
**/resources/**
**/*.rc



[hive] 01/05: HIVE-23677: RetryTest is unstable (Aasha Medhi reviewed by David Mollitor)

2020-06-15 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 242487cecb0973b57f9ce0b138e66c19c60a4627
Author: Aasha Medhi 
AuthorDate: Mon Jun 15 07:57:23 2020 +

HIVE-23677: RetryTest is unstable (Aasha Medhi reviewed by David Mollitor)

Signed-off-by: Zoltan Haindrich 

Closes apache/hive#1099
---
 .../test/java/org/apache/hadoop/hive/metastore/utils/RetryTest.java   | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/standalone-metastore/metastore-common/src/test/java/org/apache/hadoop/hive/metastore/utils/RetryTest.java b/standalone-metastore/metastore-common/src/test/java/org/apache/hadoop/hive/metastore/utils/RetryTest.java
index dc60092..109c9e3 100644
--- a/standalone-metastore/metastore-common/src/test/java/org/apache/hadoop/hive/metastore/utils/RetryTest.java
+++ b/standalone-metastore/metastore-common/src/test/java/org/apache/hadoop/hive/metastore/utils/RetryTest.java
@@ -20,12 +20,10 @@ package org.apache.hadoop.hive.metastore.utils;
 
 import org.junit.Assert;
 import org.junit.Test;
-import org.junit.Ignore;
 
 /**
  * Tests for retriable interface.
  */
-@Ignore("unstable HIVE-23677")
 public class RetryTest {
   @Test
   public void testRetrySuccess() {
@@ -118,7 +116,7 @@ public class RetryTest {
   Assert.fail();
 } catch (Exception e) {
   Assert.assertEquals(NullPointerException.class, e.getClass());
-  Assert.assertTrue(System.currentTimeMillis() - startTime > 180 * 1000);
+  Assert.assertTrue(System.currentTimeMillis() - startTime >= 180 * 1000);
 }
   }
 }
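The second hunk above relaxes a strict `>` to `>=`. With a fixed total retry backoff, the elapsed time can come out exactly equal to the expected minimum, so the strict comparison fails intermittently on the boundary. A minimal illustration (the 180-second figure comes from the diff; the helper methods are hypothetical, not part of RetryTest):

```java
public class ElapsedCheck {
    // Hypothetical helpers mirroring the RetryTest assertion: the total
    // retry backoff is expected to be at least 180 seconds. A strict ">"
    // fails on the boundary case where elapsed == 180 000 ms.
    static boolean strictCheck(long elapsedMillis) {
        return elapsedMillis > 180 * 1000;
    }

    static boolean inclusiveCheck(long elapsedMillis) {
        return elapsedMillis >= 180 * 1000;
    }

    public static void main(String[] args) {
        long boundary = 180 * 1000L;
        System.out.println(strictCheck(boundary));    // false: the flaky failure
        System.out.println(inclusiveCheck(boundary)); // true: the fix
    }
}
```

The same boundary reasoning applies to any wall-clock assertion: if the lower bound is achievable, the comparison must be inclusive.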



[hive] 02/05: HIVE-23687: Fix Spotbugs issues in hive-standalone-metastore-common (Mustafa Iman via Zoltan Haindrich)

2020-06-15 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 8fac12e718375c351135176324c9b388da1bd71b
Author: Mustafa Iman 
AuthorDate: Mon Jun 15 08:01:14 2020 +

HIVE-23687: Fix Spotbugs issues in hive-standalone-metastore-common 
(Mustafa Iman via Zoltan Haindrich)

Signed-off-by: Zoltan Haindrich 

Closes apache/hive#1107
---
 Jenkinsfile|  3 +-
 .../TestRemoteHiveMetaStoreDualAuthKerb.java   |  2 +-
 standalone-metastore/metastore-common/pom.xml  |  4 +-
 .../apache/hadoop/hive/metastore/ColumnType.java   | 19 +++---
 .../hadoop/hive/metastore/HiveMetaStoreClient.java | 16 ++---
 ...taStoreAnonymousAuthenticationProviderImpl.java |  2 +-
 .../MetaStoreConfigAuthenticationProviderImpl.java |  2 +-
 .../MetaStoreCustomAuthenticationProviderImpl.java |  4 +-
 .../MetaStoreLdapAuthenticationProviderImpl.java   |  6 +-
 .../MetaStorePasswdAuthenticationProvider.java |  2 +-
 .../hive/metastore/MetaStorePlainSaslHelper.java   |  2 +-
 .../hadoop/hive/metastore/ReplChangeManager.java   |  3 +-
 .../apache/hadoop/hive/metastore/Warehouse.java| 10 +--
 .../hadoop/hive/metastore/conf/MetastoreConf.java  | 79 +++---
 .../partition/spec/PartitionSpecProxy.java |  3 +-
 .../metastore/security/HadoopThriftAuthBridge.java |  7 +-
 .../security/HadoopThriftAuthBridge23.java |  4 +-
 .../hadoop/hive/metastore/utils/FileUtils.java |  2 +-
 .../hadoop/hive/metastore/utils/HdfsUtils.java |  5 +-
 .../hive/metastore/utils/MetaStoreUtils.java   | 27 
 .../apache/hadoop/hive/metastore/utils/Retry.java  |  2 +-
 ...estMetaStoreLdapAuthenticationProviderImpl.java | 28 
 .../TestRemoteHiveMetaStoreCustomAuth.java |  2 +-
 .../hive/metastore/conf/TestMetastoreConf.java |  4 +-
 .../metastore/ldap/LdapAuthenticationTestCase.java |  4 +-
 standalone-metastore/pom.xml   | 52 ++
 standalone-metastore/spotbugs/spotbugs-exclude.xml |  6 ++
 27 files changed, 184 insertions(+), 116 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index fceddb1..ad4c95b 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -171,7 +171,8 @@ jobWrappers {
   stage('Prechecks') {
 def spotbugsProjects = [
 ":hive-shims",
-":hive-storage-api"
+":hive-storage-api",
+":hive-standalone-metastore-common"
 ]
 buildHive("-Pspotbugs -pl " + spotbugsProjects.join(",") + " -am 
compile com.github.spotbugs:spotbugs-maven-plugin:4.0.0:check")
   }
diff --git a/itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/TestRemoteHiveMetaStoreDualAuthKerb.java b/itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/TestRemoteHiveMetaStoreDualAuthKerb.java
index 73620f2..d4d9002 100644
--- a/itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/TestRemoteHiveMetaStoreDualAuthKerb.java
+++ b/itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/TestRemoteHiveMetaStoreDualAuthKerb.java
@@ -70,7 +70,7 @@ public class TestRemoteHiveMetaStoreDualAuthKerb extends 
RemoteHiveMetaStoreDual
 }
 
 @Override
-public void Authenticate(String user, String password) throws 
AuthenticationException {
+public void authenticate(String user, String password) throws 
AuthenticationException {
 
   if(!userMap.containsKey(user)) {
 throw new AuthenticationException("Invalid user : "+user);
diff --git a/standalone-metastore/metastore-common/pom.xml b/standalone-metastore/metastore-common/pom.xml
index a535737..521e92b 100644
--- a/standalone-metastore/metastore-common/pom.xml
+++ b/standalone-metastore/metastore-common/pom.xml
@@ -385,7 +385,7 @@
   true
   2048
   -Djava.awt.headless=true -Xmx2048m -Xms512m
-  
${basedir}/spotbugs/spotbugs-exclude.xml
+  
${basedir}/${standalone.metastore.path.to.root}/spotbugs/spotbugs-exclude.xml
 
   
 
@@ -400,7 +400,7 @@
   true
   2048
   -Djava.awt.headless=true -Xmx2048m -Xms512m
-  
${basedir}/spotbugs/spotbugs-exclude.xml
+  
${basedir}/${standalone.metastore.path.to.root}/spotbugs/spotbugs-exclude.xml
 
   
 
diff --git a/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/ColumnType.java b/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/ColumnType.java
index bcce1f1..7327391 100644
--- a/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/ColumnType.java
+++ b/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/ColumnType

[hive] 04/05: HIVE-23633: Metastore some JDO query objects do not close properly (Zhihua Deng via Zoltan Haindrich)

2020-06-15 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 2a38a4317505058ee21812741036a57e5fe435c8
Author: Zhihua Deng 
AuthorDate: Mon Jun 15 08:09:24 2020 +

HIVE-23633: Metastore some JDO query objects do not close properly (Zhihua 
Deng via Zoltan Haindrich)

Signed-off-by: Zoltan Haindrich 

Closes apache/hive#1071
---
 .../hadoop/hive/metastore/MetaStoreDirectSql.java  | 30 -
 .../apache/hadoop/hive/metastore/ObjectStore.java  | 38 ++
 2 files changed, 45 insertions(+), 23 deletions(-)

diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
index a0021f6..2f9150d 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
@@ -286,10 +286,9 @@ class MetaStoreDirectSql {
   initQueries.add(pm.newQuery(MCreationMetadata.class, "dbName == ''"));
   initQueries.add(pm.newQuery(MPartitionPrivilege.class, "principalName == 
''"));
   initQueries.add(pm.newQuery(MPartitionColumnPrivilege.class, 
"principalName == ''"));
-  Query q;
-  while ((q = initQueries.peekFirst()) != null) {
+
+  for (Query q : initQueries) {
 q.execute();
-initQueries.pollFirst();
   }
 
   return true;
@@ -472,8 +471,11 @@ class MetaStoreDirectSql {
 }
 
 Query queryParams = pm.newQuery("javax.jdo.query.SQL", queryText);
-return executeWithArray(
+List tableNames = executeWithArray(
 queryParams, pms.toArray(), queryText, limit);
+List results = new ArrayList(tableNames);
+queryParams.closeAll();
+return results;
   }
 
   /**
@@ -493,8 +495,11 @@ class MetaStoreDirectSql {
 pms.add(TableType.MATERIALIZED_VIEW.toString());
 
 Query queryParams = pm.newQuery("javax.jdo.query.SQL", queryText);
-return executeWithArray(
+List mvs = executeWithArray(
 queryParams, pms.toArray(), queryText);
+List results = new ArrayList(mvs);
+queryParams.closeAll();
+return results;
   }
 
   /**
@@ -1129,6 +1134,7 @@ class MetaStoreDirectSql {
 int sqlResult = 
MetastoreDirectSqlUtils.extractSqlInt(query.executeWithArray(params));
 long queryTime = doTrace ? System.nanoTime() : 0;
 MetastoreDirectSqlUtils.timingTrace(doTrace, queryText, start, queryTime);
+query.closeAll();
 return sqlResult;
   }
 
@@ -2225,7 +2231,7 @@ class MetaStoreDirectSql {
 }
 
 Query queryParams = pm.newQuery("javax.jdo.query.SQL", queryText);
-  List sqlResult = 
MetastoreDirectSqlUtils.ensureList(executeWithArray(
+List sqlResult = 
MetastoreDirectSqlUtils.ensureList(executeWithArray(
 queryParams, pms.toArray(), queryText));
 
 if (!sqlResult.isEmpty()) {
@@ -2254,6 +2260,7 @@ class MetaStoreDirectSql {
 ret.add(currKey);
   }
 }
+queryParams.closeAll();
 return ret;
   }
 
@@ -2292,7 +2299,7 @@ class MetaStoreDirectSql {
 }
 
 Query queryParams = pm.newQuery("javax.jdo.query.SQL", queryText);
-  List sqlResult = 
MetastoreDirectSqlUtils.ensureList(executeWithArray(
+List sqlResult = 
MetastoreDirectSqlUtils.ensureList(executeWithArray(
 queryParams, pms.toArray(), queryText));
 
 if (!sqlResult.isEmpty()) {
@@ -2313,6 +2320,7 @@ class MetaStoreDirectSql {
 ret.add(currKey);
   }
 }
+queryParams.closeAll();
 return ret;
   }
 
@@ -2350,7 +2358,7 @@ class MetaStoreDirectSql {
 }
 
 Query queryParams = pm.newQuery("javax.jdo.query.SQL", queryText);
-  List sqlResult = 
MetastoreDirectSqlUtils.ensureList(executeWithArray(
+List sqlResult = 
MetastoreDirectSqlUtils.ensureList(executeWithArray(
 queryParams, pms.toArray(), queryText));
 
 if (!sqlResult.isEmpty()) {
@@ -2370,6 +2378,7 @@ class MetaStoreDirectSql {
 rely));
   }
 }
+queryParams.closeAll();
 return ret;
   }
 
@@ -2407,7 +2416,7 @@ class MetaStoreDirectSql {
 }
 
 Query queryParams = pm.newQuery("javax.jdo.query.SQL", queryText);
-  List sqlResult = 
MetastoreDirectSqlUtils.ensureList(executeWithArray(
+List sqlResult = 
MetastoreDirectSqlUtils.ensureList(executeWithArray(
 queryParams, pms.toArray(), queryText));
 
 if (!sqlResult.isEmpty()) {
@@ -2427,6 +2436,7 @@ class MetaStoreDirectSql {
 rely));
   }
 }
+queryParams.closeAll();
 return ret;
   }
 
@@ -2490,6 +2500,7 @@ class MetaStoreDirectSql {
 r
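The change repeated throughout the (partially truncated) diff above follows one pattern: copy the lazily materialized results out of a JDO `Query` before calling `closeAll()`, because closing the query invalidates the result list it returned. A self-contained sketch of that pattern (the `Query` interface here is a stand-in for illustration, not the real `javax.jdo.Query`):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CloseQueryPattern {
    // Stand-in for javax.jdo.Query: execute() returns a lazy view that
    // becomes unusable once closeAll() releases the query's resources.
    interface Query {
        List<String> execute();
        void closeAll();
    }

    // The HIVE-23633 pattern: detach results into a plain list, then close.
    static List<String> executeAndClose(Query q) {
        List<String> lazy = q.execute();
        List<String> results = new ArrayList<>(lazy); // copy before closing
        q.closeAll();
        return results; // safe to use after the query is closed
    }

    public static void main(String[] args) {
        Query q = new Query() {
            public List<String> execute() { return Arrays.asList("t1", "t2"); }
            public void closeAll() { /* release query resources */ }
        };
        System.out.println(executeAndClose(q)); // [t1, t2]
    }
}
```

Without the copy, the query objects leak until garbage collection, which is the resource problem the commit fixes.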

[hive] branch master updated (714683f -> 130f804)

2020-06-15 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 714683f  HIVE-23499: REPL: Immutable repl dumps should be reusable 
across mult… (#1092)
 new 242487c  HIVE-23677: RetryTest is unstable (Aasha Medhi reviewed by 
David Mollitor)
 new 8fac12e  HIVE-23687: Fix Spotbugs issues in 
hive-standalone-metastore-common (Mustafa Iman via Zoltan Haindrich)
 new 1e11e35  HIVE-23678: Don't enforce ASF license headers on target files 
(Karen Coppage reviewed by Panagiotis Garefalakis, Zoltan Haindrich)
 new 2a38a43  HIVE-23633: Metastore some JDO query objects do not close 
properly (Zhihua Deng via Zoltan Haindrich)
 new 130f804  disable flaky tests

The 5 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 Jenkinsfile|  3 +-
 .../metrics/metrics2/TestCodahaleMetrics.java  |  1 +
 .../TestRemoteHiveMetaStoreDualAuthKerb.java   |  2 +-
 .../hadoop/hive/kafka/HiveKafkaProducerTest.java   |  1 +
 pom.xml|  1 +
 .../clientnegative/external_jdbc_negative.q|  1 +
 .../queries/clientpositive/druidkafkamini_basic.q  |  2 +
 .../queries/clientpositive/schq_materialized.q |  1 +
 standalone-metastore/metastore-common/pom.xml  |  4 +-
 .../apache/hadoop/hive/metastore/ColumnType.java   | 19 +++---
 .../hadoop/hive/metastore/HiveMetaStoreClient.java | 16 ++---
 ...taStoreAnonymousAuthenticationProviderImpl.java |  2 +-
 .../MetaStoreConfigAuthenticationProviderImpl.java |  2 +-
 .../MetaStoreCustomAuthenticationProviderImpl.java |  4 +-
 .../MetaStoreLdapAuthenticationProviderImpl.java   |  6 +-
 .../MetaStorePasswdAuthenticationProvider.java |  2 +-
 .../hive/metastore/MetaStorePlainSaslHelper.java   |  2 +-
 .../hadoop/hive/metastore/ReplChangeManager.java   |  3 +-
 .../apache/hadoop/hive/metastore/Warehouse.java| 10 +--
 .../hadoop/hive/metastore/conf/MetastoreConf.java  | 79 +++---
 .../partition/spec/PartitionSpecProxy.java |  3 +-
 .../metastore/security/HadoopThriftAuthBridge.java |  7 +-
 .../security/HadoopThriftAuthBridge23.java |  4 +-
 .../hadoop/hive/metastore/utils/FileUtils.java |  2 +-
 .../hadoop/hive/metastore/utils/HdfsUtils.java |  5 +-
 .../hive/metastore/utils/MetaStoreUtils.java   | 27 
 .../apache/hadoop/hive/metastore/utils/Retry.java  |  2 +-
 .../hadoop/hive/metastore/utils/RetryTest.java |  4 +-
 .../hadoop/hive/metastore/MetaStoreDirectSql.java  | 30 +---
 .../apache/hadoop/hive/metastore/ObjectStore.java  | 38 +++
 ...estMetaStoreLdapAuthenticationProviderImpl.java | 28 
 .../TestRemoteHiveMetaStoreCustomAuth.java |  2 +-
 .../hive/metastore/conf/TestMetastoreConf.java |  4 +-
 .../metastore/ldap/LdapAuthenticationTestCase.java |  4 +-
 standalone-metastore/pom.xml   | 52 ++
 standalone-metastore/spotbugs/spotbugs-exclude.xml |  6 ++
 36 files changed, 237 insertions(+), 142 deletions(-)



[hive] 05/05: disable flaky tests

2020-06-15 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 130f80445d589cdd82904cea1073c84d1368d079
Author: Zoltan Haindrich 
AuthorDate: Mon Jun 15 08:44:47 2020 +

disable flaky tests
---
 .../apache/hadoop/hive/common/metrics/metrics2/TestCodahaleMetrics.java | 1 +
 .../src/test/org/apache/hadoop/hive/kafka/HiveKafkaProducerTest.java| 1 +
 ql/src/test/queries/clientnegative/external_jdbc_negative.q | 1 +
 ql/src/test/queries/clientpositive/druidkafkamini_basic.q   | 2 ++
 ql/src/test/queries/clientpositive/schq_materialized.q  | 1 +
 5 files changed, 6 insertions(+)

diff --git a/common/src/test/org/apache/hadoop/hive/common/metrics/metrics2/TestCodahaleMetrics.java b/common/src/test/org/apache/hadoop/hive/common/metrics/metrics2/TestCodahaleMetrics.java
index 85ded7e..072ae8a 100644
--- a/common/src/test/org/apache/hadoop/hive/common/metrics/metrics2/TestCodahaleMetrics.java
+++ b/common/src/test/org/apache/hadoop/hive/common/metrics/metrics2/TestCodahaleMetrics.java
@@ -151,6 +151,7 @@ public class TestCodahaleMetrics {
* @throws Exception if fails to read counter value
*/
   @Test
+  @org.junit.Ignore("flaky test HIVE-23692")
   public void testFileReporting() throws Exception {
 int runs = 5;
 String  counterName = "count2";
diff --git a/kafka-handler/src/test/org/apache/hadoop/hive/kafka/HiveKafkaProducerTest.java b/kafka-handler/src/test/org/apache/hadoop/hive/kafka/HiveKafkaProducerTest.java
index 8c9ed5f..cf08e2f 100644
--- a/kafka-handler/src/test/org/apache/hadoop/hive/kafka/HiveKafkaProducerTest.java
+++ b/kafka-handler/src/test/org/apache/hadoop/hive/kafka/HiveKafkaProducerTest.java
@@ -53,6 +53,7 @@ import java.util.stream.IntStream;
 /**
  * Test class for Hive Kafka Producer.
  */
+@org.junit.Ignore("flaky HIVE-23693")
 @SuppressWarnings("unchecked") public class HiveKafkaProducerTest {
 
   private static final Logger LOG = 
LoggerFactory.getLogger(HiveKafkaProducerTest.class);
diff --git a/ql/src/test/queries/clientnegative/external_jdbc_negative.q b/ql/src/test/queries/clientnegative/external_jdbc_negative.q
index 5937391..3e84bff 100644
--- a/ql/src/test/queries/clientnegative/external_jdbc_negative.q
+++ b/ql/src/test/queries/clientnegative/external_jdbc_negative.q
@@ -1,3 +1,4 @@
+--! qt:disabled:test is unstable HIVE-23690
 --! qt:dataset:src
 
 CREATE TEMPORARY FUNCTION dboutput AS 
'org.apache.hadoop.hive.contrib.genericudf.example.GenericUDFDBOutput';
diff --git a/ql/src/test/queries/clientpositive/druidkafkamini_basic.q b/ql/src/test/queries/clientpositive/druidkafkamini_basic.q
index 1e999df..bb75549 100644
--- a/ql/src/test/queries/clientpositive/druidkafkamini_basic.q
+++ b/ql/src/test/queries/clientpositive/druidkafkamini_basic.q
@@ -1,3 +1,5 @@
+--! qt:disabled:unstable resultset HIVE-23694
+
 SET hive.vectorized.execution.enabled=true ;
 CREATE EXTERNAL TABLE druid_kafka_test(`__time` timestamp, page string, `user` 
string, language string, added int, deleted int)
 STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
diff --git a/ql/src/test/queries/clientpositive/schq_materialized.q b/ql/src/test/queries/clientpositive/schq_materialized.q
index f629bdf..95e9e4b 100644
--- a/ql/src/test/queries/clientpositive/schq_materialized.q
+++ b/ql/src/test/queries/clientpositive/schq_materialized.q
@@ -1,3 +1,4 @@
+--! qt:disabled:flaky HIVE-23691
 --! qt:authorizer
 --! qt:scheduledqueryservice
 --! qt:transactional



[hive] branch master updated: HIVE-23686: Fix spotbugs issues in hive-shims (#1104)

2020-06-13 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new ec6c1bf  HIVE-23686: Fix spotbugs issues in hive-shims (#1104)
ec6c1bf is described below

commit ec6c1bff84527c5f70d23b954d1f30d8ffa7a1d2
Author: Mustafa İman 
AuthorDate: Sat Jun 13 04:01:19 2020 -0700

HIVE-23686: Fix spotbugs issues in hive-shims (#1104)
---
 Jenkinsfile   |  3 +--
 pom.xml   |  9 -
 .../java/org/apache/hadoop/hive/shims/Hadoop23Shims.java  | 15 +--
 3 files changed, 18 insertions(+), 9 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 1d82b11..fceddb1 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -170,8 +170,7 @@ jobWrappers {
   }
   stage('Prechecks') {
 def spotbugsProjects = [
-":hive-shims-aggregator",
-":hive-shims-common",
+":hive-shims",
 ":hive-storage-api"
 ]
 buildHive("-Pspotbugs -pl " + spotbugsProjects.join(",") + " -am 
compile com.github.spotbugs:spotbugs-maven-plugin:4.0.0:check")
diff --git a/pom.xml b/pom.xml
index 3536533..eaadad0 100644
--- a/pom.xml
+++ b/pom.xml
@@ -216,6 +216,7 @@
 2.4.0
 3.0.11
 1.0.0-incubating
+4.0.3
   
 
   
@@ -1107,6 +1108,12 @@
 
   
 
+
+  com.github.spotbugs
+  spotbugs-annotations
+  ${spotbugs.version}
+  provided
+
   
 
   
@@ -1617,7 +1624,7 @@
   
 com.github.spotbugs
 spotbugs
-4.0.3
+${spotbugs.version}
   
 
 
diff --git a/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java b/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java
index 2eafef0..acdc5f6 100644
--- a/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java
+++ b/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java
@@ -39,6 +39,7 @@ import java.util.Set;
 import java.util.TreeMap;
 import javax.security.auth.Subject;
 
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.crypto.CipherSuite;
@@ -618,7 +619,7 @@ public class Hadoop23Shims extends HadoopShimsSecure {
* MiniDFSShim.
*
*/
-  public class MiniDFSShim implements HadoopShims.MiniDFSShim {
+  public static class MiniDFSShim implements HadoopShims.MiniDFSShim {
 private final MiniDFSCluster cluster;
 
 public MiniDFSShim(MiniDFSCluster cluster) {
@@ -643,7 +644,8 @@ public class Hadoop23Shims extends HadoopShimsSecure {
 }
 return hcatShimInstance;
   }
-  private final class HCatHadoopShims23 implements HCatHadoopShims {
+
+  private static final class HCatHadoopShims23 implements HCatHadoopShims {
 @Override
 public TaskID createTaskID() {
   return new TaskID("", 0, TaskType.MAP, 0);
@@ -827,7 +829,7 @@ public class Hadoop23Shims extends HadoopShimsSecure {
 stream.hflush();
   }
 
-  class ProxyFileSystem23 extends ProxyFileSystem {
+  static class ProxyFileSystem23 extends ProxyFileSystem {
 public ProxyFileSystem23(FileSystem fs) {
   super(fs);
 }
@@ -1029,7 +1031,7 @@ public class Hadoop23Shims extends HadoopShimsSecure {
   /**
* Shim for KerberosName
*/
-  public class KerberosNameShim implements HadoopShimsSecure.KerberosNameShim {
+  public static class KerberosNameShim implements 
HadoopShimsSecure.KerberosNameShim {
 
 private final KerberosName kerberosName;
 
@@ -1187,6 +1189,7 @@ public class Hadoop23Shims extends HadoopShimsSecure {
 
   private static Boolean hdfsEncryptionSupport;
 
+  @SuppressFBWarnings(value = "LI_LAZY_INIT_STATIC", justification = "All 
threads set the same value despite data race")
   public static boolean isHdfsEncryptionSupported() {
 if (hdfsEncryptionSupport == null) {
   Method m = null;
@@ -1204,8 +1207,8 @@ public class Hadoop23Shims extends HadoopShimsSecure {
 return hdfsEncryptionSupport;
   }
 
-  public class HdfsEncryptionShim implements HadoopShims.HdfsEncryptionShim {
-private final String HDFS_SECURITY_DEFAULT_CIPHER = "AES/CTR/NoPadding";
+  public static class HdfsEncryptionShim implements 
HadoopShims.HdfsEncryptionShim {
+private static final String HDFS_SECURITY_DEFAULT_CIPHER = 
"AES/CTR/NoPadding";
 
 /**
  * Gets information about HDFS encryption zones
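Several hunks above convert inner classes (`MiniDFSShim`, `ProxyFileSystem23`, `KerberosNameShim`, `HdfsEncryptionShim`) to `static` nested classes. A non-static inner class carries a hidden reference to its enclosing instance, which SpotBugs flags as `SIC_INNER_SHOULD_BE_STATIC` when that reference is never used; adding `static` drops it. A minimal sketch (class and field names here are illustrative, not from Hadoop23Shims):

```java
public class ShimHolder {
    // Before the fix, "class MiniShim { ... }" would hold an implicit
    // reference to the enclosing ShimHolder instance. Declaring the class
    // static removes that reference, which is what the hunks above do.
    static class MiniShim {
        private final String clusterName;

        MiniShim(String clusterName) {
            this.clusterName = clusterName;
        }

        String clusterName() {
            return clusterName;
        }
    }

    public static void main(String[] args) {
        // A static nested class needs no outer instance to be constructed.
        MiniShim shim = new MiniShim("mini-dfs");
        System.out.println(shim.clusterName()); // mini-dfs
    }
}
```

Beyond silencing the warning, the change avoids keeping the (large) outer shim object alive for as long as any shim instance exists.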



[hive] branch master updated: HIVE-23620: Moving to SpotBugs that is actively maintained (#1066)

2020-06-12 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 5120f4f  HIVE-23620: Moving to SpotBugs that is actively maintained 
(#1066)
5120f4f is described below

commit 5120f4fbc74cfa31194064381d16596d4e440741
Author: Panagiotis Garefalakis 
AuthorDate: Fri Jun 12 14:30:54 2020 +0100

HIVE-23620: Moving to SpotBugs that is actively maintained (#1066)
---
 Jenkinsfile|  4 ++--
 pom.xml| 27 ++
 ql/pom.xml |  7 +++---
 .../spotbugs-exclude.xml   |  0
 standalone-metastore/metastore-common/pom.xml  | 26 +
 .../spotbugs/spotbugs-exclude.xml} |  0
 standalone-metastore/metastore-server/pom.xml  | 26 +
 .../spotbugs/spotbugs-exclude.xml} |  0
 .../spotbugs-exclude.xml}  |  0
 9 files changed, 58 insertions(+), 32 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 8c18733..1d82b11 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -169,12 +169,12 @@ jobWrappers {
 checkout scm
   }
   stage('Prechecks') {
-def findbugsProjects = [
+def spotbugsProjects = [
 ":hive-shims-aggregator",
 ":hive-shims-common",
 ":hive-storage-api"
 ]
-buildHive("-Pfindbugs -pl " + findbugsProjects.join(",") + " -am 
compile findbugs:check")
+buildHive("-Pspotbugs -pl " + spotbugsProjects.join(",") + " -am 
compile com.github.spotbugs:spotbugs-maven-plugin:4.0.0:check")
   }
   stage('Compile') {
 buildHive("install -Dtest=noMatches")
diff --git a/pom.xml b/pom.xml
index 0529102..3536533 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1604,18 +1604,27 @@
 
 
 
-findbugs
+spotbugs
   
 
+
   
-org.codehaus.mojo
-findbugs-maven-plugin
-3.0.5
+com.github.spotbugs
+spotbugs-maven-plugin
+4.0.0
+
+  
+  
+com.github.spotbugs
+spotbugs
+4.0.3
+  
+
 
   true
   2048
   -Djava.awt.headless=true -Xmx2048m -Xms512m
-  
${basedir}/${hive.path.to.root}/findbugs/findbugs-exclude.xml
+  
${basedir}/${hive.path.to.root}/spotbugs/spotbugs-exclude.xml
 
   
 
@@ -1623,14 +1632,14 @@
   
 
   
-org.codehaus.mojo
-findbugs-maven-plugin
-3.0.5
+com.github.spotbugs
+spotbugs-maven-plugin
+4.0.0
 
   true
   2048
   -Djava.awt.headless=true -Xmx2048m -Xms512m
-  
${basedir}/${hive.path.to.root}/findbugs/findbugs-exclude.xml
+  
${basedir}/${hive.path.to.root}/spotbugs/spotbugs-exclude.xml
 
   
 
diff --git a/ql/pom.xml b/ql/pom.xml
index 40b1f30..4f9cd3d 100644
--- a/ql/pom.xml
+++ b/ql/pom.xml
@@ -857,9 +857,10 @@
   ${jersey.version}
 
 
-  com.google.code.findbugs
-  findbugs-annotations
-  3.0.1
+  com.github.spotbugs
+  spotbugs-annotations
+  4.0.3
+  true
 
   
   
diff --git a/findbugs/findbugs-exclude.xml b/spotbugs/spotbugs-exclude.xml
similarity index 100%
rename from findbugs/findbugs-exclude.xml
rename to spotbugs/spotbugs-exclude.xml
diff --git a/standalone-metastore/metastore-common/pom.xml b/standalone-metastore/metastore-common/pom.xml
index 1938dce..a535737 100644
--- a/standalone-metastore/metastore-common/pom.xml
+++ b/standalone-metastore/metastore-common/pom.xml
@@ -366,18 +366,26 @@
   
 
 
-  findbugs
+  spotbugs
   
 
   
-org.codehaus.mojo
-findbugs-maven-plugin
-3.0.0
+com.github.spotbugs
+spotbugs-maven-plugin
+4.0.0
+
+  
+  
+com.github.spotbugs
+spotbugs
+4.0.3
+  
+
 
   true
   2048
   -Djava.awt.headless=true -Xmx2048m -Xms512m
-  ${basedir}/findbugs/findbugs-exclude.xml
+  ${basedir}/spotbugs/spotbugs-exclude.xml
 
   
 
@@ -385,14 +393,14 @@
   
 
   
-org.codehaus.mojo
-findbugs-maven-plugin
-3.0.0
+com.git

[hive] branch master updated: HIVE-23617: Fixing storage-api FindBug issues (#1063)

2020-06-12 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 3419daf  HIVE-23617: Fixing storage-api FindBug issues (#1063)
3419daf is described below

commit 3419dafd9159f5f2dd2333dd6e816480992954b6
Author: Panagiotis Garefalakis 
AuthorDate: Fri Jun 12 10:47:28 2020 +0100

HIVE-23617: Fixing storage-api FindBug issues (#1063)
---
 Jenkinsfile   |  3 ++-
 .../apache/hadoop/hive/common/ValidReadTxnList.java   |  4 
 .../hadoop/hive/common/ValidReaderWriteIdList.java|  5 +
 .../org/apache/hadoop/hive/common/io/DataCache.java   |  4 +++-
 .../apache/hadoop/hive/common/io/DiskRangeList.java   | 15 +++
 .../hive/common/io/encoded/EncodedColumnBatch.java| 17 +
 .../hadoop/hive/common/type/FastHiveDecimal.java  |  3 ++-
 .../hadoop/hive/common/type/FastHiveDecimalImpl.java  | 17 -
 .../hadoop/hive/common/type/HiveIntervalDayTime.java  |  2 ++
 .../hadoop/hive/common/type/RandomTypeUtil.java   | 12 
 .../hadoop/hive/ql/exec/vector/BytesColumnVector.java |  8 ++--
 .../hive/ql/exec/vector/TimestampColumnVector.java|  3 +++
 .../hive/ql/exec/vector/VectorizedRowBatch.java   |  5 +
 .../hadoop/hive/ql/io/sarg/SearchArgumentImpl.java|  1 -
 .../hadoop/hive/serde2/io/HiveDecimalWritable.java|  4 ++--
 .../hadoop/hive/serde2/io/HiveDecimalWritableV1.java  |  3 +++
 .../java/org/apache/hive/common/util/BloomFilter.java | 11 ++-
 .../org/apache/hive/common/util/BloomKFilter.java |  7 +--
 .../src/java/org/apache/hive/common/util/Murmur3.java |  4 
 .../apache/hive/common/util/SuppressFBWarnings.java   | 19 +++
 .../test/org/apache/hive/common/util/TestMurmur3.java | 19 ++-
 21 files changed, 121 insertions(+), 45 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index c7dbb05..8c18733 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -171,7 +171,8 @@ jobWrappers {
   stage('Prechecks') {
 def findbugsProjects = [
 ":hive-shims-aggregator",
-":hive-shims-common"
+":hive-shims-common",
+":hive-storage-api"
 ]
 buildHive("-Pfindbugs -pl " + findbugsProjects.join(",") + " -am compile findbugs:check")
   }
diff --git a/storage-api/src/java/org/apache/hadoop/hive/common/ValidReadTxnList.java b/storage-api/src/java/org/apache/hadoop/hive/common/ValidReadTxnList.java
index b8ff03f..9cfe60e 100644
--- a/storage-api/src/java/org/apache/hadoop/hive/common/ValidReadTxnList.java
+++ b/storage-api/src/java/org/apache/hadoop/hive/common/ValidReadTxnList.java
@@ -18,6 +18,8 @@
 
 package org.apache.hadoop.hive.common;
 
+import org.apache.hive.common.util.SuppressFBWarnings;
+
 import java.util.Arrays;
 import java.util.BitSet;
 
@@ -41,6 +43,7 @@ public class ValidReadTxnList implements ValidTxnList {
   /**
* Used if there are no open transactions in the snapshot
*/
+  @SuppressFBWarnings(value = "EI_EXPOSE_REP2", justification = "Ref external obj for efficiency")
   public ValidReadTxnList(long[] exceptions, BitSet abortedBits, long highWatermark, long minOpenTxn) {
 if (exceptions.length > 0) {
   this.minOpenTxn = minOpenTxn;
@@ -177,6 +180,7 @@ public class ValidReadTxnList implements ValidTxnList {
   }
 
   @Override
+  @SuppressFBWarnings(value = "EI_EXPOSE_REP", justification = "Expose internal rep for efficiency")
   public long[] getInvalidTransactions() {
 return exceptions;
   }
diff --git a/storage-api/src/java/org/apache/hadoop/hive/common/ValidReaderWriteIdList.java b/storage-api/src/java/org/apache/hadoop/hive/common/ValidReaderWriteIdList.java
index bc8ac0d..4c2cf7c 100644
--- a/storage-api/src/java/org/apache/hadoop/hive/common/ValidReaderWriteIdList.java
+++ b/storage-api/src/java/org/apache/hadoop/hive/common/ValidReaderWriteIdList.java
@@ -18,6 +18,8 @@
 
 package org.apache.hadoop.hive.common;
 
+import org.apache.hive.common.util.SuppressFBWarnings;
+
 import java.util.Arrays;
 import java.util.BitSet;
 
@@ -51,6 +53,8 @@ public class ValidReaderWriteIdList implements ValidWriteIdList {
   public ValidReaderWriteIdList(String tableName, long[] exceptions, BitSet abortedBits, long highWatermark) {
 this(tableName, exceptions, abortedBits, highWatermark, Long.MAX_VALUE);
   }
+
+  @SuppressFBWarnings(value = "EI_EXPOSE_REP2", justification = "Ref external obj for efficiency")
   public ValidReaderWriteIdList(String tableName,
 long[] exceptions, BitSet abortedBits, long highWatermark, long minOpenWriteId) {
 this.tableName = tab
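The `@SuppressFBWarnings` entries in the two diffs above acknowledge FindBugs' EI_EXPOSE_REP / EI_EXPOSE_REP2 warnings rather than eliminate them. A minimal standalone sketch (not Hive code; the class and field names are illustrative) of what those warnings flag, and of the defensive copy they normally ask for:

```java
import java.util.Arrays;

// Illustrative only: mirrors the EI_EXPOSE_REP / EI_EXPOSE_REP2 pattern
// that ValidReadTxnList and ValidReaderWriteIdList suppress for efficiency.
public class ExposeRepDemo {
    private final long[] exceptions;

    // EI_EXPOSE_REP2: the caller's array is stored without a defensive copy,
    // so later mutations of the caller's array also mutate this object.
    public ExposeRepDemo(long[] exceptions) {
        this.exceptions = exceptions;
    }

    // EI_EXPOSE_REP: the internal array is handed out directly,
    // so callers can mutate internal state through the returned reference.
    public long[] shared() {
        return exceptions;
    }

    // What the checker would prefer: copy on the way out.
    public long[] copied() {
        return Arrays.copyOf(exceptions, exceptions.length);
    }
}
```

The suppressed form trades this isolation for avoiding an array copy on a hot path, which is what the "for efficiency" justifications above state.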

[hive] branch master updated: disable flaky tests

2020-06-12 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new b8f1be7  disable flaky tests
b8f1be7 is described below

commit b8f1be79c4b65dc897f019fc1fb5ca26a4ca00bb
Author: Zoltan Haindrich 
AuthorDate: Fri Jun 12 09:08:55 2020 +

disable flaky tests
---
 .../org/apache/hive/hcatalog/listener/TestDbNotificationListener.java| 1 +
 .../test/java/org/apache/hive/jdbc/TestTriggersMoveWorkloadManager.java  | 1 +
 .../src/test/java/org/apache/hive/spark/client/TestSparkClient.java  | 1 +
 .../test/java/org/apache/hadoop/hive/metastore/metrics/TestMetrics.java  | 1 +
 4 files changed, 4 insertions(+)

diff --git a/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/TestDbNotificationListener.java b/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/TestDbNotificationListener.java
index b948727..3ee60a1 100644
--- a/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/TestDbNotificationListener.java
+++ b/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/TestDbNotificationListener.java
@@ -120,6 +120,7 @@ import org.junit.Ignore;
  * Tests DbNotificationListener when used as a transactional event listener
  * (hive.metastore.transactional.event.listeners)
  */
+@org.junit.Ignore("TestDbNotificationListener is unstable HIVE-23680")
 public class TestDbNotificationListener {
   private static final Logger LOG = 
LoggerFactory.getLogger(TestDbNotificationListener.class
   .getName());
diff --git a/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestTriggersMoveWorkloadManager.java b/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestTriggersMoveWorkloadManager.java
index e722f74..097a7db 100644
--- a/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestTriggersMoveWorkloadManager.java
+++ b/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestTriggersMoveWorkloadManager.java
@@ -51,6 +51,7 @@ import org.junit.runner.RunWith;
 
 import com.google.common.collect.Lists;
 
+@org.junit.Ignore("unstable test HIVE-23681")
 @RunWith(RetryTestRunner.class)
 public class TestTriggersMoveWorkloadManager extends AbstractJdbcTriggersTest {
   @Rule
diff --git a/spark-client/src/test/java/org/apache/hive/spark/client/TestSparkClient.java b/spark-client/src/test/java/org/apache/hive/spark/client/TestSparkClient.java
index 0e1557e..8302af7 100644
--- a/spark-client/src/test/java/org/apache/hive/spark/client/TestSparkClient.java
+++ b/spark-client/src/test/java/org/apache/hive/spark/client/TestSparkClient.java
@@ -64,6 +64,7 @@ import org.junit.Test;
 import static org.junit.Assert.*;
 import static org.mockito.Mockito.*;
 
+@org.junit.Ignore("unstable HIVE-23679")
 public class TestSparkClient {
 
   // Timeouts are bad... mmmkay.
diff --git a/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/metrics/TestMetrics.java b/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/metrics/TestMetrics.java
index 29d051a..a866fac 100644
--- a/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/metrics/TestMetrics.java
+++ b/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/metrics/TestMetrics.java
@@ -38,6 +38,7 @@ import org.junit.Before;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
+@org.junit.Ignore("flaky HIVE-23682")
 @Category(MetastoreUnitTest.class)
 public class TestMetrics {
   private static final long REPORT_INTERVAL = 1;



[hive] branch master updated: HIVE-23269: Unsafe comparing bigints and strings (#992)

2020-06-12 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new cd7252c  HIVE-23269: Unsafe comparing bigints and strings (#992)
cd7252c is described below

commit cd7252c7175c6f82731e619b16e3371565aaaec5
Author: dengzh 
AuthorDate: Fri Jun 12 16:45:14 2020 +0800

HIVE-23269: Unsafe comparing bigints and strings (#992)
---
 .../java/org/apache/hadoop/hive/conf/HiveConf.java |  2 +-
 .../hive/ql/parse/type/TypeCheckProcFactory.java   | 30 ++--
 .../ql/parse/type/TestBigIntCompareValidation.java | 79 ++
 3 files changed, 106 insertions(+), 5 deletions(-)

diff --git a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
index 24174ae..fce7fc3 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
@@ -1663,7 +1663,7 @@ public class HiveConf extends Configuration {
 "Note that this check currently does not consider data size, only the query pattern."),
 HIVE_STRICT_CHECKS_TYPE_SAFETY("hive.strict.checks.type.safety", true,
 "Enabling strict type safety checks disallows the following:\n" +
-"  Comparing bigints and strings.\n" +
+"  Comparing bigints and strings/(var)chars.\n" +
 "  Comparing bigints and doubles."),
 HIVE_STRICT_CHECKS_CARTESIAN("hive.strict.checks.cartesian.product", false,
 "Enabling strict Cartesian join checks disallows the following:\n" +
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/type/TypeCheckProcFactory.java b/ql/src/java/org/apache/hadoop/hive/ql/parse/type/TypeCheckProcFactory.java
index e16966e..f4a805c 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/type/TypeCheckProcFactory.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/type/TypeCheckProcFactory.java
@@ -28,8 +28,10 @@ import java.util.HashSet;
 import java.util.LinkedHashMap;
 import java.util.List;
 import java.util.Map;
+import java.util.Set;
 import java.util.Stack;
 
+import com.google.common.collect.Sets;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hive.conf.HiveConf;
@@ -776,6 +778,20 @@ public class TypeCheckProcFactory {
   return getDefaultExprProcessor().getFuncExprNodeDescWithUdfData(baseType, tableFieldTypeInfo, column);
 }
 
+private boolean unSafeCompareWithBigInt(TypeInfo otherTypeInfo, TypeInfo bigintCandidate) {
+  Set<PrimitiveObjectInspector.PrimitiveCategory> unsafeConventionTyps = Sets.newHashSet(
+  PrimitiveObjectInspector.PrimitiveCategory.STRING,
+  PrimitiveObjectInspector.PrimitiveCategory.VARCHAR,
+  PrimitiveObjectInspector.PrimitiveCategory.CHAR);
+
+  if (bigintCandidate.equals(TypeInfoFactory.longTypeInfo) && otherTypeInfo instanceof PrimitiveTypeInfo) {
+PrimitiveObjectInspector.PrimitiveCategory pCategory =
+((PrimitiveTypeInfo)otherTypeInfo).getPrimitiveCategory();
+return unsafeConventionTyps.contains(pCategory);
+  }
+  return false;
+}
+
 protected void validateUDF(ASTNode expr, boolean isFunction, TypeCheckCtx ctx, FunctionInfo fi,
 List<ExprNodeDesc> children, GenericUDF genericUDF) throws SemanticException {
   // Check if a bigint is implicitely cast to a double as part of a comparison
@@ -790,11 +806,17 @@ public class TypeCheckProcFactory {
 LogHelper console = new LogHelper(LOG);
 
 // For now, if a bigint is going to be cast to a double throw an error or warning
-if ((oiTypeInfo0.equals(TypeInfoFactory.stringTypeInfo) && oiTypeInfo1.equals(TypeInfoFactory.longTypeInfo)) ||
-(oiTypeInfo0.equals(TypeInfoFactory.longTypeInfo) && oiTypeInfo1.equals(TypeInfoFactory.stringTypeInfo))) {
+if (unSafeCompareWithBigInt(oiTypeInfo0, oiTypeInfo1) || unSafeCompareWithBigInt(oiTypeInfo1, oiTypeInfo0)) {
   String error = StrictChecks.checkTypeSafety(conf);
-  if (error != null) throw new UDFArgumentException(error);
-  console.printError("WARNING: Comparing a bigint and a string may result in a loss of precision.");
+  if (error != null) {
+throw new UDFArgumentException(error);
+  }
+  // To  make the error output be consistency, get the other side type name that comparing with biginit.
+  String type = oiTypeInfo0.getTypeName();
+  if (!oiTypeInfo1.equals(TypeInfoFactory.longTypeInfo)) {
+type = oiTypeInfo1.getTypeName();
+  }
+  console.printError("WARNING: Comparing a bigint and a " + type + " 
may result in a loss of precisio
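The warning message above is cut off by the mail archive, but the hazard it describes can be shown in a few lines of plain Java (a standalone sketch, not Hive code): when a bigint is compared with a string, both sides are promoted to double, and a double cannot represent every 64-bit long.

```java
// Standalone sketch of why hive.strict.checks.type.safety rejects
// bigint-vs-string comparisons: both operands become doubles, and
// doubles lose precision above 2^53.
public class BigintPrecisionDemo {
    public static void main(String[] args) {
        long a = 9007199254740993L; // 2^53 + 1
        long b = 9007199254740992L; // 2^53
        System.out.println(a == b);                   // false: distinct longs
        System.out.println((double) a == (double) b); // true: both round to 2^53
    }
}
```

So a predicate like `bigint_col = '9007199254740993'`, evaluated through double conversion, can also match rows holding 9007199254740992, which is exactly the precision loss the strict check guards against.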

[hive] branch master updated: HIVE-21894: Hadoop credential password storage for the Kafka Storage handler when security is SSL (#839)

2020-06-11 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 4ead9d3  HIVE-21894: Hadoop credential password storage for the Kafka Storage handler when security is SSL (#839)
4ead9d3 is described below

commit 4ead9d35eadc997b65ceeb64f1fa33c71e47070d
Author: Justin Leet 
AuthorDate: Thu Jun 11 15:08:53 2020 -0400

HIVE-21894: Hadoop credential password storage for the Kafka Storage handler when security is SSL (#839)
---
 kafka-handler/README.md| 71 +---
 .../hadoop/hive/kafka/KafkaTableProperties.java| 38 ++-
 .../org/apache/hadoop/hive/kafka/KafkaUtils.java   | 78 +-
 .../apache/hadoop/hive/kafka/KafkaUtilsTest.java   | 23 +++
 .../kafka/kafka_storage_handler.q.out  | 36 ++
 5 files changed, 235 insertions(+), 11 deletions(-)

diff --git a/kafka-handler/README.md b/kafka-handler/README.md
index e7761e3..e02b5e9 100644
--- a/kafka-handler/README.md
+++ b/kafka-handler/README.md
@@ -216,15 +216,68 @@ GROUP BY
 
 ## Table Properties
 
-| Property| Description

| Mandatory | Default |
-|-||---|-|
-| kafka.topic | Kafka topic name to map the table to.  

| Yes   | null|
-| kafka.bootstrap.servers | Table property indicating Kafka 
broker(s) connection string.
   | Yes   | null|
-| kafka.serde.class   | Serializer and Deserializer class 
implementation. 
 | No| org.apache.hadoop.hive.serde2.JsonSerDe |
-| hive.kafka.poll.timeout.ms  | Parameter indicating Kafka Consumer 
poll timeout period in millis.  FYI this is independent from internal Kafka 
consumer timeouts. | No| 5000 (5 Seconds)|
-| hive.kafka.max.retries  | Number of retries for Kafka metadata 
fetch operations.   
  | No| 6   |
-| hive.kafka.metadata.poll.timeout.ms | Number of milliseconds before consumer 
timeout on fetching Kafka metadata. 
| No| 3 (30 Seconds)  |
-| kafka.write.semantic| Writer semantics, allowed values 
(AT_LEAST_ONCE, EXACTLY_ONCE)   
  | No| AT_LEAST_ONCE   |
+| Property   | Description 

   | Mandatory | Default |
+|--- 
||---|-|
+| kafka.topic| Kafka topic name to map the table 
to. 
 | Yes   | null|
+| kafka.bootstrap.servers| Table property indicating Kafka 
broker(s) connection string.
   | Yes   | null|
+| kafka.serde.class  | Serializer and Deserializer class 
implementation. 
 | No| org.apache.hadoop.hive.serde2.JsonSerDe |
+| hive.kafka.poll.timeout.ms | Parameter indicating Kafka Consumer 
poll timeout period in millis.  FYI this is independent from internal Kafka 
consumer timeouts. | No| 5000 (5 Seconds)|
+| hive.kafka.max.retries | Number of retries for Kafka 
metadata fetch operations.  
   | No| 6   |
+| hive.kafka.metadata.poll.timeout.ms| Number of milliseconds

[hive] branch master updated: HIVE-23629: Enforce clean findbugs in PRs (#1069)

2020-06-11 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 68f96a4  HIVE-23629: Enforce clean findbugs in PRs (#1069)
68f96a4 is described below

commit 68f96a402472c27e03a7857b739cd35bf4927853
Author: Mustafa İman 
AuthorDate: Thu Jun 11 11:33:58 2020 -0700

HIVE-23629: Enforce clean findbugs in PRs (#1069)

* HIVE-23629: Enforce clean findbugs in PRs

Change-Id: Ided89254e2464cf9a6f5ebfdce5c1f222988d18e

* HIVE-23629: Fix findbugs errors in hive-shims-common

Change-Id: I8f3d6e321244b65e27b2e3d0f30c8eb2e39778c1
---
 Jenkinsfile|  7 +++
 .../main/java/org/apache/hadoop/fs/ProxyFileSystem.java|  4 +---
 .../java/org/apache/hadoop/fs/ProxyLocalFileSystem.java|  1 -
 .../src/main/java/org/apache/hadoop/hive/io/HdfsUtils.java |  2 +-
 .../java/org/apache/hadoop/hive/shims/CombineHiveKey.java  | 14 ++
 .../org/apache/hadoop/hive/shims/HadoopShimsSecure.java|  3 ---
 6 files changed, 23 insertions(+), 8 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 37ca448..c7dbb05 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -168,6 +168,13 @@ jobWrappers {
   stage('Checkout') {
 checkout scm
   }
+  stage('Prechecks') {
+def findbugsProjects = [
+":hive-shims-aggregator",
+":hive-shims-common"
+]
+buildHive("-Pfindbugs -pl " + findbugsProjects.join(",") + " -am compile findbugs:check")
+  }
   stage('Compile') {
 buildHive("install -Dtest=noMatches")
   }
diff --git a/shims/common/src/main/java/org/apache/hadoop/fs/ProxyFileSystem.java b/shims/common/src/main/java/org/apache/hadoop/fs/ProxyFileSystem.java
index 9e52ebf..7d1d6dd 100644
--- a/shims/common/src/main/java/org/apache/hadoop/fs/ProxyFileSystem.java
+++ b/shims/common/src/main/java/org/apache/hadoop/fs/ProxyFileSystem.java
@@ -42,7 +42,6 @@ public class ProxyFileSystem extends FilterFileSystem {
 
   protected String realScheme;
   protected String realAuthority;
-  protected URI realUri;
 
 
 
@@ -103,8 +102,7 @@ public class ProxyFileSystem extends FilterFileSystem {
 
 URI realUri = fs.getUri();
 this.realScheme = realUri.getScheme();
-this.realAuthority=realUri.getAuthority();
-this.realUri = realUri;
+this.realAuthority = realUri.getAuthority();
 
 this.myScheme = myUri.getScheme();
 this.myAuthority=myUri.getAuthority();
diff --git a/shims/common/src/main/java/org/apache/hadoop/fs/ProxyLocalFileSystem.java b/shims/common/src/main/java/org/apache/hadoop/fs/ProxyLocalFileSystem.java
index 8d94bbc..83bb39b 100644
--- a/shims/common/src/main/java/org/apache/hadoop/fs/ProxyLocalFileSystem.java
+++ b/shims/common/src/main/java/org/apache/hadoop/fs/ProxyLocalFileSystem.java
@@ -58,7 +58,6 @@ public class ProxyLocalFileSystem extends FilterFileSystem {
 // the scheme/authority serving as the proxy is derived
 // from the supplied URI
 this.scheme = name.getScheme();
-String nameUriString = name.toString();
 
 String authority = name.getAuthority() != null ? name.getAuthority() : "";
 String proxyUriString = scheme + "://" + authority + "/";
diff --git a/shims/common/src/main/java/org/apache/hadoop/hive/io/HdfsUtils.java b/shims/common/src/main/java/org/apache/hadoop/hive/io/HdfsUtils.java
index e59eb32..adf4d41 100644
--- a/shims/common/src/main/java/org/apache/hadoop/hive/io/HdfsUtils.java
+++ b/shims/common/src/main/java/org/apache/hadoop/hive/io/HdfsUtils.java
@@ -176,7 +176,7 @@ public class HdfsUtils {
 Iterables.removeIf(entries, new Predicate<AclEntry>() {
   @Override
   public boolean apply(AclEntry input) {
-if (input.getName() == null) {
+if (input != null && input.getName() == null) {
   return true;
 }
 return false;
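The one-line fix above adds a null guard before dereferencing the entry. A standalone sketch of the same pattern (plain `java.util` `removeIf` instead of Guava's `Iterables.removeIf`; the `Entry` stand-in for `AclEntry` is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the HdfsUtils fix above: a removal predicate must tolerate
// null list elements, or entry.getName()-style calls throw NullPointerException.
public class NullGuardDemo {
    // Stand-in for AclEntry; only the name accessor matters here.
    static final class Entry {
        final String name;
        Entry(String name) { this.name = name; }
        String getName() { return name; }
    }

    // Mirrors the fixed predicate: drop entries whose name is null,
    // but never dereference a null entry itself.
    public static List<Entry> dropNameless(List<Entry> entries) {
        List<Entry> copy = new ArrayList<>(entries);
        copy.removeIf(e -> e != null && e.getName() == null);
        return copy;
    }
}
```

Without the `e != null` guard, the first null element in the list would abort the whole removal pass with an NPE.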
diff --git a/shims/common/src/main/java/org/apache/hadoop/hive/shims/CombineHiveKey.java b/shims/common/src/main/java/org/apache/hadoop/hive/shims/CombineHiveKey.java
index 859b637..6eb83b8 100644
--- a/shims/common/src/main/java/org/apache/hadoop/hive/shims/CombineHiveKey.java
+++ b/shims/common/src/main/java/org/apache/hadoop/hive/shims/CombineHiveKey.java
@@ -21,6 +21,7 @@ package org.apache.hadoop.hive.shims;
 import java.io.DataInput;
 import java.io.DataOutput;
 import java.io.IOException;
+import java.util.Objects;
 
 import org.apache.hadoop.io.WritableComparable;
 
@@ -51,4 +52,17 @@ public class CombineHiveKey implements WritableComparable {
 assert false;
 return 0;
   }
+
+  @Override
+  public boolean equals(Object o) {
+if (this == o) return true;
+if (o == null || g
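The `equals` body above is truncated by the archive. As a hedged sketch of the usual shape such a fix takes (the `key` field is illustrative, not the actual `CombineHiveKey` layout), the point is that `equals` and `hashCode` are added together so equal keys agree on their hash:

```java
import java.util.Objects;

// Illustrative equals/hashCode pair: equal objects must produce equal
// hash codes, the contract a WritableComparable key needs to honor.
public class KeyDemo {
    private final Object key;

    public KeyDemo(Object key) {
        this.key = key;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        return Objects.equals(key, ((KeyDemo) o).key);
    }

    @Override
    public int hashCode() {
        return Objects.hashCode(key);
    }
}
```

Defining one without the other is itself a classic FindBugs finding (HE_EQUALS_USE_HASHCODE), which is why the fix introduces both overrides in the same commit.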

[hive] branch master updated: disable RetryTest

2020-06-11 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 3e3ae2f  disable RetryTest
3e3ae2f is described below

commit 3e3ae2f477fc2ea8a9f6541b395dd75404689f8a
Author: Zoltan Haindrich 
AuthorDate: Thu Jun 11 15:22:21 2020 +

disable RetryTest
---
 .../src/test/java/org/apache/hadoop/hive/metastore/utils/RetryTest.java | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/standalone-metastore/metastore-common/src/test/java/org/apache/hadoop/hive/metastore/utils/RetryTest.java b/standalone-metastore/metastore-common/src/test/java/org/apache/hadoop/hive/metastore/utils/RetryTest.java
index 8cff68d..dc60092 100644
--- a/standalone-metastore/metastore-common/src/test/java/org/apache/hadoop/hive/metastore/utils/RetryTest.java
+++ b/standalone-metastore/metastore-common/src/test/java/org/apache/hadoop/hive/metastore/utils/RetryTest.java
@@ -20,10 +20,12 @@ package org.apache.hadoop.hive.metastore.utils;
 
 import org.junit.Assert;
 import org.junit.Test;
+import org.junit.Ignore;
 
 /**
  * Tests for retriable interface.
  */
+@Ignore("unstable HIVE-23677")
 public class RetryTest {
   @Test
   public void testRetrySuccess() {



[hive] branch master updated: HIVE-23563: Early abort the build in case new commits are added to the PR (#1089)

2020-06-11 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new f59e36b  HIVE-23563: Early abort the build in case new commits are added to the PR (#1089)
f59e36b is described below

commit f59e36be149c3fbd22d9319f5c70e492744f08af
Author: Zoltan Haindrich 
AuthorDate: Thu Jun 11 09:06:45 2020 +0200

HIVE-23563: Early abort the build in case new commits are added to the PR (#1089)
---
 Jenkinsfile | 21 -
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 669b57e..37ca448 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -27,6 +27,23 @@ properties([
 ])
 ])
 
+this.prHead = null;
+def checkPrHead() {
+  if(env.CHANGE_ID) {
+println("checkPrHead - prHead:" + prHead)
+println("checkPrHead - prHead2:" + pullRequest.head)
+if (prHead == null) {
+  prHead = pullRequest.head;
+} else {
+  if(prHead != pullRequest.head) {
+currentBuild.result = 'ABORTED'
+error('Found new changes on PR; aborting current build')
+  }
+}
+  }
+}
+checkPrHead()
+
 def setPrLabel(String prLabel) {
   if (env.CHANGE_ID) {
def mapping=[
@@ -89,7 +106,7 @@ def hdbPodTemplate(closure) {
 containerTemplate(name: 'hdb', image: 'kgyrtkirk/hive-dev-box:executor', ttyEnabled: true, command: 'cat',
 alwaysPullImage: true,
 resourceRequestCpu: '1800m',
-resourceLimitCpu: '3000m',
+resourceLimitCpu: '8000m',
 resourceRequestMemory: '6400Mi',
 resourceLimitMemory: '12000Mi'
 ),
@@ -120,6 +137,7 @@ def jobWrappers(closure) {
 lock(label:'hive-precommit', quantity:1, variable: 'LOCKED_RESOURCE')  {
   timestamps {
 echo env.LOCKED_RESOURCE
+checkPrHead()
 closure()
   }
 }
@@ -153,6 +171,7 @@ jobWrappers {
   stage('Compile') {
 buildHive("install -Dtest=noMatches")
   }
+  checkPrHead()
   stage('Upload') {
 saveWS()
 sh '''#!/bin/bash -e



[hive] branch master updated: HIVE-23631: Use the test target instead of install (#1072)

2020-06-10 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 79ac124  HIVE-23631: Use the test target instead of install (#1072)
79ac124 is described below

commit 79ac124ff20aaf37f9455b7e4fa7c6418ec5fb9f
Author: Zoltan Haindrich 
AuthorDate: Wed Jun 10 19:07:28 2020 +0200

HIVE-23631: Use the test target instead of install (#1072)
---
 Jenkinsfile | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 29170fd..669b57e 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -77,6 +77,7 @@ if [ -s inclusions.txt ]; then OPTS+=" -Dsurefire.includesFile=$PWD/inclusions.t
 if [ -s exclusions.txt ]; then OPTS+=" -Dsurefire.excludesFile=$PWD/exclusions.txt";fi
 mvn $OPTS '''+args+'''
 du -h --max-depth=1
+df -h
 '''
 }
   }
@@ -180,7 +181,7 @@ jobWrappers {
   }
   try {
 stage('Test') {
-  buildHive("install -q")
+  buildHive("org.apache.maven.plugins:maven-antrun-plugin:run@{define-classpath,setup-test-dirs,setup-metastore-scripts} org.apache.maven.plugins:maven-surefire-plugin:test -q")
 }
   } finally {
 stage('Archive') {



[hive] branch master updated: HIVE-23462: Add option to rewrite CUME_DIST to sketch functions (#1031)

2020-06-10 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 160f467  HIVE-23462: Add option to rewrite CUME_DIST to sketch functions (#1031)
160f467 is described below

commit 160f467011a489bb27fdd17d228b9120e18f50e6
Author: Zoltan Haindrich 
AuthorDate: Wed Jun 10 13:40:33 2020 +0200

HIVE-23462: Add option to rewrite CUME_DIST to sketch functions (#1031)
---
 .../java/org/apache/hadoop/hive/conf/HiveConf.java |   13 +-
 .../hadoop/hive/ql/exec/DataSketchesFunctions.java |   72 +-
 .../hive/ql/optimizer/calcite/HiveRelBuilder.java  |7 +
 .../rules/HiveRewriteToDataSketchesRules.java  |  290 +-
 .../calcite/translator/SqlFunctionConverter.java   |3 +-
 .../hadoop/hive/ql/parse/CalcitePlanner.java   |   13 +-
 .../hive/ql/parse/type/HiveFunctionHelper.java |3 +-
 .../hive/ql/exec/TestDataSketchesFunctions.java|   38 +
 .../sketches_materialized_view_cume_dist.q |   54 +
 .../clientpositive/sketches_rewrite_cume_dist.q|   47 +
 .../sketches_rewrite_cume_dist_partition_by.q  |   27 +
 .../clientpositive/llap/cbo_rp_windowing_2.q.out   |  104 +-
 .../sketches_materialized_view_cume_dist.q.out | 1054 
 .../llap/sketches_rewrite_cume_dist.q.out  |  775 ++
 .../sketches_rewrite_cume_dist_partition_by.q.out  |  258 +
 15 files changed, 2635 insertions(+), 123 deletions(-)

diff --git a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
index 085ab4a..8cdb2eb 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
@@ -2495,19 +2495,22 @@ public class HiveConf extends Configuration {
 
 HIVE_OPTIMIZE_BI_REWRITE_COUNTDISTINCT_ENABLED("hive.optimize.bi.rewrite.countdistinct.enabled", true,
 "Enables to rewrite COUNT(DISTINCT(X)) queries to be rewritten to use sketch functions."),
-HIVE_OPTIMIZE_BI_REWRITE_COUNT_DISTINCT_SKETCH(
-"hive.optimize.bi.rewrite.countdistinct.sketch", "hll",
+HIVE_OPTIMIZE_BI_REWRITE_COUNT_DISTINCT_SKETCH("hive.optimize.bi.rewrite.countdistinct.sketch", "hll",
 new StringSet("hll"),
 "Defines which sketch type to use when rewriting COUNT(DISTINCT(X)) expressions. "
 + "Distinct counting can be done with: hll"),
 HIVE_OPTIMIZE_BI_REWRITE_PERCENTILE_DISC_ENABLED("hive.optimize.bi.rewrite.percentile_disc.enabled", true,
 "Enables to rewrite PERCENTILE_DISC(X) queries to be rewritten to use sketch functions."),
-HIVE_OPTIMIZE_BI_REWRITE_PERCENTILE_DISC_SKETCH(
-"hive.optimize.bi.rewrite.percentile_disc.sketch", "kll",
+HIVE_OPTIMIZE_BI_REWRITE_PERCENTILE_DISC_SKETCH("hive.optimize.bi.rewrite.percentile_disc.sketch", "kll",
 new StringSet("kll"),
 "Defines which sketch type to use when rewriting PERCENTILE_DISC expressions. Options: kll"),
-
+HIVE_OPTIMIZE_BI_REWRITE_CUME_DIST_ENABLED("hive.optimize.bi.rewrite.cume_dist.enabled", true,
+"Enables to rewrite CUME_DIST(X) queries to be rewritten to use sketch functions."),
+HIVE_OPTIMIZE_BI_REWRITE_CUME_DIST_SKETCH("hive.optimize.bi.rewrite.cume_dist.sketch", "kll",
+new StringSet("kll"),
+"Defines which sketch type to use when rewriting CUME_DIST expressions. Options: kll"),
+"Defines which sketch type to use when rewriting CUME_DIST 
expressions. Options: kll"),
 
 // Statistics
 HIVE_STATS_ESTIMATE_STATS("hive.stats.estimate", true,
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java
index cc48d5b..3a450a9 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java
@@ -19,6 +19,8 @@
 package org.apache.hadoop.hive.ql.exec;
 
 import java.lang.reflect.Method;
+import java.lang.reflect.ParameterizedType;
+import java.lang.reflect.Type;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.HashMap;
@@ -61,8 +61,8 @@ public final class DataSketchesFunctions implements HiveUDFPlugin {
   private static final String SKETCH_TO_STRING = "stringify";
   private static final String UNION_SKETCH = "union";
   private static final String UNION_SKETCH1 = "union_f";
-  private static final String GET_N = "n";
-  private static final String GET_CDF = "cdf";
+  public static final String GET_N = "n";
+  public static fin

[hive] branch master updated (77187b3 -> 1d65ed8)

2020-06-10 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 77187b3  ignore TestJdbcWithMiniLlapVectorArrow
 add 1d65ed8  HIVE-23482: Use junit5 to execute tests (#1043)

No new revisions were added by this update.

Summary of changes:
 beeline/pom.xml| 10 ++
 cli/pom.xml| 10 ++
 common/pom.xml | 10 ++
 contrib/pom.xml| 10 ++
 hbase-handler/pom.xml  | 10 ++
 hcatalog/server-extensions/pom.xml | 10 ++
 hcatalog/webhcat/svr/pom.xml   | 10 ++
 hplsql/pom.xml | 10 ++
 jdbc-handler/pom.xml   | 10 ++
 jdbc/pom.xml   | 10 ++
 kudu-handler/pom.xml   | 10 ++
 llap-client/pom.xml| 10 ++
 llap-common/pom.xml| 10 ++
 llap-ext-client/pom.xml| 10 ++
 llap-server/pom.xml| 10 ++
 llap-tez/pom.xml   | 10 ++
 metastore/pom.xml  | 10 ++
 pom.xml| 21 +--
 ql/pom.xml | 10 ++
 service-rpc/pom.xml| 10 ++
 service/pom.xml| 10 ++
 standalone-metastore/metastore-server/pom.xml  |  5 +++
 .../metastore-tools/metastore-benchmarks/pom.xml   | 10 +-
 standalone-metastore/metastore-tools/pom.xml   | 41 +-
 .../metastore-tools/tools-common/pom.xml   |  3 +-
 standalone-metastore/pom.xml   | 17 -
 storage-api/pom.xml| 16 -
 streaming/pom.xml  | 10 ++
 testutils/pom.xml  | 10 ++
 29 files changed, 301 insertions(+), 32 deletions(-)
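[Editor's note] The per-module changes above mostly add test-scope dependencies so each module runs on the JUnit 5 platform. For readers unfamiliar with such a migration, a module's pom typically gains a block along these lines — a hedged sketch with assumed property names; the exact coordinates and versions used by HIVE-23482 may differ:

```xml
<!-- Hypothetical migration sketch, not a literal hunk from this commit -->
<dependency>
  <groupId>org.junit.jupiter</groupId>
  <artifactId>junit-jupiter-engine</artifactId>
  <version>${junit.jupiter.version}</version>
  <scope>test</scope>
</dependency>
<!-- the vintage engine keeps existing JUnit 4 tests running on the JUnit 5 platform -->
<dependency>
  <groupId>org.junit.vintage</groupId>
  <artifactId>junit-vintage-engine</artifactId>
  <version>${junit.vintage.version}</version>
  <scope>test</scope>
</dependency>
```

Including the vintage engine alongside Jupiter is what allows a tree this size to migrate incrementally instead of rewriting every test at once.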



[hive] branch master updated: ignore TestJdbcWithMiniLlapVectorArrow

2020-06-10 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 77187b3  ignore TestJdbcWithMiniLlapVectorArrow
77187b3 is described below

commit 77187b3e621d6d35634338e72b4006aaf8ba22fb
Author: Zoltan Haindrich 
AuthorDate: Wed Jun 10 08:03:48 2020 +

ignore TestJdbcWithMiniLlapVectorArrow
---
 .../src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniLlapArrow.java| 1 +
 1 file changed, 1 insertion(+)

diff --git a/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniLlapArrow.java b/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniLlapArrow.java
index 85a7ab0..9d0ff2d 100644
--- a/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniLlapArrow.java
+++ b/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniLlapArrow.java
@@ -61,6 +61,7 @@ import org.slf4j.LoggerFactory;
 /**
  * TestJdbcWithMiniLlap for Arrow format
  */
+@Ignore("unstable HIVE-23549")
 public class TestJdbcWithMiniLlapArrow extends BaseJdbcWithMiniLlap {
 
  protected static final Logger LOG = LoggerFactory.getLogger(TestJdbcWithMiniLlapArrow.class);



[hive] branch master updated (871ee80 -> 59abbff)

2020-06-09 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 871ee80  HIVE-23516: Store hive replication policy execution metrics in the relational DB (Aasha Medhi, reviewed by Pravin Kumar Sinha)
 add 59abbff  HIVE-23621: Enforce ASF headers on source files (#1062)

No new revisions were added by this update.

Summary of changes:
 Jenkinsfile   |  8 
 .../src/java/org/apache/hive/http/JMXJsonServlet.java |  2 +-
 .../TestTransactionalValidationListener.java  | 17 +
 .../cache/TestCachedStoreUpdateUsingEvents.java   | 17 +
 .../org/apache/hive/kafka/SingleNodeKafkaCluster.java | 18 ++
 .../main/java/org/apache/hive/kafka/Wikipedia.java| 18 +++---
 .../hadoop/hive/cli/MiniDruidLlapLocalCliDriver.java  | 19 ++-
 .../apache/hadoop/hive/cli/control/SplitSupport.java  | 17 +
 .../hadoop/hive/ql/qoption/QTestReplaceHandler.java   | 17 +
 .../hive/ql/scheduled/QTestScheduledQueryCleaner.java | 17 +
 .../scheduled/QTestScheduledQueryServiceProvider.java | 17 +
 .../cli/control/splitsupport/SplitSupportDummy.java   | 17 +
 .../splitsupport/split0/SplitSupportDummy.java| 17 +
 .../splitsupport/split125/SplitSupportDummy.java  | 17 +
 pom.xml   | 11 ++-
 15 files changed, 219 insertions(+), 10 deletions(-)



[hive] branch master updated: HIVE-23491: Move ParseDriver to parser module (Krisztian Kasa via Zoltan Haindrich)

2020-06-08 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 2006e52  HIVE-23491: Move ParseDriver to parser module (Krisztian Kasa via Zoltan Haindrich)
2006e52 is described below

commit 2006e52713508a92fb4d1d28262fd7175eade8b7
Author: Krisztian Kasa 
AuthorDate: Mon Jun 8 12:02:31 2020 +

HIVE-23491: Move ParseDriver to parser module (Krisztian Kasa via Zoltan Haindrich)

Signed-off-by: Zoltan Haindrich 
---
 .../org/apache/hadoop/hive/ql/QTestSyntaxUtil.java |  8 +
 .../java/org/apache/hadoop/hive/ql/QTestUtil.java  |  2 +-
 parser/pom.xml |  6 
 .../apache/hadoop/hive/ql/parse/ParseDriver.java   | 40 ++
 .../hadoop/hive/ql/parse/ParseException.java   |  0
 .../apache/hadoop/hive/ql/parse/ParseResult.java   | 36 +++
 .../org/apache/hadoop/hive/ql/parse/TestIUD.java   | 17 +++--
 .../hadoop/hive/ql/parse/TestMergeStatement.java   |  9 ++---
 .../hadoop/hive/ql/parse/TestParseDriver.java  | 12 +++
 .../hive/ql/parse/TestParseDriverIntervals.java|  2 +-
 .../hive/ql/parse/TestParseWithinGroupClause.java  |  5 +--
 .../parse/TestSQL11ReservedKeyWordsNegative.java   | 22 ++--
 ...mittedCharsInColumnNameCreateTableNegative.java | 15 +++-
 .../parse/positive/TestTransactionStatement.java   | 22 +---
 .../apache/hadoop/hive/ql/parse/ParseUtils.java| 14 +++-
 .../hadoop/hive/ql/parse/SemanticAnalyzer.java |  2 +-
 .../apache/hadoop/hive/ql/parse/TestQBCompact.java |  4 +--
 .../hadoop/hive/ql/parse/TestQBSubQuery.java   |  2 +-
 .../ql/parse/TestReplicationSemanticAnalyzer.java  |  5 +--
 19 files changed, 75 insertions(+), 148 deletions(-)

diff --git a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestSyntaxUtil.java b/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestSyntaxUtil.java
index c2f7acd..90a52cf 100644
--- a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestSyntaxUtil.java
+++ b/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestSyntaxUtil.java
@@ -25,7 +25,6 @@ import java.util.List;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.hive.cli.CliSessionState;
 import org.apache.hadoop.hive.conf.HiveConf;
-import org.apache.hadoop.hive.ql.lockmgr.HiveTxnManager;
 import org.apache.hadoop.hive.ql.parse.ASTNode;
 import org.apache.hadoop.hive.ql.parse.ParseDriver;
 import org.apache.hadoop.hive.ql.processors.AddResourceProcessor;
@@ -103,12 +102,7 @@ public class QTestSyntaxUtil {
  CommandProcessor proc = CommandProcessorFactory.get(tokens, (HiveConf) conf);
   if (proc instanceof IDriver) {
 try {
-  Context ctx = new Context(conf);
-  HiveTxnManager queryTxnMgr = SessionState.get().initTxnMgr(conf);
-  ctx.setHiveTxnManager(queryTxnMgr);
-  ctx.setCmd(cmd);
-  ctx.setHDFSCleanup(true);
-  tree = pd.parse(cmd, ctx);
+  tree = pd.parse(cmd, conf).getTree();
   qTestUtil.analyzeAST(tree);
 } catch (Exception e) {
   return false;
diff --git a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java b/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java
index f7c21a0..3268015 100644
--- a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java
+++ b/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java
@@ -980,7 +980,7 @@ public class QTestUtil {
   }
 
   public ASTNode parseQuery(String tname) throws Exception {
-return pd.parse(qMap.get(tname));
+return pd.parse(qMap.get(tname)).getTree();
   }
 
   public List> analyzeAST(ASTNode ast) throws Exception {
diff --git a/parser/pom.xml b/parser/pom.xml
index 0edae27..41fee3b 100644
--- a/parser/pom.xml
+++ b/parser/pom.xml
@@ -56,6 +56,12 @@
      <version>3.2.1</version>
      <scope>test</scope>
    </dependency>
+    <dependency>
+      <groupId>junit</groupId>
+      <artifactId>junit</artifactId>
+      <version>${junit.version}</version>
+      <scope>test</scope>
+    </dependency>
   
 
   
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java b/parser/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java
similarity index 88%
rename from ql/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java
rename to parser/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java
index 46f1ec0..121dbaf 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java
+++ b/parser/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java
@@ -32,8 +32,6 @@ import org.apache.hadoop.conf.Configuration;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import org.apache.hadoop.hive.ql.Context;
-
 /**
  * ParseDriver.
  *
@@ -93,14 +91,9 @@ public class ParseDriver {
 }
   };
 
-  public ASTNode parse(String command) throws ParseException {
+  public ParseResult parse(String command) throws ParseException {

[hive] branch master updated: put TestTxnHandler#allocateNextWriteIdRetriesAfterDetectingConflictingConcurrentInsert on ignore; raise some timeouts

2020-06-07 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 9d943c3  put TestTxnHandler#allocateNextWriteIdRetriesAfterDetectingConflictingConcurrentInsert on ignore; raise some timeouts
9d943c3 is described below

commit 9d943c31a6f9e9018e6a6c9eff57fe561b87c815
Author: Zoltan Haindrich 
AuthorDate: Sun Jun 7 09:45:51 2020 +

put TestTxnHandler#allocateNextWriteIdRetriesAfterDetectingConflictingConcurrentInsert on ignore; raise some timeouts
---
 .../hadoop/hive/metastore/txn/TestTxnHandler.java  |  1 +
 .../hive/ql/stats/TestStatsUpdaterThread.java  | 24 +++---
 2 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/ql/src/test/org/apache/hadoop/hive/metastore/txn/TestTxnHandler.java b/ql/src/test/org/apache/hadoop/hive/metastore/txn/TestTxnHandler.java
index 569605f..3d00bf7 100644
--- a/ql/src/test/org/apache/hadoop/hive/metastore/txn/TestTxnHandler.java
+++ b/ql/src/test/org/apache/hadoop/hive/metastore/txn/TestTxnHandler.java
@@ -1756,6 +1756,7 @@ public class TestTxnHandler {
   }
 
   @Test
+  @Ignore("unstable HIVE-23630")
  public void allocateNextWriteIdRetriesAfterDetectingConflictingConcurrentInsert() throws Exception {
 String dbName = "abc";
 String tableName = "def";
diff --git a/ql/src/test/org/apache/hadoop/hive/ql/stats/TestStatsUpdaterThread.java b/ql/src/test/org/apache/hadoop/hive/ql/stats/TestStatsUpdaterThread.java
index afe6070..771ff17 100644
--- a/ql/src/test/org/apache/hadoop/hive/ql/stats/TestStatsUpdaterThread.java
+++ b/ql/src/test/org/apache/hadoop/hive/ql/stats/TestStatsUpdaterThread.java
@@ -102,7 +102,7 @@ public class TestStatsUpdaterThread {
 executeQuery("drop table simple_stats3");
   }
 
-  @Test(timeout=4)
+  @Test(timeout=8)
   public void testSimpleUpdateWithThreads() throws Exception {
 StatsUpdaterThread su = createUpdater();
 su.startWorkers();
@@ -119,7 +119,7 @@ public class TestStatsUpdaterThread {
 msClient.close();
   }
 
-  @Test(timeout=4)
+  @Test(timeout=8)
   public void testMultipleTables() throws Exception {
 StatsUpdaterThread su = createUpdater();
 IMetaStoreClient msClient = new HiveMetaStoreClient(hiveConf);
@@ -145,7 +145,7 @@ public class TestStatsUpdaterThread {
 msClient.close();
   }
 
-  @Test(timeout=8)
+  @Test(timeout=16)
   public void testTxnTable() throws Exception {
 StatsUpdaterThread su = createUpdater();
 IMetaStoreClient msClient = new HiveMetaStoreClient(hiveConf);
@@ -318,7 +318,7 @@ public class TestStatsUpdaterThread {
 msClient.close();
   }
 
-  @Test(timeout=4)
+  @Test(timeout=8)
   public void testExistingOnly() throws Exception {
 hiveConf.set(MetastoreConf.ConfVars.STATS_AUTO_UPDATE.getVarname(), 
"existing");
 StatsUpdaterThread su = createUpdater();
@@ -340,7 +340,7 @@ public class TestStatsUpdaterThread {
 msClient.close();
   }
 
-  @Test(timeout=8)
+  @Test(timeout=16)
   public void testQueueingWithThreads() throws Exception {
 final int PART_COUNT = 12;
 hiveConf.setInt(MetastoreConf.ConfVars.BATCH_RETRIEVE_MAX.getVarname(), 5);
@@ -371,7 +371,7 @@ public class TestStatsUpdaterThread {
 msClient.close();
   }
 
-  @Test(timeout=4)
+  @Test(timeout=8)
   public void testAllPartitions() throws Exception {
 final int PART_COUNT = 3;
 StatsUpdaterThread su = createUpdater();
@@ -394,7 +394,7 @@ public class TestStatsUpdaterThread {
 msClient.close();
   }
 
-  @Test(timeout=4)
+  @Test(timeout=8)
   public void testPartitionSubset() throws Exception {
 final int NONSTAT_PART_COUNT = 3;
 StatsUpdaterThread su = createUpdater();
@@ -429,7 +429,7 @@ public class TestStatsUpdaterThread {
 msClient.close();
   }
 
-  @Test(timeout=4)
+  @Test(timeout=8)
   public void testPartitionsWithDifferentColsAll() throws Exception {
 StatsUpdaterThread su = createUpdater();
 IMetaStoreClient msClient = new HiveMetaStoreClient(hiveConf);
@@ -458,7 +458,7 @@ public class TestStatsUpdaterThread {
   }
 
 
-  @Test(timeout=45000)
+  @Test(timeout=8)
   public void testPartitionsWithDifferentColsExistingOnly() throws Exception {
 hiveConf.set(MetastoreConf.ConfVars.STATS_AUTO_UPDATE.getVarname(), 
"existing");
 StatsUpdaterThread su = createUpdater();
@@ -494,7 +494,7 @@ public class TestStatsUpdaterThread {
 msClient.close();
   }
 
-  @Test(timeout=4)
+  @Test(timeout=8)
   public void testParallelOps() throws Exception {
 // Set high worker count so we get a longer queue.
 
hiveConf.setInt(MetastoreConf.ConfVars.STATS_AUTO_UPDATE_WORKER_COUNT.getVarname(), 4);
@@ -545,14 +545,14 @@ public class TestStatsUpdaterThread {
 
   // A table w

[hive] branch master updated: HIVE-23590: Close stale PRs automatically (#1049)

2020-06-05 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 2420ba6  HIVE-23590: Close stale PRs automatically (#1049)
2420ba6 is described below

commit 2420ba624f57eb6c15df8cfc3e4fb0ecbb89473e
Author: Zoltan Haindrich 
AuthorDate: Fri Jun 5 23:23:13 2020 +0200

HIVE-23590: Close stale PRs automatically (#1049)
---
 .asf.yaml   |  4 ++--
 .github/workflows/stale.yml | 22 ++
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/.asf.yaml b/.asf.yaml
index fca520f..64a28ff 100644
--- a/.asf.yaml
+++ b/.asf.yaml
@@ -31,8 +31,8 @@ github:
 projects: false
   enabled_merge_buttons:
 squash:  true
-merge:   true
-rebase:  true
+merge:   false
+rebase:  false
 notifications:
   commits:  commits@hive.apache.org
   issues:   git...@hive.apache.org
diff --git a/.github/workflows/stale.yml b/.github/workflows/stale.yml
new file mode 100644
index 000..a01246b
--- /dev/null
+++ b/.github/workflows/stale.yml
@@ -0,0 +1,22 @@
+name: "Close stale pull requests"
+on:
+  schedule:
+  - cron: "0 0 * * *"
+
+jobs:
+  stale:
+runs-on: ubuntu-latest
+steps:
+- uses: actions/stale@v3
+  with:
+repo-token: ${{ secrets.GITHUB_TOKEN }}
+stale-pr-message: This pull request has been automatically marked as
+  stale because it has not had recent activity. It will be closed if
+  no further activity occurs.
+
+  Feel free to reach out on the d...@hive.apache.org list if the patch
+  is in need of reviews.
+exempt-pr-labels: 'awaiting-approval,work-in-progress'
+stale-pr-label: stale
+days-before-stale: 60
+days-before-close: 7



[hive] branch master updated: HIVE-23626: Build failure is incorrectly reported as tests passed (#1065)

2020-06-05 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new ab07f9f  HIVE-23626: Build failure is incorrectly reported as tests passed (#1065)
ab07f9f is described below

commit ab07f9f412336e562f04d8d09f2ec553bda26631
Author: Zoltan Haindrich 
AuthorDate: Fri Jun 5 23:21:03 2020 +0200

HIVE-23626: Build failure is incorrectly reported as tests passed (#1065)
---
 Jenkinsfile | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 65cc65e..5126969 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -113,6 +113,7 @@ spec:
 }
 
 def jobWrappers(closure) {
+  def finalLabel="FAILURE";
   try {
 // allocate 1 precommit token for the execution
 lock(label:'hive-precommit', quantity:1, variable: 'LOCKED_RESOURCE')  {
@@ -121,8 +122,9 @@ def jobWrappers(closure) {
 closure()
   }
 }
+finalLabel=currentBuild.currentResult
   } finally {
-setPrLabel(currentBuild.currentResult)
+setPrLabel(finalLabel)
   }
 }
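[Editor's note] The fix above works because `currentBuild.currentResult` is only trustworthy after the closure has returned; by capturing it into `finalLabel` before the `finally` block, an abrupt exit (exception, abort) falls through to the pessimistic `FAILURE` default. The same pattern can be sketched outside Jenkins as plain Java (hypothetical names, not the pipeline API):

```java
public class FinalLabelDemo {
    // Mirrors the Jenkinsfile fix: start from a pessimistic default and only
    // upgrade it when the body completes without throwing, so the caller
    // reports FAILURE for any abrupt exit.
    static String run(Runnable body) {
        String finalLabel = "FAILURE";
        try {
            body.run();
            finalLabel = "SUCCESS"; // reached only on normal completion
        } catch (RuntimeException e) {
            // swallowed here for the demo; Jenkins would let it propagate
        }
        return finalLabel; // stands in for setPrLabel(finalLabel)
    }

    public static void main(String[] args) {
        System.out.println(run(() -> {}));                                      // SUCCESS
        System.out.println(run(() -> { throw new RuntimeException("boom"); })); // FAILURE
    }
}
```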
 



[hive] branch master updated: HIVE-22621: disable orc_merge9

2020-06-05 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new e52eaf6  HIVE-22621: disable orc_merge9
e52eaf6 is described below

commit e52eaf6750691d24719a6778cc68c38ea15cea88
Author: Zoltan Haindrich 
AuthorDate: Fri Jun 5 13:49:39 2020 +

HIVE-22621: disable orc_merge9
---
 ql/src/test/queries/clientpositive/orc_merge9.q | 1 +
 1 file changed, 1 insertion(+)

diff --git a/ql/src/test/queries/clientpositive/orc_merge9.q b/ql/src/test/queries/clientpositive/orc_merge9.q
index a662737..4ffbabf 100644
--- a/ql/src/test/queries/clientpositive/orc_merge9.q
+++ b/ql/src/test/queries/clientpositive/orc_merge9.q
@@ -1,3 +1,4 @@
+--! qt:disabled:Found 1/2 error HIVE-23622
 --! qt:dataset:alltypesorc
 
 set hive.vectorized.execution.enabled=false;



[hive] branch master updated: HIVE-23607: Permission Issue: Create view on another view succeeds but alter view fails (#1058)

2020-06-05 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 5d932b5  HIVE-23607: Permission Issue: Create view on another view succeeds but alter view fails (#1058)
5d932b5 is described below

commit 5d932b50f1deee723af8e7c5638be754ae9af045
Author: Naresh P R 
AuthorDate: Fri Jun 5 01:24:13 2020 -0700

HIVE-23607: Permission Issue: Create view on another view succeeds but alter view fails (#1058)
---
 .../hadoop/hive/ql/parse/SemanticAnalyzer.java |  2 +-
 .../org/apache/hadoop/hive/ql/plan/PlanUtils.java  |  2 +-
 .../apache/hadoop/hive/ql/plan/TestViewEntity.java | 26 ++
 3 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java b/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
index 8238a2a..68a43d7 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
@@ -2238,7 +2238,7 @@ public class SemanticAnalyzer extends BaseSemanticAnalyzer {
       // Temporary tables created during the execution are not the input sources
       if (!PlanUtils.isValuesTempTable(alias)) {
         PlanUtils.addInput(inputs,
-            new ReadEntity(tab, parentViewInfo, parentViewInfo == null),mergeIsDirect);
+            new ReadEntity(tab, parentViewInfo, parentViewInfo == null), mergeIsDirect);
   }
 }
 
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/plan/PlanUtils.java b/ql/src/java/org/apache/hadoop/hive/ql/plan/PlanUtils.java
index 2fb452b..fd3918a 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/plan/PlanUtils.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/plan/PlanUtils.java
@@ -1172,7 +1172,7 @@ public final class PlanUtils {
 
      // Adds tables only for create view (PPD filter can be appended by outer query)
      Table table = topOp.getConf().getTableMetadata();
-      PlanUtils.addInput(inputs, new ReadEntity(table, parentViewInfo));
+      PlanUtils.addInput(inputs, new ReadEntity(table, parentViewInfo, parentViewInfo == null));
 }
   }
 
diff --git a/ql/src/test/org/apache/hadoop/hive/ql/plan/TestViewEntity.java b/ql/src/test/org/apache/hadoop/hive/ql/plan/TestViewEntity.java
index cbf1c83..d3a3cd5 100644
--- a/ql/src/test/org/apache/hadoop/hive/ql/plan/TestViewEntity.java
+++ b/ql/src/test/org/apache/hadoop/hive/ql/plan/TestViewEntity.java
@@ -271,4 +271,30 @@ public class TestViewEntity {
 
   }
 
+  /**
+   * Verify create/alter view on another view's underlying table is always indirect
+   * direct and indirect inputs.
+   * @throws CommandProcessorException
+   */
+  @Test
+  public void alterView() throws CommandProcessorException {
+
+driver.run("create table test_table (id int)");
+driver.run("create view test_view as select * from test_table");
+
+
+driver.compile("create view test_view_1 as select * from test_view", true);
+assertEquals("default@test_view", CheckInputReadEntity.readEntities[0].getName());
+assertTrue("default@test_view", CheckInputReadEntity.readEntities[0].isDirect());
+assertEquals("default@test_table", CheckInputReadEntity.readEntities[1].getName());
+assertFalse("default@test_table", CheckInputReadEntity.readEntities[1].isDirect());
+
+driver.run("create view test_view_1 as select * from test_view");
+
+driver.compile("alter view test_view_1 as select * from test_view", true);
+assertEquals("default@test_view", CheckInputReadEntity.readEntities[0].getName());
+assertTrue("default@test_view", CheckInputReadEntity.readEntities[0].isDirect());
+assertEquals("default@test_table", CheckInputReadEntity.readEntities[1].getName());
+assertFalse("default@test_table", CheckInputReadEntity.readEntities[1].isDirect());
+  }
 }



[hive] branch master updated: HIVE-23587: Remove JODA Time From LlapServiceDriver (#1045)

2020-06-02 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new c98b6ee  HIVE-23587: Remove JODA Time From LlapServiceDriver (#1045)
c98b6ee is described below

commit c98b6ee37ef4c1472316e0c41af280eef3ec7d1f
Author: belugabehr <12578579+belugab...@users.noreply.github.com>
AuthorDate: Tue Jun 2 15:17:33 2020 -0400

HIVE-23587: Remove JODA Time From LlapServiceDriver (#1045)

Co-authored-by: David Mollitor 
---
 .../org/apache/hadoop/hive/llap/cli/service/LlapServiceDriver.java   | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/LlapServiceDriver.java b/llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/LlapServiceDriver.java
index fea8393..fe743b0 100644
--- a/llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/LlapServiceDriver.java
+++ b/llap-server/src/java/org/apache/hadoop/hive/llap/cli/service/LlapServiceDriver.java
@@ -37,7 +37,6 @@ import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.exceptions.YarnException;
 import org.apache.hadoop.yarn.service.client.ServiceClient;
 import org.apache.hadoop.yarn.service.utils.CoreFileSystem;
-import org.joda.time.DateTime;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -45,6 +44,8 @@ import java.io.File;
 import java.io.IOException;
 import java.net.URL;
 import java.nio.file.Paths;
+import java.time.Instant;
+import java.time.format.DateTimeFormatter;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.List;
@@ -297,7 +298,7 @@ public class LlapServiceDriver {
 int rc;
 String version = System.getenv("HIVE_VERSION");
 if (StringUtils.isEmpty(version)) {
-  version = DateTime.now().toString("ddMMM");
+  version = DateTimeFormatter.BASIC_ISO_DATE.format(Instant.now());
 }
 
 String outputDir = cl.getOutput();



[hive] branch master updated: HIVE-23404: Schedules in the past should be accepted (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)

2020-06-02 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 6efbfd6  HIVE-23404: Schedules in the past should be accepted (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)
6efbfd6 is described below

commit 6efbfd63e1b890cd99af30945c08f55ad0c3ed65
Author: Zoltan Haindrich 
AuthorDate: Tue Jun 2 21:13:08 2020 +0200

HIVE-23404: Schedules in the past should be accepted (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)
---
 common/src/java/org/apache/hadoop/hive/conf/Constants.java   | 6 ++
 .../hadoop/hive/ql/scheduled/ScheduledQueryExecutionService.java | 4 
 .../apache/hadoop/hive/ql/schq/TestScheduledQueryService.java| 3 ++-
 ql/src/test/queries/clientpositive/schq_past.q   | 9 +
 ql/src/test/results/clientpositive/llap/schq_past.q.out  | 8 
 .../main/java/org/apache/hadoop/hive/metastore/ObjectStore.java  | 6 --
 .../src/main/sql/derby/hive-schema-4.0.0.derby.sql   | 2 +-
 .../src/main/sql/derby/upgrade-3.2.0-to-4.0.0.derby.sql  | 2 +-
 8 files changed, 31 insertions(+), 9 deletions(-)

diff --git a/common/src/java/org/apache/hadoop/hive/conf/Constants.java b/common/src/java/org/apache/hadoop/hive/conf/Constants.java
index 7b2c234..a79be8d 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/Constants.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/Constants.java
@@ -77,4 +77,10 @@ public class Constants {
 
   /**  A named lock is acquired prior to executing the query; enabling to run queries in parallel which might interfere with eachother. */
   public static final String HIVE_QUERY_EXCLUSIVE_LOCK = "hive.query.exclusive.lock";
 
+
+  public static final String SCHEDULED_QUERY_NAMESPACE = "scheduled.query.namespace";
+  public static final String SCHEDULED_QUERY_SCHEDULENAME = "scheduled.query.schedulename";
+  public static final String SCHEDULED_QUERY_EXECUTIONID = "scheduled.query.executionid";
+
 }
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionService.java b/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionService.java
index ca12093..3cbaa60 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionService.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionService.java
@@ -223,6 +223,10 @@ public class ScheduledQueryExecutionService implements Closeable {
 HiveConf conf = new HiveConf(context.conf);
 conf.set(Constants.HIVE_QUERY_EXCLUSIVE_LOCK, lockNameFor(q.getScheduleKey()));
 conf.setVar(HiveConf.ConfVars.HIVE_AUTHENTICATOR_MANAGER, SessionStateUserAuthenticator.class.getName());
+conf.set(Constants.SCHEDULED_QUERY_NAMESPACE, q.getScheduleKey().getClusterNamespace());
+conf.set(Constants.SCHEDULED_QUERY_SCHEDULENAME, q.getScheduleKey().getScheduleName());
+conf.set(Constants.SCHEDULED_QUERY_USER, q.getUser());
+conf.set(Constants.SCHEDULED_QUERY_EXECUTIONID, Long.toString(q.getExecutionId()));
 conf.unset(HiveConf.ConfVars.HIVESESSIONID.varname);
 state = new SessionState(conf, q.getUser());
 state.setIsHiveServerQuery(true);
diff --git a/ql/src/test/org/apache/hadoop/hive/ql/schq/TestScheduledQueryService.java b/ql/src/test/org/apache/hadoop/hive/ql/schq/TestScheduledQueryService.java
index dd8da34..ebf37d1 100644
--- a/ql/src/test/org/apache/hadoop/hive/ql/schq/TestScheduledQueryService.java
+++ b/ql/src/test/org/apache/hadoop/hive/ql/schq/TestScheduledQueryService.java
@@ -90,7 +90,7 @@ public class TestScheduledQueryService {
  private int getNumRowsReturned(IDriver driver, String query) throws Exception {
 driver.run(query);
 FetchTask ft = driver.getFetchTask();
-List res = new ArrayList();
+List res = new ArrayList<>();
 if (ft == null) {
   return 0;
 }
@@ -117,6 +117,7 @@ public class TestScheduledQueryService {
   r.setExecutionId(id++);
   r.setQuery(stmt);
   r.setScheduleKey(new ScheduledQueryKey("sch1", getClusterNamespace()));
+  r.setUser("nobody");
   if (id == 1) {
 return r;
   } else {
diff --git a/ql/src/test/queries/clientpositive/schq_past.q b/ql/src/test/queries/clientpositive/schq_past.q
new file mode 100644
index 000..735ba03
--- /dev/null
+++ b/ql/src/test/queries/clientpositive/schq_past.q
@@ -0,0 +1,9 @@
+--! qt:authorizer
+--! qt:scheduledqueryservice
+
+set user.name=hive_admin_user;
+set role admin;
+
+-- defining a schedule in the past should be allowed
+create scheduled query ingest cron '0 0 0 1 * ? 2000' defined as select 1;

[hive] branch master updated: HIVE-22942: Replace PTest with an alternative

2020-05-29 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 8443e50  HIVE-22942: Replace PTest with an alternative
8443e50 is described below

commit 8443e50fdfa284531300f3ab283a7e4959dba623
Author: Zoltan Haindrich 
AuthorDate: Fri May 29 10:21:41 2020 +

HIVE-22942: Replace PTest with an alternative

Closes apache/hive#948
---
 Jenkinsfile| 193 +
 .../listener/TestDbNotificationListener.java   |   2 +
 .../parse/TestScheduledReplicationScenarios.java   |   2 +
 .../apache/hive/beeline/TestBeeLineWithArgs.java   |  10 +-
 .../org/apache/hive/jdbc/BaseJdbcWithMiniLlap.java |   4 +-
 .../org/apache/hive/jdbc/TestActivePassiveHA.java  |   4 +-
 .../hive/jdbc/TestJdbcGenericUDTFGetSplits.java|   2 +
 .../hive/jdbc/TestJdbcGenericUDTFGetSplits2.java   |   2 +
 .../apache/hive/jdbc/TestJdbcWithMiniLlapRow.java  |   1 +
 .../hive/jdbc/TestJdbcWithMiniLlapVectorArrow.java |   1 +
 .../hive/jdbc/TestJdbcWithServiceDiscovery.java|   2 +
 .../apache/hive/jdbc/TestNewGetSplitsFormat.java   |   1 +
 .../jdbc/TestNewGetSplitsFormatReturnPath.java |   8 +
 .../jdbc/TestTriggersTezSessionPoolManager.java|   2 +
 itests/qtest/pom.xml   |   2 -
 .../hive/kafka/TransactionalKafkaWriterTest.java   |   2 +
 .../hive/llap/registry/impl/TestSlotZnode.java |   4 +-
 .../llap/daemon/impl/TestTaskExecutorService.java  |  16 +-
 .../authorization_disallow_transform.q |   1 +
 ql/src/test/queries/clientnegative/masking_mv.q|   2 +-
 .../test/queries/clientnegative/strict_pruning.q   |   1 +
 .../test/queries/clientnegative/strict_pruning_2.q |   1 +
 .../clientpositive/authorization_show_grant.q  |   1 +
 .../druid_materialized_view_rewrite_ssb.q  |   1 +
 .../clientpositive/druidkafkamini_delimited.q  |   1 +
 .../clientpositive/merge_test_dummy_operator.q |   1 +
 .../clientpositive/results_cache_invalidation2.q   |   2 +-
 ...schema_evol_par_vec_table_dictionary_encoding.q |   2 +
 ...ma_evol_par_vec_table_non_dictionary_encoding.q |   1 +
 .../special_character_in_tabnames_1.q  |   1 +
 .../queries/clientpositive/stats_list_bucket.q |   1 +
 .../temp_table_multi_insert_partitioned.q  |   1 +
 .../llap/results_cache_invalidation2.q.out |   4 +-
 .../metastore/txn/TestAcidTxnCleanerService.java   |   2 +
 34 files changed, 259 insertions(+), 22 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
new file mode 100644
index 000..65cc65e
--- /dev/null
+++ b/Jenkinsfile
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+properties([
+// max 5 build/branch/day
+rateLimitBuilds(throttle: [count: 5, durationName: 'day', userBoost: true]),
+// do not run multiple testruns on the same branch
+disableConcurrentBuilds(),
+parameters([
+string(name: 'SPLIT', defaultValue: '20', description: 'Number of buckets to split tests into.'),
+string(name: 'OPTS', defaultValue: '', description: 'additional maven opts'),
+])
+])
+
+def setPrLabel(String prLabel) {
+  if (env.CHANGE_ID) {
+   def mapping=[
+"SUCCESS":"tests passed",
+"UNSTABLE":"tests unstable",
+"FAILURE":"tests failed",
+"PENDING":"tests pending",
+   ]
+   def newLabels = []
+   for( String l : pullRequest.labels )
+ newLabels.add(l)
+   for( String l : mapping.keySet() )
+ newLabels.remove(mapping[l])
+   newLabels.add(mapping[prLabel])
+   echo ('' +newLabels)
+   pullRequest.labels=newLabels
+  }
+}
+
+setPrLabel("PENDING");
+
+def executorNode(run) {
+  hdbPodTemplate {
+  node(POD_LABEL) {
+container('hdb') {
+  run()
+}
+}
+  }
+}
+
+def buildHive(args) {
+  configFileProvider([configFile(fileId: 'artifacto
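The `setPrLabel` step above swaps the PR's test-status label without touching unrelated labels. A plain-Java sketch of that same logic (class name and label strings mirror the pipeline's mapping; everything else is illustrative, not Jenkins API):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the Jenkinsfile's setPrLabel logic: drop any previously
// applied test-status label, then add the one matching the build result.
public class PrLabels {
    private static final Map<String, String> MAPPING = new LinkedHashMap<>();
    static {
        MAPPING.put("SUCCESS", "tests passed");
        MAPPING.put("UNSTABLE", "tests unstable");
        MAPPING.put("FAILURE", "tests failed");
        MAPPING.put("PENDING", "tests pending");
    }

    public static List<String> relabel(List<String> current, String result) {
        List<String> labels = new ArrayList<>(current);
        // remove stale status labels, keep unrelated ones (e.g. component tags)
        labels.removeAll(MAPPING.values());
        labels.add(MAPPING.get(result));
        return labels;
    }

    public static void main(String[] args) {
        System.out.println(relabel(List.of("pr-backport", "tests pending"), "SUCCESS"));
        // prints [pr-backport, tests passed]
    }
}
```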

[hive] branch master updated: HIVE-23434: Add option to rewrite PERCENTILE_DISC to sketch functions (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)

2020-05-21 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 716f1f9  HIVE-23434: Add option to rewrite PERCENTILE_DISC to sketch 
functions (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)
716f1f9 is described below

commit 716f1f9a945a9a11e6702754667660d27e0a5cf4
Author: Zoltan Haindrich 
AuthorDate: Fri May 22 06:54:20 2020 +

HIVE-23434: Add option to rewrite PERCENTILE_DISC to sketch functions 
(Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)

Signed-off-by: Zoltan Haindrich 
---
 .../java/org/apache/hadoop/hive/conf/HiveConf.java |   9 +-
 .../test/resources/testconfiguration.properties|   4 +-
 .../hadoop/hive/ql/exec/DataSketchesFunctions.java |  20 +-
 .../HiveRewriteCountDistinctToDataSketches.java| 175 --
 .../rules/HiveRewriteToDataSketchesRules.java  | 371 +
 .../hadoop/hive/ql/parse/CalcitePlanner.java   |  14 +-
 .../sketches_materialized_view_percentile_disc.q   |  54 +++
 ...rewrite.q => sketches_rewrite_count_distinct.q} |   0
 ...ewrite.q => sketches_rewrite_percentile_disc.q} |   9 +-
 ...etches_materialized_view_percentile_disc.q.out} | 280 
 .../llap/sketches_materialized_view_rollup2.q.out  |   8 +-
 .../llap/sketches_materialized_view_safety.q.out   |   2 +-
 ...q.out => sketches_rewrite_count_distinct.q.out} |   2 +-
 out => sketches_rewrite_percentile_disc.q.out} |  64 ++--
 14 files changed, 643 insertions(+), 369 deletions(-)

diff --git a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
index bd884a9..a00d907 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
@@ -2492,12 +2492,19 @@ public class HiveConf extends Configuration {
 
 HIVE_OPTIMIZE_BI_REWRITE_COUNTDISTINCT_ENABLED("hive.optimize.bi.rewrite.countdistinct.enabled",
 true,
 "Enables to rewrite COUNT(DISTINCT(X)) queries to be rewritten to use sketch functions."),
-
 HIVE_OPTIMIZE_BI_REWRITE_COUNT_DISTINCT_SKETCH(
 "hive.optimize.bi.rewrite.countdistinct.sketch", "hll",
 new StringSet("hll"),
 "Defines which sketch type to use when rewriting COUNT(DISTINCT(X)) expressions. "
 + "Distinct counting can be done with: hll"),
+HIVE_OPTIMIZE_BI_REWRITE_PERCENTILE_DISC_ENABLED("hive.optimize.bi.rewrite.percentile_disc.enabled",
+true,
+"Enables to rewrite PERCENTILE_DISC(X) queries to be rewritten to use sketch functions."),
+HIVE_OPTIMIZE_BI_REWRITE_PERCENTILE_DISC_SKETCH(
+"hive.optimize.bi.rewrite.percentile_disc.sketch", "kll",
+new StringSet("kll"),
+"Defines which sketch type to use when rewriting PERCENTILE_DISC expressions. Options: kll"),
+
 
 // Statistics
 HIVE_STATS_ESTIMATE_STATS("hive.stats.estimate", true,
diff --git a/itests/src/test/resources/testconfiguration.properties 
b/itests/src/test/resources/testconfiguration.properties
index e7c3e43..0d06d02 100644
--- a/itests/src/test/resources/testconfiguration.properties
+++ b/itests/src/test/resources/testconfiguration.properties
@@ -872,9 +872,11 @@ minillaplocal.query.files=\
   schq_ingest.q,\
   sketches_hll.q,\
   sketches_theta.q,\
-  sketches_rewrite.q,\
+  sketches_rewrite_count_distinct.q,\
+  sketches_rewrite_percentile_disc.q,\
   sketches_materialized_view_rollup.q,\
   sketches_materialized_view_rollup2.q,\
+  sketches_materialized_view_percentile_disc.q,\
   sketches_materialized_view_safety.q,\
   table_access_keys_stats.q,\
   temp_table_llap_partitioned.q,\
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java
index 8865380..cc48d5b 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java
@@ -65,7 +65,7 @@ public final class DataSketchesFunctions implements 
HiveUDFPlugin {
   private static final String GET_CDF = "cdf";
   private static final String GET_PMF = "pmf";
   private static final String GET_QUANTILES = "quantiles";
-  private static final String GET_QUANTILE = "quantile";
+  public static final String GET_QUANTILE = "quantile";
   private static final String GET_RANK = "rank";
   private static final String INTERSECT_SKETCH = "intersect";
   private static final String INTERSECT_SKETCH1 = "intersect_f";
@@ -109,7 +109,8 @@ public final class DataSketchesF
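The commit above rewrites PERCENTILE_DISC into KLL sketch functions, which approximate the exact result. As a reminder of the exact semantics being approximated (reference pseudocode in plain Java, not Hive code): PERCENTILE_DISC(p) returns the first value, in sort order, whose cumulative distribution reaches p.

```java
// Exact PERCENTILE_DISC over an already-sorted array; the KLL sketch
// rewrite trades this exact scan for a mergeable approximate sketch.
public class PercentileDisc {
    public static int percentileDisc(int[] sorted, double p) {
        int n = sorted.length;
        for (int i = 0; i < n; i++) {
            // cumulative fraction of rows at or before position i
            if ((i + 1) / (double) n >= p) {
                return sorted[i];
            }
        }
        return sorted[n - 1];
    }

    public static void main(String[] args) {
        int[] ids = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        System.out.println(percentileDisc(ids, 0.5)); // prints 5
    }
}
```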

[hive] branch master updated: HIVE-23314: Upgrade to Kudu 1.12 (Zoltan Haindrich reviewed by Miklos Gergely)

2020-05-17 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new c5c0767  HIVE-23314: Upgrade to Kudu 1.12 (Zoltan Haindrich reviewed 
by Miklos Gergely)
c5c0767 is described below

commit c5c0767b30002e649cf522db6331f8b74828c0b9
Author: Zoltan Haindrich 
AuthorDate: Sun May 17 19:55:19 2020 +

HIVE-23314: Upgrade to Kudu 1.12 (Zoltan Haindrich reviewed by Miklos 
Gergely)

Signed-off-by: Zoltan Haindrich 
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index 5d59539..a8513f6 100644
--- a/pom.xml
+++ b/pom.xml
@@ -182,7 +182,7 @@
 1.8
 4.11
 4.0.2
-1.10.0
+1.12.0
 
 0.9.3
 0.9.3-1



[hive] branch master updated (b53a62f -> 5c9fa2a)

2020-05-16 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from b53a62f  HIVE-23376 : Avoid repeated SHA computation in 
GenericUDTFGetSplits for hive-exec jar (Ramesh Kumar via Rajesh Balamohan)
 new b829a26  HIVE-23460: Add qoption to disable qtests (Zoltan Haindrich 
reviewed by László Bodor, Miklos Gergely)
 new d1286f2  HIVE-23396: Many fixes and improvements to stabilize tests 
(Zoltan Haindrich reviewed by Miklos Gergely)
 new 5c9fa2a  HIVE-23374: QueryDisplay must be threadsafe (Zoltan Haindrich 
reviewed by László Bodor)

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../src/test/queries/positive/accumulo_joins.q |  1 +
 .../metrics/metrics2/TestCodahaleMetrics.java  |  2 +-
 .../test/resources/testconfiguration.properties| 15 ---
 .../hadoop/hive/cli/control/AbstractCliConfig.java |  2 +-
 .../apache/hadoop/hive/cli/control/CliConfigs.java | 27 ---
 .../hive/cli/control/CoreAccumuloCliDriver.java|  3 +++
 .../hadoop/hive/cli/control/CoreCliDriver.java |  7 +++--
 .../hive/cli/control/CoreNegativeCliDriver.java|  3 +++
 .../hadoop/hive/cli/control/CorePerfCliDriver.java |  3 +++
 .../java/org/apache/hadoop/hive/ql/QTestUtil.java  |  2 ++
 ...ransactional.java => QTestDisabledHandler.java} | 31 ++
 .../org/apache/hadoop/hive/ql/QueryDisplay.java|  7 +++--
 .../hadoop/hive/metastore/txn/TestTxnHandler.java  |  6 ++---
 .../apache/hadoop/hive/ql/metadata/TestHive.java   | 25 +++--
 .../hadoop/hive/ql/metadata/TestHiveRemote.java| 24 -
 .../ql/parse/TestReplicationSemanticAnalyzer.java  | 13 -
 .../clientnegative/authorization_uri_import.q  |  1 +
 .../queries/clientpositive/bucket_map_join_tez1.q  |  2 ++
 ql/src/test/queries/clientpositive/cbo_rp_insert.q |  1 +
 .../test/queries/clientpositive/cbo_rp_lineage2.q  |  1 +
 .../queries/clientpositive/cbo_rp_subq_exists.q|  1 +
 .../test/queries/clientpositive/cbo_rp_subq_in.q   |  1 +
 .../queries/clientpositive/cbo_rp_subq_not_in.q|  1 +
 .../test/queries/clientpositive/cbo_subq_not_in.q  |  1 +
 .../test/queries/clientpositive/constprog_cast.q   |  2 ++
 .../queries/clientpositive/druid_timestamptz.q |  2 ++
 .../test/queries/clientpositive/druidmini_joins.q  |  1 +
 .../queries/clientpositive/druidmini_masking.q |  2 ++
 .../test/queries/clientpositive/fouter_join_ppr.q  |  1 +
 ql/src/test/queries/clientpositive/input31.q   |  6 +
 .../test/queries/clientpositive/load_dyn_part3.q   |  1 +
 .../clientpositive/multi_insert_partitioned.q  |  1 +
 .../test/queries/clientpositive/perf/cbo_query44.q |  1 +
 .../test/queries/clientpositive/perf/cbo_query45.q |  1 +
 .../test/queries/clientpositive/perf/cbo_query67.q |  1 +
 .../test/queries/clientpositive/perf/cbo_query70.q |  1 +
 .../test/queries/clientpositive/perf/cbo_query86.q |  1 +
 ql/src/test/queries/clientpositive/rcfile_merge1.q |  1 +
 .../clientpositive/rfc5424_parser_file_pruning.q   |  1 +
 .../clientpositive/root_dir_external_table.q   |  1 +
 ql/src/test/queries/clientpositive/sample2.q   |  1 +
 ql/src/test/queries/clientpositive/sample4.q   |  1 +
 .../clientpositive/schema_evol_orc_acidvec_part.q  |  1 +
 .../schema_evol_orc_vec_part_llap_io.q |  1 +
 .../queries/clientpositive/stats_filemetadata.q|  1 +
 ql/src/test/queries/clientpositive/tez_smb_1.q |  2 ++
 .../queries/clientpositive/udaf_context_ngrams.q   |  2 ++
 ql/src/test/queries/clientpositive/udaf_corr.q |  2 ++
 .../clientpositive/udaf_histogram_numeric.q|  2 ++
 .../test/queries/clientpositive/union_fast_stats.q |  2 ++
 ql/src/test/queries/clientpositive/union_stats.q   |  1 +
 .../queries/clientpositive/vector_groupby_reduce.q |  2 ++
 .../cli/session/TestSessionManagerMetrics.java |  2 +-
 standalone-metastore/metastore-server/pom.xml  |  2 +-
 .../hadoop/hive/metastore/HiveMetaStore.java   |  6 +++--
 .../hadoop/hive/metastore/MetaStoreTestUtils.java  |  2 +-
 .../hadoop/hive/metastore/TestMarkPartition.java   |  2 +-
 .../hive/metastore/client/MetaStoreClientTest.java | 13 +++--
 58 files changed, 128 insertions(+), 121 deletions(-)
 copy 
itests/util/src/main/java/org/apache/hadoop/hive/ql/qoption/{QTestTransactional.java
 => QTestDisabledHandler.java} (65%)



[hive] 02/03: HIVE-23396: Many fixes and improvements to stabilize tests (Zoltan Haindrich reviewed by Miklos Gergely)

2020-05-16 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit d1286f2f71f0da8fbede250a9dfc2a0a17c33f3f
Author: Zoltan Haindrich 
AuthorDate: Sun May 17 06:15:16 2020 +

HIVE-23396: Many fixes and improvements to stabilize tests (Zoltan 
Haindrich reviewed by Miklos Gergely)

Signed-off-by: Zoltan Haindrich 
---
 .../metrics/metrics2/TestCodahaleMetrics.java  |  2 +-
 .../hadoop/hive/metastore/txn/TestTxnHandler.java  |  6 +++---
 .../apache/hadoop/hive/ql/metadata/TestHive.java   | 25 --
 .../hadoop/hive/ql/metadata/TestHiveRemote.java| 24 +++--
 .../ql/parse/TestReplicationSemanticAnalyzer.java  | 13 +--
 .../cli/session/TestSessionManagerMetrics.java |  2 +-
 standalone-metastore/metastore-server/pom.xml  |  2 +-
 .../hadoop/hive/metastore/HiveMetaStore.java   |  6 --
 .../hadoop/hive/metastore/MetaStoreTestUtils.java  |  2 +-
 .../hadoop/hive/metastore/TestMarkPartition.java   |  2 +-
 .../hive/metastore/client/MetaStoreClientTest.java | 13 ---
 11 files changed, 47 insertions(+), 50 deletions(-)

diff --git 
a/common/src/test/org/apache/hadoop/hive/common/metrics/metrics2/TestCodahaleMetrics.java
 
b/common/src/test/org/apache/hadoop/hive/common/metrics/metrics2/TestCodahaleMetrics.java
index 9c4e475..85ded7e 100644
--- 
a/common/src/test/org/apache/hadoop/hive/common/metrics/metrics2/TestCodahaleMetrics.java
+++ 
b/common/src/test/org/apache/hadoop/hive/common/metrics/metrics2/TestCodahaleMetrics.java
@@ -55,7 +55,7 @@ public class TestCodahaleMetrics {
   private static final Path tmpDir = 
Paths.get(System.getProperty("java.io.tmpdir"));
   private static File jsonReportFile;
   private static MetricRegistry metricRegistry;
-  private static final long REPORT_INTERVAL_MS = 100;
+  private static final long REPORT_INTERVAL_MS = 2000;
 
   @BeforeClass
   public static void setUp() throws Exception {
diff --git 
a/ql/src/test/org/apache/hadoop/hive/metastore/txn/TestTxnHandler.java 
b/ql/src/test/org/apache/hadoop/hive/metastore/txn/TestTxnHandler.java
index 868da0c..f65619e 100644
--- a/ql/src/test/org/apache/hadoop/hive/metastore/txn/TestTxnHandler.java
+++ b/ql/src/test/org/apache/hadoop/hive/metastore/txn/TestTxnHandler.java
@@ -1203,7 +1203,7 @@ public class TestTxnHandler {
   LockRequest req = new LockRequest(components, "me", "localhost");
   LockResponse res = txnHandler.lock(req);
   assertTrue(res.getState() == LockState.ACQUIRED);
-  Thread.sleep(10);
+  Thread.sleep(1000);
   txnHandler.performTimeOuts();
   txnHandler.checkLock(new CheckLockRequest(res.getLockid()));
   fail("Told there was a lock, when it should have timed out.");
@@ -1218,7 +1218,7 @@ public class TestTxnHandler {
 long timeout = txnHandler.setTimeout(1);
 try {
   txnHandler.openTxns(new OpenTxnRequest(503, "me", "localhost"));
-  Thread.sleep(10);
+  Thread.sleep(1000);
   txnHandler.performTimeOuts();
   GetOpenTxnsInfoResponse rsp = txnHandler.getOpenTxnsInfo();
   int numAborted = 0;
@@ -1241,7 +1241,7 @@ public class TestTxnHandler {
   request.setReplPolicy("default.*");
   request.setReplSrcTxnIds(response.getTxn_ids());
   OpenTxnsResponse responseRepl = txnHandler.openTxns(request);
-  Thread.sleep(10);
+  Thread.sleep(1000);
   txnHandler.performTimeOuts();
   GetOpenTxnsInfoResponse rsp = txnHandler.getOpenTxnsInfo();
   int numAborted = 0;
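The sleep bumps above (10 ms to 1000 ms) make the test wait clearly longer than the configured 1-second timeout before `performTimeOuts()` runs; sleeping for less than the timeout made these tests flaky. A hypothetical helper expressing that rule of thumb (not Hive code): wait at least twice the timeout, and never less than the timeout plus some scheduler slack.

```java
// Illustrative computation of a safe pre-assertion wait for a
// timeout-based test: 2x the timeout, floored at timeout + 500 ms.
public class TimeoutWait {
    public static long waitMillisFor(long timeoutMillis) {
        return Math.max(2 * timeoutMillis, timeoutMillis + 500);
    }

    public static void main(String[] args) {
        System.out.println(waitMillisFor(1000)); // prints 2000
    }
}
```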
diff --git a/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHive.java 
b/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHive.java
index 5626dbe..49097a0 100755
--- a/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHive.java
+++ b/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHive.java
@@ -38,9 +38,7 @@ import org.apache.hadoop.hive.metastore.PartitionDropOptions;
 import org.apache.hadoop.hive.metastore.Warehouse;
 import org.apache.hadoop.hive.metastore.api.Database;
 import org.apache.hadoop.hive.metastore.api.FieldSchema;
-import org.apache.hadoop.hive.metastore.api.InvalidOperationException;
 import org.apache.hadoop.hive.metastore.api.MetaException;
-import org.apache.hadoop.hive.metastore.api.WMFullResourcePlan;
 import org.apache.hadoop.hive.metastore.api.WMNullableResourcePlan;
 import org.apache.hadoop.hive.metastore.api.WMPool;
 import org.apache.hadoop.hive.metastore.api.WMResourcePlan;
@@ -67,20 +65,15 @@ import org.apache.logging.log4j.core.config.Configuration;
 import org.apache.logging.log4j.core.config.LoggerConfig;
 import org.apache.thrift.protocol.TBinaryProtocol;
 import org.junit.Assert;
-import org.slf4j.LoggerFactory;
-
 import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.Lists;
-
-
 import static org.junit.Assert.

[hive] 01/03: HIVE-23460: Add qoption to disable qtests (Zoltan Haindrich reviewed by László Bodor, Miklos Gergely)

2020-05-16 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit b829a26f98368fee39c750034b85feecd85d0e0a
Author: Zoltan Haindrich 
AuthorDate: Sun May 17 06:14:42 2020 +

HIVE-23460: Add qoption to disable qtests (Zoltan Haindrich reviewed by 
László Bodor, Miklos Gergely)

Signed-off-by: Zoltan Haindrich 
---
 .../src/test/queries/positive/accumulo_joins.q |  1 +
 .../test/resources/testconfiguration.properties| 15 --
 .../hadoop/hive/cli/control/AbstractCliConfig.java |  2 +-
 .../apache/hadoop/hive/cli/control/CliConfigs.java | 27 ---
 .../hive/cli/control/CoreAccumuloCliDriver.java|  3 ++
 .../hadoop/hive/cli/control/CoreCliDriver.java |  7 ++-
 .../hive/cli/control/CoreNegativeCliDriver.java|  3 ++
 .../hadoop/hive/cli/control/CorePerfCliDriver.java |  3 ++
 .../java/org/apache/hadoop/hive/ql/QTestUtil.java  |  2 +
 .../hive/ql/qoption/QTestDisabledHandler.java  | 54 ++
 .../clientnegative/authorization_uri_import.q  |  1 +
 .../queries/clientpositive/bucket_map_join_tez1.q  |  2 +
 ql/src/test/queries/clientpositive/cbo_rp_insert.q |  1 +
 .../test/queries/clientpositive/cbo_rp_lineage2.q  |  1 +
 .../queries/clientpositive/cbo_rp_subq_exists.q|  1 +
 .../test/queries/clientpositive/cbo_rp_subq_in.q   |  1 +
 .../queries/clientpositive/cbo_rp_subq_not_in.q|  1 +
 .../test/queries/clientpositive/cbo_subq_not_in.q  |  1 +
 .../test/queries/clientpositive/constprog_cast.q   |  2 +
 .../queries/clientpositive/druid_timestamptz.q |  2 +
 .../test/queries/clientpositive/druidmini_joins.q  |  1 +
 .../queries/clientpositive/druidmini_masking.q |  2 +
 .../test/queries/clientpositive/fouter_join_ppr.q  |  1 +
 ql/src/test/queries/clientpositive/input31.q   |  6 +--
 .../test/queries/clientpositive/load_dyn_part3.q   |  1 +
 .../clientpositive/multi_insert_partitioned.q  |  1 +
 .../test/queries/clientpositive/perf/cbo_query44.q |  1 +
 .../test/queries/clientpositive/perf/cbo_query45.q |  1 +
 .../test/queries/clientpositive/perf/cbo_query67.q |  1 +
 .../test/queries/clientpositive/perf/cbo_query70.q |  1 +
 .../test/queries/clientpositive/perf/cbo_query86.q |  1 +
 ql/src/test/queries/clientpositive/rcfile_merge1.q |  1 +
 .../clientpositive/rfc5424_parser_file_pruning.q   |  1 +
 .../clientpositive/root_dir_external_table.q   |  1 +
 ql/src/test/queries/clientpositive/sample2.q   |  1 +
 ql/src/test/queries/clientpositive/sample4.q   |  1 +
 .../clientpositive/schema_evol_orc_acidvec_part.q  |  1 +
 .../schema_evol_orc_vec_part_llap_io.q |  1 +
 .../queries/clientpositive/stats_filemetadata.q|  1 +
 ql/src/test/queries/clientpositive/tez_smb_1.q |  2 +
 .../queries/clientpositive/udaf_context_ngrams.q   |  2 +
 ql/src/test/queries/clientpositive/udaf_corr.q |  2 +
 .../clientpositive/udaf_histogram_numeric.q|  2 +
 .../test/queries/clientpositive/union_fast_stats.q |  2 +
 ql/src/test/queries/clientpositive/union_stats.q   |  1 +
 .../queries/clientpositive/vector_groupby_reduce.q |  2 +
 46 files changed, 118 insertions(+), 50 deletions(-)

diff --git a/accumulo-handler/src/test/queries/positive/accumulo_joins.q 
b/accumulo-handler/src/test/queries/positive/accumulo_joins.q
index 9d93029..05f1b0b 100644
--- a/accumulo-handler/src/test/queries/positive/accumulo_joins.q
+++ b/accumulo-handler/src/test/queries/positive/accumulo_joins.q
@@ -1,3 +1,4 @@
+--! qt:disabled:disabled for a long time now...dont know why
 --! qt:dataset:src
 DROP TABLE users;
 DROP TABLE states;
diff --git a/itests/src/test/resources/testconfiguration.properties 
b/itests/src/test/resources/testconfiguration.properties
index b48889e..2ad66a6 100644
--- a/itests/src/test/resources/testconfiguration.properties
+++ b/itests/src/test/resources/testconfiguration.properties
@@ -3,21 +3,6 @@
 # DO NOT USE minimr, as MR is deprecated and MinimrCliDriver will be removed
 minimr.query.files=doesnotexist.q\
 
-# Tests that are not enabled for CLI Driver
-disabled.query.files=cbo_rp_subq_in.q,\
-  cbo_rp_subq_not_in.q,\
-  cbo_rp_subq_exists.q,\
-  rcfile_merge1.q,\
-  stats_filemetadata.q,\
-  cbo_rp_insert.q,\
-  cbo_rp_lineage2.q,\
-  union_stats.q,\
-  sample2.q,\
-  sample4.q,\
-  root_dir_external_table.q,\
-  input31.q
-
-
 # NOTE: Add tests to minitez only if it is very
 # specific to tez and cannot be added to minillap.
 minitez.query.files.shared=delete_orig_table.q,\
diff --git 
a/itests/util/src/main/java/org/apache/hadoop/hive/cli/control/AbstractCliConfig.java
 
b/itests/util/src/main/java/org/apache/hadoop/hive/cli/control/AbstractCliConfig.java
index 353a4aa..060f9b7 100644
--- 
a/itests/util/src/main/java/org/apache/hadoop/hive/cli/control/AbstractCliConfig.java
+++ 
b/itests/util/src/main/java/org/apache/hadoop/hive/cli/control/AbstractCliConfig.java
@@ -130,7 +130,7

[hive] 03/03: HIVE-23374: QueryDisplay must be threadsafe (Zoltan Haindrich reviewed by László Bodor)

2020-05-16 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 5c9fa2acd973c6d7cedeaf82e969714deeb601a6
Author: Zoltan Haindrich 
AuthorDate: Sun May 17 06:15:33 2020 +

HIVE-23374: QueryDisplay must be threadsafe (Zoltan Haindrich reviewed by 
László Bodor)

Signed-off-by: Zoltan Haindrich 
---
 ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java 
b/ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java
index 1aa5be3..0dafb00 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java
@@ -23,7 +23,6 @@ import org.apache.hadoop.hive.ql.exec.TaskResult;
 import org.apache.hadoop.hive.ql.plan.api.StageType;
 
 import java.io.IOException;
-import java.io.Serializable;
 import java.util.*;
 
 import org.apache.hadoop.mapred.Counters;
@@ -54,7 +53,7 @@ public class QueryDisplay {
 
   private final LinkedHashMap tasks = new 
LinkedHashMap();
 
-  public void updateTaskStatus(Task tTask) {
+  public synchronized void updateTaskStatus(Task tTask) {
 if (!tasks.containsKey(tTask.getId())) {
   tasks.put(tTask.getId(), new TaskDisplay(tTask));
 }
@@ -374,11 +373,11 @@ public class QueryDisplay {
 this.queryId = queryId;
   }
 
-  private String returnStringOrUnknown(String s) {
+  private static String returnStringOrUnknown(String s) {
 return s == null ? "UNKNOWN" : s;
   }
 
-  public long getQueryStartTime() {
+  public synchronized long getQueryStartTime() {
 return queryStartTime;
   }
 }
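The commit above synchronizes both the writer (`updateTaskStatus`) and the reader (`getQueryStartTime`) because a `LinkedHashMap` mutated by task threads while another thread reads it is not safe. A minimal sketch of the same pattern (class and fields simplified, not the actual QueryDisplay API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// The map is only touched under the object's monitor, so concurrent
// status updates cannot corrupt it while another thread reads it.
public class SafeDisplay {
    private final Map<String, String> tasks = new LinkedHashMap<>();

    public synchronized void updateTaskStatus(String id, String status) {
        tasks.put(id, status);
    }

    public synchronized int taskCount() {
        return tasks.size();
    }

    // Hammer the map from several threads and return the final size.
    public static int fillConcurrently(int threads, int perThread) {
        SafeDisplay d = new SafeDisplay();
        Thread[] ts = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int base = t * perThread;
            ts[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++) {
                    d.updateTaskStatus("task-" + (base + i), "RUNNING");
                }
            });
            ts[t].start();
        }
        for (Thread th : ts) {
            try {
                th.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return d.taskCount();
    }

    public static void main(String[] args) {
        System.out.println(fillConcurrently(4, 100)); // prints 400
    }
}
```

Without the `synchronized` keywords, the same run could lose updates or throw from a corrupted map.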



[hive] branch master updated (134f3b2 -> 39faad1)

2020-05-09 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 134f3b2  HIVE-23359 'show tables like' support for SQL wildcard characters (% and _) ADDENDUM - remove unused imports (Miklos Gergely, reviewed by Zoltan Haindrich)
 new 220351f  HIVE-23369: schq_ingest may run twice during a test execution 
(Zoltan Haindrich reviewed by Miklos Gergely)
 new 39faad1  HIVE-23368: MV rebuild should produce the same view as the 
one configured at creation time (Zoltan Haindrich reviewed by Jesus Camacho 
Rodriguez)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../test/resources/testconfiguration.properties|   1 +
 .../hadoop/hive/ql/parse/CalcitePlanner.java   |  12 +-
 ql/src/test/queries/clientpositive/schq_analyze.q  |   4 +-
 ql/src/test/queries/clientpositive/schq_ingest.q   |   4 +-
 .../queries/clientpositive/schq_materialized.q |   4 +-
 .../sketches_materialized_view_safety.q|  38 ++
 .../results/clientpositive/llap/schq_analyze.q.out |   4 +-
 .../results/clientpositive/llap/schq_ingest.q.out  |   4 +-
 .../clientpositive/llap/schq_materialized.q.out|   6 +-
 .../llap/sketches_materialized_view_safety.q.out   | 519 +
 10 files changed, 582 insertions(+), 14 deletions(-)
 create mode 100644 
ql/src/test/queries/clientpositive/sketches_materialized_view_safety.q
 create mode 100644 
ql/src/test/results/clientpositive/llap/sketches_materialized_view_safety.q.out



[hive] 02/02: HIVE-23368: MV rebuild should produce the same view as the one configured at creation time (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)

2020-05-09 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 39faad1dae7316b1f29b6c5589a3b8edc54092f7
Author: Zoltan Haindrich 
AuthorDate: Sat May 9 08:50:22 2020 +

HIVE-23368: MV rebuild should produce the same view as the one configured 
at creation time (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)

Signed-off-by: Zoltan Haindrich 
---
 .../test/resources/testconfiguration.properties|   1 +
 .../hadoop/hive/ql/parse/CalcitePlanner.java   |  12 +-
 .../sketches_materialized_view_safety.q|  38 ++
 .../llap/sketches_materialized_view_safety.q.out   | 519 +
 4 files changed, 569 insertions(+), 1 deletion(-)

diff --git a/itests/src/test/resources/testconfiguration.properties 
b/itests/src/test/resources/testconfiguration.properties
index 2036f29..cf3bc5c 100644
--- a/itests/src/test/resources/testconfiguration.properties
+++ b/itests/src/test/resources/testconfiguration.properties
@@ -830,6 +830,7 @@ minillaplocal.query.files=\
   sketches_rewrite.q,\
   sketches_materialized_view_rollup.q,\
   sketches_materialized_view_rollup2.q,\
+  sketches_materialized_view_safety.q,\
   table_access_keys_stats.q,\
   temp_table_llap_partitioned.q,\
   tez_bmj_schema_evolution.q,\
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java 
b/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
index bf08306..085de48 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
@@ -1970,7 +1970,8 @@ public class CalcitePlanner extends SemanticAnalyzer {
   HiveExceptRewriteRule.INSTANCE);
 
   //1. Distinct aggregate rewrite
-  if (conf.getBoolVar(ConfVars.HIVE_OPTIMIZE_BI_ENABLED)) {
+
+  if (!isMaterializedViewMaintenance() && conf.getBoolVar(ConfVars.HIVE_OPTIMIZE_BI_ENABLED)) {
 // Rewrite to datasketches if enabled
 if (conf.getBoolVar(ConfVars.HIVE_OPTIMIZE_BI_REWRITE_COUNTDISTINCT_ENABLED)) {
   String sketchClass = conf.getVar(ConfVars.HIVE_OPTIMIZE_BI_REWRITE_COUNT_DISTINCT_SKETCH);
@@ -2106,6 +2107,15 @@ public class CalcitePlanner extends SemanticAnalyzer {
   return basePlan;
 }
 
+/**
+ * Returns true if MV is being loaded, constructed or being rebuilt.
+ */
+private boolean isMaterializedViewMaintenance() {
+  return mvRebuildMode != MaterializationRebuildMode.NONE
+  || ctx.isLoadingMaterializedView()
+  || getQB().isMaterializedView();
+}
+
 private RelNode applyMaterializedViewRewriting(RelOptPlanner planner, 
RelNode basePlan,
 RelMetadataProvider mdProvider, RexExecutor executorProvider) {
   final RelOptCluster optCluster = basePlan.getCluster();
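The guard added above skips the BI sketch rewrites whenever the materialized view itself is being created, loaded, or rebuilt, so a rebuild reproduces the plan chosen at creation time. Reduced to booleans (the real check reads `mvRebuildMode`, the context, and the QB; names here are illustrative):

```java
// BI rewrites apply only when the MV is not under maintenance.
public class MvRewriteGuard {
    public static boolean applyBiRewrites(boolean biEnabled, boolean rebuilding,
                                          boolean loadingMv, boolean creatingMv) {
        boolean mvMaintenance = rebuilding || loadingMv || creatingMv;
        return !mvMaintenance && biEnabled;
    }

    public static void main(String[] args) {
        System.out.println(applyBiRewrites(true, false, false, false)); // true
        System.out.println(applyBiRewrites(true, true, false, false));  // false
    }
}
```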
diff --git 
a/ql/src/test/queries/clientpositive/sketches_materialized_view_safety.q 
b/ql/src/test/queries/clientpositive/sketches_materialized_view_safety.q
new file mode 100644
index 000..620cbb7
--- /dev/null
+++ b/ql/src/test/queries/clientpositive/sketches_materialized_view_safety.q
@@ -0,0 +1,38 @@
+--! qt:transactional
+set hive.fetch.task.conversion=none;
+set hive.optimize.bi.enabled=true;
+
+create table sketch_input (id int, category char(1))
+STORED AS ORC
+TBLPROPERTIES ('transactional'='true');
+
+insert into table sketch_input values
+  (1,'a'),(1, 'a'), (2, 'a'), (3, 'a'), (4, 'a'), (5, 'a'), (6, 'a'), (7, 
'a'), (8, 'a'), (9, 'a'), (10, 'a'),
+  (6,'b'),(6, 'b'), (7, 'b'), (8, 'b'), (9, 'b'), (10, 'b'), (11, 'b'), (12, 
'b'), (13, 'b'), (14, 'b'), (15, 'b')
+; 
+
+explain
+create  materialized view mv_1 as
+  select 'no-rewrite-may-happen',category, count(distinct id) from 
sketch_input group by category;
+create  materialized view mv_1 as
+  select 'no-rewrite-may-happen',category, count(distinct id) from 
sketch_input group by category;
+
+insert into table sketch_input values
+  (1,'a'),(1, 'a'), (2, 'a'), (3, 'a'), (4, 'a'), (5, 'a'), (6, 'a'), (7, 
'a'), (8, 'a'), (9, 'a'), (10, 'a'),
+  (6,'b'),(6, 'b'), (7, 'b'), (8, 'b'), (9, 'b'), (10, 'b'), (11, 'b'), (12, 
'b'), (13, 'b'), (14, 'b'), (15, 'b')
+;
+
+explain
+alter materialized view mv_1 rebuild;
+alter materialized view mv_1 rebuild;
+
+-- see if we use the mv
+explain
+select 'rewritten;mv not used',category, count(distinct id) from sketch_input 
group by category;
+select 

[hive] 01/02: HIVE-23369: schq_ingest may run twice during a test execution (Zoltan Haindrich reviewed by Miklos Gergely)

2020-05-09 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 220351f23c79bae8849f8c00cc59bb8ebb18b6ca
Author: Zoltan Haindrich 
AuthorDate: Sat May 9 08:50:07 2020 +

HIVE-23369: schq_ingest may run twice during a test execution (Zoltan 
Haindrich reviewed by Miklos Gergely)

Signed-off-by: Zoltan Haindrich 
---
 ql/src/test/queries/clientpositive/schq_analyze.q   | 4 ++--
 ql/src/test/queries/clientpositive/schq_ingest.q| 4 ++--
 ql/src/test/queries/clientpositive/schq_materialized.q  | 4 ++--
 ql/src/test/results/clientpositive/llap/schq_analyze.q.out  | 4 ++--
 ql/src/test/results/clientpositive/llap/schq_ingest.q.out   | 4 ++--
 ql/src/test/results/clientpositive/llap/schq_materialized.q.out | 6 +++---
 6 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/ql/src/test/queries/clientpositive/schq_analyze.q 
b/ql/src/test/queries/clientpositive/schq_analyze.q
index 246a215..7d8fa8b 100644
--- a/ql/src/test/queries/clientpositive/schq_analyze.q
+++ b/ql/src/test/queries/clientpositive/schq_analyze.q
@@ -16,8 +16,8 @@ insert into t values (1),(2),(3);
 -- basic stats show that the table has "0" rows
 desc formatted t;
 
--- create a schedule to compute stats
-create scheduled query t_analyze cron '0 */1 * * * ? *' as analyze table t 
compute statistics for columns;
+-- create a schedule to compute stats in the far future
+create scheduled query t_analyze cron '0 0 0 1 * ? 2030' as analyze table t 
compute statistics for columns;
 
 alter scheduled query t_analyze execute;
 
diff --git a/ql/src/test/queries/clientpositive/schq_ingest.q 
b/ql/src/test/queries/clientpositive/schq_ingest.q
index 8ffc722..2357e7e 100644
--- a/ql/src/test/queries/clientpositive/schq_ingest.q
+++ b/ql/src/test/queries/clientpositive/schq_ingest.q
@@ -26,8 +26,8 @@ join t_offset on id>=offset) s1
 insert into t select id,cnt where not first
 insert overwrite table t_offset select max(s1.id);
  
--- configure to run ingestion every 10 minutes
-create scheduled query ingest every 10 minutes defined as
+-- configure to run ingestion - in the far future
+create scheduled query ingest cron '0 0 0 1 * ? 2030' defined as
 from (select id==offset as first,* from s
 join t_offset on id>=offset) s1
 insert into t select id,cnt where not first
diff --git a/ql/src/test/queries/clientpositive/schq_materialized.q 
b/ql/src/test/queries/clientpositive/schq_materialized.q
index 46b725e..f629bdf 100644
--- a/ql/src/test/queries/clientpositive/schq_materialized.q
+++ b/ql/src/test/queries/clientpositive/schq_materialized.q
@@ -59,8 +59,8 @@ SELECT empid, deptname FROM emps
 JOIN depts ON (emps.deptno = depts.deptno)
 WHERE hire_date >= '2018-01-01';
 
--- create a schedule to rebuild mv
-create scheduled query d cron '0 0 * * * ? *' defined as 
+-- create a schedule to rebuild mv (in the far future)
+create scheduled query d cron '0 0 0 1 * ? 2030' defined as 
   alter materialized view mv1 rebuild;
 
 set hive.support.quoted.identifiers=none;
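The schedules above are pinned to year 2030 so they can only be triggered explicitly (via `execute`) and never fire on their own during a test run. A hypothetical check of that convention for Quartz-style cron strings (fields: sec min hour day-of-month month day-of-week, plus an optional year):

```java
// Returns true only when the cron string has an explicit literal year
// field strictly after the given year; wildcards count as "may fire".
public class FarFutureCron {
    public static boolean firesOnlyAfter(String cron, int year) {
        String[] fields = cron.trim().split("\\s+");
        if (fields.length < 7) {
            return false; // no year field: schedule may fire at any time
        }
        try {
            return Integer.parseInt(fields[6]) > year;
        } catch (NumberFormatException e) {
            return false; // wildcard or range in the year field
        }
    }

    public static void main(String[] args) {
        System.out.println(firesOnlyAfter("0 0 0 1 * ? 2030", 2020)); // true
        System.out.println(firesOnlyAfter("0 */1 * * * ? *", 2020));  // false
    }
}
```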
diff --git a/ql/src/test/results/clientpositive/llap/schq_analyze.q.out 
b/ql/src/test/results/clientpositive/llap/schq_analyze.q.out
index a083479..4824557 100644
--- a/ql/src/test/results/clientpositive/llap/schq_analyze.q.out
+++ b/ql/src/test/results/clientpositive/llap/schq_analyze.q.out
@@ -53,9 +53,9 @@ Bucket Columns:   []
 Sort Columns:  []   
 Storage Desc Params:
serialization.format1   
-PREHOOK: query: create scheduled query t_analyze cron '0 */1 * * * ? *' as 
analyze table t compute statistics for columns
+PREHOOK: query: create scheduled query t_analyze cron '0 0 0 1 * ? 2030' as 
analyze table t compute statistics for columns
 PREHOOK: type: CREATE SCHEDULED QUERY
-POSTHOOK: query: create scheduled query t_analyze cron '0 */1 * * * ? *' as 
analyze table t compute statistics for columns
+POSTHOOK: query: create scheduled query t_analyze cron '0 0 0 1 * ? 2030' as 
analyze table t compute statistics for columns
 POSTHOOK: type: CREATE SCHEDULED QUERY
 PREHOOK: query: alter scheduled query t_analyze execute
 PREHOOK: type: ALTER SCHEDULED QUERY
diff --git a/ql/src/test/results/clientpositive/llap/schq_ingest.q.out 
b/ql/src/test/results/clientpositive/llap/schq_ingest.q.out
index 19d2b11..8e5c123 100644
--- a/ql/src/test/results/clientpositive/llap/schq_ingest.q.out
+++ b/ql/src/test/results/clientpositive/llap/schq_ingest.q.out
@@ -76,13 +76,13 @@ POSTHOOK: Lineage: t.cnt SIMPLE [(s)s.FieldSchema(name:cnt, 
type:int, comment:nu
 POSTHOOK: Lineage: t.id SIMPLE [(s)s.FieldSchema(name:id, type:int, 
comment:null), ]
 POSTHOOK: Lineage: t_offset.offset EXPRESSION [(s)s.FieldSchema(name:id, 
type:int, comment:null), ]
 Warni
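The schq_* test changes above replace an every-minute Quartz cron (`'0 */1 * * * ? *'`) with `'0 0 0 1 * ? 2030'`, whose optional seventh (year) field keeps the schedule from firing on its own during a test run; the tests instead trigger execution explicitly via `alter scheduled query ... execute`. A minimal sketch of inspecting that year field — a hypothetical helper for illustration, not Hive code:

```java
// Hypothetical helper (not part of Hive): reads the optional 7th (year)
// field of a Quartz-style cron expression, whose field order is
// sec min hour day-of-month month day-of-week [year].
public class CronYearField {
    /** Returns the year field of a 7-field cron string, or null if absent. */
    public static String yearField(String cron) {
        String[] fields = cron.trim().split("\\s+");
        return fields.length >= 7 ? fields[6] : null;
    }

    public static void main(String[] args) {
        // The old schedule could fire every minute of any year...
        System.out.println(yearField("0 */1 * * * ? *"));   // *
        // ...the new one cannot fire before 2030, keeping the test deterministic.
        System.out.println(yearField("0 0 0 1 * ? 2030"));  // 2030
    }
}
```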

[hive] branch master updated (bbfb0f8 -> ca9aba6)

2020-05-07 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from bbfb0f8  HIVE-23124: Review of SQLOperation Class (David Mollitor, 
reviewed by Peter Vary)
 add ca9aba6  HIVE-23371: StatsUtils.getConstValue may log misleading 
exception (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)

No new revisions were added by this update.

Summary of changes:
 .../apache/hadoop/hive/ql/stats/StatsUtils.java| 42 +++---
 1 file changed, 22 insertions(+), 20 deletions(-)



[hive] branch master updated: HIVE-23323: Add qsplits profile (Zoltan Haindrich reviewed by Miklos Gergely)

2020-05-05 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 772bfda  HIVE-23323: Add qsplits profile (Zoltan Haindrich reviewed by 
Miklos Gergely)
772bfda is described below

commit 772bfdadab6f3d8c3a54431fe7b6e8b115e99b29
Author: Zoltan Haindrich 
AuthorDate: Tue May 5 15:35:31 2020 +

HIVE-23323: Add qsplits profile (Zoltan Haindrich reviewed by Miklos 
Gergely)

Signed-off-by: Zoltan Haindrich 
---
 itests/bin/generate-cli-splits.sh  | 26 
 itests/qtest-spark/pom.xml | 50 +++-
 .../hive/cli/TestMiniSparkOnYarnCliDriver.java |  7 ++-
 .../apache/hadoop/hive/cli/TestSparkCliDriver.java |  7 ++-
 itests/qtest/pom.xml   | 51 +++-
 .../org/apache/hadoop/hive/cli/TestCliDriver.java  |  5 +-
 .../hive/cli/TestEncryptedHDFSCliDriver.java   |  5 +-
 .../hadoop/hive/cli/TestMiniLlapCliDriver.java |  5 +-
 .../hive/cli/TestMiniLlapLocalCliDriver.java   |  5 +-
 .../hadoop/hive/cli/control/SplitSupport.java  | 69 ++
 .../hadoop/hive/cli/control/TestSplitSupport.java  | 48 +++
 .../control/splitsupport/SplitSupportDummy.java|  5 ++
 .../splitsupport/split0/SplitSupportDummy.java |  5 ++
 .../splitsupport/split125/SplitSupportDummy.java   |  5 ++
 14 files changed, 281 insertions(+), 12 deletions(-)

diff --git a/itests/bin/generate-cli-splits.sh 
b/itests/bin/generate-cli-splits.sh
new file mode 100755
index 000..c16d369
--- /dev/null
+++ b/itests/bin/generate-cli-splits.sh
@@ -0,0 +1,26 @@
+#!/bin/bash
+
+usage() {
+   echo "$0 <inDir> <outDir>"
+   exit 1
+}
+
+[ "$1" == "" ] && usage
+[ "$2" == "" ] && usage
+
+
+inDir="$1"
+outDir="$2"
+
+git grep SplitSupport.process | grep "$1" | cut -d ':' -f1 | while read f;do
+
+   echo "processing: $f"
+   n="`grep N_SPLITS "$f" | cut -d= -f2 | tr -c -d '0-9'`"
+   echo " * nSplits: $n"
+
+   for((i=0;i 
$oDir/`basename $f`
+   }
+done
diff --git a/itests/qtest-spark/pom.xml b/itests/qtest-spark/pom.xml
index 60d032d..4f97e29 100644
--- a/itests/qtest-spark/pom.xml
+++ b/itests/qtest-spark/pom.xml
@@ -417,5 +417,53 @@
   
 
   
-
+  
+
+  qsplits
+  
+
+  
+org.apache.maven.plugins
+maven-antrun-plugin
+
+  
+generate-split-tests
+generate-sources
+
+  
+
+  
+  
+  
+
+  
+
+
+  run
+
+  
+
+  
+  
+org.codehaus.mojo
+build-helper-maven-plugin
+
+  
+add-test-source
+generate-sources
+
+  add-test-source
+
+
+  
+target/generated-test-sources
+  
+
+  
+
+  
+
+  
+
+  
 
diff --git 
a/itests/qtest-spark/src/test/java/org/apache/hadoop/hive/cli/TestMiniSparkOnYarnCliDriver.java
 
b/itests/qtest-spark/src/test/java/org/apache/hadoop/hive/cli/TestMiniSparkOnYarnCliDriver.java
index 889d0f2..c19d4db 100644
--- 
a/itests/qtest-spark/src/test/java/org/apache/hadoop/hive/cli/TestMiniSparkOnYarnCliDriver.java
+++ 
b/itests/qtest-spark/src/test/java/org/apache/hadoop/hive/cli/TestMiniSparkOnYarnCliDriver.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
- 
+
 package org.apache.hadoop.hive.cli;
 
 import java.io.File;
@@ -23,6 +23,7 @@ import java.util.List;
 
 import org.apache.hadoop.hive.cli.control.CliAdapter;
 import org.apache.hadoop.hive.cli.control.CliConfigs;
+import org.apache.hadoop.hive.cli.control.SplitSupport;
 import org.junit.ClassRule;
 import org.junit.Rule;
 import org.junit.Test;
@@ -34,11 +35,13 @@ import org.junit.runners.Parameterized.Parameters;
 @RunWith(Parameterized.class)
 public class TestMiniSparkOnYarnCliDriver {
 
+  private static final int N_SPLITS = 5;
+
   static CliAdapter adapter = new 
CliConfigs.SparkOnYarnCliConfig().getCliAdapter();
 
   @Parameters(name = "{0}")
   public static List getParameters() throws Exception {
-return adapter.getParameters();
+return SplitSupport.process(adapter.getParameters(), 
TestMiniSparkOnYarnCliDriver.class, N_SPLITS);
   }
 
   @Class

[hive] branch master updated: HIVE-23310: Add .asf.yaml

2020-04-30 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new ac2e1cd  HIVE-23310: Add .asf.yaml
ac2e1cd is described below

commit ac2e1cda88f4f1db1555889f83f9b56ca6e428ee
Author: Zoltan Haindrich 
AuthorDate: Thu Apr 30 18:11:19 2020 +

HIVE-23310: Add .asf.yaml

Signed-off-by: Zoltan Haindrich 
---
 .asf.yaml | 40 
 1 file changed, 40 insertions(+)

diff --git a/.asf.yaml b/.asf.yaml
new file mode 100644
index 000..fca520f
--- /dev/null
+++ b/.asf.yaml
@@ -0,0 +1,40 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+github:
+  description: "Apache Hive"
+  homepage: https://hive.apache.org/
+  labels:
+- hive
+- java
+- database
+- sql
+- apache
+- big-data
+- hadoop
+  features:
+wiki: false
+issues: false
+projects: false
+  enabled_merge_buttons:
+squash:  true
+merge:   true
+rebase:  true
+notifications:
+  commits:  commits@hive.apache.org
+  issues:   git...@hive.apache.org
+  pullrequests: git...@hive.apache.org
+  jira_options: link label worklog



[hive] branch master updated: HIVE-23031: Add option to enable transparent rewrite of count(distinct) into sketch functions (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)

2020-04-30 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new dc6e477  HIVE-23031: Add option to enable transparent rewrite of 
count(distinct) into sketch functions (Zoltan Haindrich reviewed by Jesus 
Camacho Rodriguez)
dc6e477 is described below

commit dc6e4771ba2160263490b0fc708da51f1a8c628d
Author: Zoltan Haindrich 
AuthorDate: Thu Apr 30 16:02:27 2020 +

HIVE-23031: Add option to enable transparent rewrite of count(distinct) 
into sketch functions (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)

Signed-off-by: Zoltan Haindrich 
---
 .../java/org/apache/hadoop/hive/conf/HiveConf.java |  13 +
 .../test/resources/testconfiguration.properties|   2 +
 .../hadoop/hive/ql/exec/DataSketchesFunctions.java | 110 +++-
 .../HiveRewriteCountDistinctToDataSketches.java| 175 ++
 .../hadoop/hive/ql/parse/CalcitePlanner.java   |  11 +-
 .../sketches_materialized_view_rollup.q|   7 +-
 .../sketches_materialized_view_rollup2.q   |  54 ++
 .../test/queries/clientpositive/sketches_rewrite.q |  19 +
 .../llap/sketches_materialized_view_rollup2.q.out  | 634 +
 .../clientpositive/llap/sketches_rewrite.q.out | 110 
 10 files changed, 1109 insertions(+), 26 deletions(-)

diff --git a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
index b3faf05..829791e 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
@@ -2465,6 +2465,19 @@ public class HiveConf extends Configuration {
 "If the number of references to a CTE clause exceeds this threshold, 
Hive will materialize it\n" +
 "before executing the main query block. -1 will disable this 
feature."),
 
+HIVE_OPTIMIZE_BI_ENABLED("hive.optimize.bi.enabled", false,
+"Enables query rewrites based on approximate functions(sketches)."),
+
+
HIVE_OPTIMIZE_BI_REWRITE_COUNTDISTINCT_ENABLED("hive.optimize.bi.rewrite.countdistinct.enabled",
+true,
+"Enables to rewrite COUNT(DISTINCT(X)) queries to be rewritten to use 
sketch functions."),
+
+HIVE_OPTIMIZE_BI_REWRITE_COUNT_DISTINCT_SKETCH(
+"hive.optimize.bi.rewrite.countdistinct.sketch", "hll",
+new StringSet("hll"),
+"Defines which sketch type to use when rewriting COUNT(DISTINCT(X)) 
expressions. "
++ "Distinct counting can be done with: hll"),
+
 // Statistics
 HIVE_STATS_ESTIMATE_STATS("hive.stats.estimate", true,
 "Estimate statistics in absence of statistics."),
diff --git a/itests/src/test/resources/testconfiguration.properties 
b/itests/src/test/resources/testconfiguration.properties
index 48ecc35..c966392 100644
--- a/itests/src/test/resources/testconfiguration.properties
+++ b/itests/src/test/resources/testconfiguration.properties
@@ -820,7 +820,9 @@ minillaplocal.query.files=\
   schq_ingest.q,\
   sketches_hll.q,\
   sketches_theta.q,\
+  sketches_rewrite.q,\
   sketches_materialized_view_rollup.q,\
+  sketches_materialized_view_rollup2.q,\
   table_access_keys_stats.q,\
   temp_table_llap_partitioned.q,\
   tez_bmj_schema_evolution.q,\
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java
index eec90c6..8865380 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java
@@ -18,21 +18,28 @@
 
 package org.apache.hadoop.hive.ql.exec;
 
+import java.lang.reflect.Method;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Optional;
+
+import org.apache.calcite.jdbc.JavaTypeFactoryImpl;
+import org.apache.calcite.rel.type.RelDataType;
 import org.apache.calcite.rel.type.RelDataTypeImpl;
 import org.apache.calcite.rel.type.RelProtoDataType;
 import org.apache.calcite.sql.SqlFunction;
+import org.apache.calcite.sql.SqlFunctionCategory;
 import org.apache.calcite.sql.SqlKind;
 import org.apache.calcite.sql.type.InferTypes;
 import org.apache.calcite.sql.type.OperandTypes;
 import org.apache.calcite.sql.type.ReturnTypes;
 import org.apache.calcite.sql.type.SqlTypeName;
+import org.apache.hadoop.hive.ql.optimizer.calcite.HiveTypeSystemImpl;
 import 
org.apache.hadoop.hive.ql.optimizer.calcite.functions.HiveMergeableAggregate;
+import 
org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveSqlFunction;
 import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFResolver2;
 import org.apa
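HIVE-23031 rewrites `COUNT(DISTINCT x)` into HLL sketch functions when `hive.optimize.bi.enabled` is on. The rewrite itself lives in `HiveRewriteCountDistinctToDataSketches`; Hive's production sketches come from Apache DataSketches, not the toy below. Purely to illustrate why such a rewrite is attractive — constant memory instead of a full distinct set — here is a self-contained HyperLogLog sketch (assumed simplifications: no small/large-range corrections, murmur-style bit mixing):

```java
// Toy HyperLogLog, illustrative of the sketch family the rewrite targets;
// NOT the Apache DataSketches implementation that Hive actually uses.
public class ToyHll {
    private final int p;          // precision: m = 2^p registers
    private final byte[] regs;

    public ToyHll(int p) { this.p = p; this.regs = new byte[1 << p]; }

    // murmur3-style 32-bit finalizer so integer keys spread uniformly
    private static int mix(int h) {
        h ^= h >>> 16; h *= 0x85ebca6b;
        h ^= h >>> 13; h *= 0xc2b2ae35;
        h ^= h >>> 16; return h;
    }

    public void add(int value) {
        int h = mix(value);
        int idx = h >>> (32 - p);                        // top p bits pick a register
        int rank = Integer.numberOfLeadingZeros(h << p) + 1;  // rank of remaining bits
        if (rank > regs[idx]) regs[idx] = (byte) rank;
    }

    public double estimate() {
        int m = regs.length;
        double sum = 0;
        for (byte r : regs) sum += Math.pow(2, -r);
        double alpha = 0.7213 / (1 + 1.079 / m);         // standard HLL bias constant
        return alpha * m * m / sum;
    }

    public static void main(String[] args) {
        ToyHll hll = new ToyHll(8);                      // 256 registers, ~1 KB
        for (int i = 0; i < 1000; i++) hll.add(i);
        System.out.println(Math.round(hll.estimate())); // close to 1000
    }
}
```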

[hive] branch master updated: HIVE-23317: partition_wise_fileformat15 and 16 tests are flapping because of result order changes (Zoltan Haindrich reviewed by Miklos Gergely)

2020-04-29 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 213de03  HIVE-23317: partition_wise_fileformat15 and 16 tests are 
flapping because of result order changes (Zoltan Haindrich reviewed by Miklos 
Gergely)
213de03 is described below

commit 213de033ab001e38eeb936122dd1d09c73cb
Author: Zoltan Haindrich 
AuthorDate: Wed Apr 29 09:15:52 2020 +

HIVE-23317: partition_wise_fileformat15 and 16 tests are flapping because 
of result order changes (Zoltan Haindrich reviewed by Miklos Gergely)

Signed-off-by: Zoltan Haindrich 
---
 ql/src/test/queries/clientpositive/partition_wise_fileformat15.q| 2 +-
 ql/src/test/queries/clientpositive/partition_wise_fileformat16.q| 1 +
 .../test/results/clientpositive/llap/partition_wise_fileformat15.q.out  | 2 +-
 .../test/results/clientpositive/llap/partition_wise_fileformat16.q.out  | 2 +-
 4 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/ql/src/test/queries/clientpositive/partition_wise_fileformat15.q 
b/ql/src/test/queries/clientpositive/partition_wise_fileformat15.q
index 033e123..f908cfd 100644
--- a/ql/src/test/queries/clientpositive/partition_wise_fileformat15.q
+++ b/ql/src/test/queries/clientpositive/partition_wise_fileformat15.q
@@ -1,5 +1,5 @@
 --! qt:dataset:src
---SORT_QUERY_RESULTS
+-- SORT_QUERY_RESULTS
 set hive.input.format = org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
 
 -- This tests that the schema can be changed for binary serde data
diff --git a/ql/src/test/queries/clientpositive/partition_wise_fileformat16.q 
b/ql/src/test/queries/clientpositive/partition_wise_fileformat16.q
index 703b214..da3ae09 100644
--- a/ql/src/test/queries/clientpositive/partition_wise_fileformat16.q
+++ b/ql/src/test/queries/clientpositive/partition_wise_fileformat16.q
@@ -1,4 +1,5 @@
 --! qt:dataset:src
+-- SORT_QUERY_RESULTS
 set hive.input.format = org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
 
 -- This tests that the schema can be changed for binary serde data
diff --git 
a/ql/src/test/results/clientpositive/llap/partition_wise_fileformat15.q.out 
b/ql/src/test/results/clientpositive/llap/partition_wise_fileformat15.q.out
index 87098d2..40eb2bf 100644
--- a/ql/src/test/results/clientpositive/llap/partition_wise_fileformat15.q.out
+++ b/ql/src/test/results/clientpositive/llap/partition_wise_fileformat15.q.out
@@ -133,9 +133,9 @@ POSTHOOK: Input: default@partition_test_partitioned_n6
 POSTHOOK: Input: default@partition_test_partitioned_n6@dt=1
 POSTHOOK: Input: default@partition_test_partitioned_n6@dt=2
  A masked pattern was here 
+172val_86  val_86  2
 476val_238 NULL1
 476val_238 NULL1
-172val_86  val_86  2
 PREHOOK: query: select * from partition_test_partitioned_n6 where dt is not 
null
 PREHOOK: type: QUERY
 PREHOOK: Input: default@partition_test_partitioned_n6
diff --git 
a/ql/src/test/results/clientpositive/llap/partition_wise_fileformat16.q.out 
b/ql/src/test/results/clientpositive/llap/partition_wise_fileformat16.q.out
index 233e229..516d1f9 100644
--- a/ql/src/test/results/clientpositive/llap/partition_wise_fileformat16.q.out
+++ b/ql/src/test/results/clientpositive/llap/partition_wise_fileformat16.q.out
@@ -133,9 +133,9 @@ POSTHOOK: Input: default@partition_test_partitioned_n10
 POSTHOOK: Input: default@partition_test_partitioned_n10@dt=1
 POSTHOOK: Input: default@partition_test_partitioned_n10@dt=2
  A masked pattern was here 
+172val_86  val_86  2
 476val_238 NULL1
 476val_238 NULL1
-172val_86  val_86  2
 PREHOOK: query: select * from partition_test_partitioned_n10 where dt is not 
null
 PREHOOK: type: QUERY
 PREHOOK: Input: default@partition_test_partitioned_n10
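The `-- SORT_QUERY_RESULTS` directive above makes the qtest framework sort each query's result rows before diffing against the `.q.out` file, so a run whose execution order differs (as in the two flapping tests here) still matches. A small sketch of the normalization this implies — the exact comparison code in Hive's test harness may differ:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of SORT_QUERY_RESULTS-style normalization: sort rows
// lexicographically so two runs that differ only in row order compare equal.
public class SortResultsDemo {
    static List<String> normalized(List<String> rows) {
        String[] a = rows.toArray(new String[0]);
        Arrays.sort(a);
        return Arrays.asList(a);
    }

    public static void main(String[] args) {
        // The two orderings seen in the flapping .q.out files above:
        List<String> runA = Arrays.asList(
            "476\tval_238\tNULL\t1", "476\tval_238\tNULL\t1", "172\tval_86\tval_86\t2");
        List<String> runB = Arrays.asList(
            "172\tval_86\tval_86\t2", "476\tval_238\tNULL\t1", "476\tval_238\tNULL\t1");
        System.out.println(normalized(runA).equals(normalized(runB)));  // true
    }
}
```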



[hive] branch branch-3.1 updated (60fdf3d -> d86218d)

2020-04-23 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a change to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 60fdf3d  HIVE-22704: Distribution package incorrectly ships the 
upgrade.order files from the metastore module (Zoltan Haindrich reviewed by 
Naveen Gangam)
 add d86218d  HIVE-23088: Using Strings from log4j breaks non-log4j users 
(David Lavati via Panagiotis Garefalakis, Zoltan Haindrich)

No new revisions were added by this update.

Summary of changes:
 .../util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java   | 7 +++
 ql/src/java/org/apache/hadoop/hive/ql/hooks/HookUtils.java| 4 ++--
 service/src/java/org/apache/hive/service/server/HiveServer2.java  | 8 
 3 files changed, 9 insertions(+), 10 deletions(-)



[hive] branch branch-3 updated: HIVE-23088: Using Strings from log4j breaks non-log4j users (David Lavati via Panagiotis Garefalakis, Zoltan Haindrich)

2020-04-23 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch branch-3
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/branch-3 by this push:
 new c721046  HIVE-23088: Using Strings from log4j breaks non-log4j users 
(David Lavati via Panagiotis Garefalakis, Zoltan Haindrich)
c721046 is described below

commit c721046923723a351d81aa2f2097654c168826f3
Author: David Lavati 
AuthorDate: Thu Apr 23 15:26:00 2020 +

HIVE-23088: Using Strings from log4j breaks non-log4j users (David Lavati 
via Panagiotis Garefalakis, Zoltan Haindrich)

Signed-off-by: Zoltan Haindrich 
---
 .../util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java   | 7 +++
 ql/src/java/org/apache/hadoop/hive/ql/hooks/HookUtils.java| 4 ++--
 service/src/java/org/apache/hive/service/server/HiveServer2.java  | 8 
 3 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java 
b/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java
index bbcada9..0e41ee9 100644
--- a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java
+++ b/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java
@@ -113,7 +113,6 @@ import org.apache.hadoop.hive.shims.ShimLoader;
 import org.apache.hive.common.util.StreamPrinter;
 import org.apache.hive.druid.MiniDruidCluster;
 import org.apache.hive.kafka.SingleNodeKafkaCluster;
-import org.apache.logging.log4j.util.Strings;
 import org.apache.tools.ant.BuildException;
 import org.apache.zookeeper.WatchedEvent;
 import org.apache.zookeeper.Watcher;
@@ -413,7 +412,7 @@ public class QTestUtil {
 Path userInstallPath;
 if (isLocalFs) {
   String buildDir = System.getProperty(BUILD_DIR_PROPERTY);
-  Preconditions.checkState(Strings.isNotBlank(buildDir));
+  Preconditions.checkState(StringUtils.isNotBlank(buildDir));
   Path path = new Path(fsUriString, buildDir);
 
   // Create a fake fs root for local fs
@@ -2081,7 +2080,7 @@ public class QTestUtil {
 .append(qfiles[i].getName())
 .append(" results check failed with error code ")
 .append(result.getReturnCode());
-if (Strings.isNotEmpty(result.getCapturedOutput())) {
+if (StringUtils.isNotEmpty(result.getCapturedOutput())) {
   builder.append(" and diff value 
").append(result.getCapturedOutput());
 }
 System.err.println(builder.toString());
@@ -2139,7 +2138,7 @@ public class QTestUtil {
 .append(qfiles[i].getName())
 .append(" results check failed with error code ")
 .append(result.getReturnCode());
-if (Strings.isNotEmpty(result.getCapturedOutput())) {
+if (StringUtils.isNotEmpty(result.getCapturedOutput())) {
   builder.append(" and diff value 
").append(result.getCapturedOutput());
 }
 System.err.println(builder.toString());
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/hooks/HookUtils.java 
b/ql/src/java/org/apache/hadoop/hive/ql/hooks/HookUtils.java
index 0841d67..58e95e1 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/hooks/HookUtils.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/hooks/HookUtils.java
@@ -21,10 +21,10 @@ package org.apache.hadoop.hive.ql.hooks;
 import java.util.ArrayList;
 import java.util.List;
 
+import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.hive.conf.HiveConf;
 import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
 import org.apache.hadoop.hive.ql.exec.Utilities;
-import org.apache.logging.log4j.util.Strings;
 
 public class HookUtils {
 
@@ -47,7 +47,7 @@ public class HookUtils {
   throws InstantiationException, IllegalAccessException, 
ClassNotFoundException {
 String csHooks = conf.getVar(hookConfVar);
 List hooks = new ArrayList<>();
-if (Strings.isBlank(csHooks)) {
+if (StringUtils.isBlank(csHooks)) {
   return hooks;
 }
 String[] hookClasses = csHooks.split(",");
diff --git a/service/src/java/org/apache/hive/service/server/HiveServer2.java 
b/service/src/java/org/apache/hive/service/server/HiveServer2.java
index e72ab59..9396068 100644
--- a/service/src/java/org/apache/hive/service/server/HiveServer2.java
+++ b/service/src/java/org/apache/hive/service/server/HiveServer2.java
@@ -111,7 +111,6 @@ import org.apache.http.client.methods.HttpDelete;
 import org.apache.http.impl.client.CloseableHttpClient;
 import org.apache.http.impl.client.HttpClients;
 import org.apache.http.util.EntityUtils;
-import org.apache.logging.log4j.util.Strings;
 import org.apache.zookeeper.CreateMode;
 import org.apache.zookeeper.KeeperException;
 import org.apache.zookeeper.WatchedEvent;
@@ -323,7 +322,7 @@ public class HiveServer2 extends CompositeService {
   if (hiveConf.getBoolVar(ConfVars.HIVE_SE

[hive] 03/03: HIVE-23088: Using Strings from log4j breaks non-log4j users (David Lavati via Panagiotis Garefalakis, Zoltan Haindrich)

2020-04-23 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 014dafcd7ac4260f7038f969d7c8218682029b86
Author: David Lavati 
AuthorDate: Thu Apr 23 14:37:42 2020 +

HIVE-23088: Using Strings from log4j breaks non-log4j users (David Lavati 
via Panagiotis Garefalakis, Zoltan Haindrich)

Signed-off-by: Zoltan Haindrich 
---
 .../src/main/java/org/apache/hadoop/hive/ql/QTestMiniClusters.java  | 4 ++--
 .../src/main/java/org/apache/hadoop/hive/ql/QTestRunnerUtils.java   | 6 +++---
 .../java/org/apache/hadoop/hive/llap/log/LlapWrappedAppender.java   | 1 -
 ql/src/java/org/apache/hadoop/hive/ql/hooks/HookUtils.java  | 4 ++--
 4 files changed, 7 insertions(+), 8 deletions(-)

diff --git 
a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestMiniClusters.java 
b/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestMiniClusters.java
index 997b35e..46e2f64 100644
--- a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestMiniClusters.java
+++ b/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestMiniClusters.java
@@ -37,6 +37,7 @@ import org.apache.avro.io.BinaryEncoder;
 import org.apache.avro.io.DatumWriter;
 import org.apache.avro.io.EncoderFactory;
 import org.apache.avro.specific.SpecificDatumWriter;
+import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.fs.FileSystem;
@@ -62,7 +63,6 @@ import org.apache.hive.druid.MiniDruidCluster;
 import org.apache.hive.kafka.SingleNodeKafkaCluster;
 import org.apache.hive.kafka.Wikipedia;
 import org.apache.hive.testutils.MiniZooKeeperCluster;
-import org.apache.logging.log4j.util.Strings;
 import org.apache.zookeeper.WatchedEvent;
 import org.apache.zookeeper.Watcher;
 import org.apache.zookeeper.ZooKeeper;
@@ -583,7 +583,7 @@ public class QTestMiniClusters {
 Path userInstallPath;
 if (isLocalFs) {
   String buildDir = QTestSystemProperties.getBuildDir();
-  Preconditions.checkState(Strings.isNotBlank(buildDir));
+  Preconditions.checkState(StringUtils.isNotBlank(buildDir));
   Path path = new Path(fsUriString, buildDir);
 
   // Create a fake fs root for local fs
diff --git 
a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestRunnerUtils.java 
b/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestRunnerUtils.java
index 1026195..5fb138d 100644
--- a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestRunnerUtils.java
+++ b/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestRunnerUtils.java
@@ -20,8 +20,8 @@ package org.apache.hadoop.hive.ql;
 
 import java.io.File;
 
+import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.hive.ql.QTestMiniClusters.MiniClusterType;
-import org.apache.logging.log4j.util.Strings;
 
 public class QTestRunnerUtils {
   public static final String DEFAULT_INIT_SCRIPT = "q_test_init.sql";
@@ -104,7 +104,7 @@ public class QTestRunnerUtils {
 StringBuilder builder = new StringBuilder();
 builder.append("Test ").append(qfiles[i].getName())
 .append(" results check failed with error code 
").append(result.getReturnCode());
-if (Strings.isNotEmpty(result.getCapturedOutput())) {
+if (StringUtils.isNotEmpty(result.getCapturedOutput())) {
   builder.append(" and diff value 
").append(result.getCapturedOutput());
 }
 System.err.println(builder.toString());
@@ -155,7 +155,7 @@ public class QTestRunnerUtils {
 StringBuilder builder = new StringBuilder();
 builder.append("Test ").append(qfiles[i].getName())
 .append(" results check failed with error code 
").append(result.getReturnCode());
-if (Strings.isNotEmpty(result.getCapturedOutput())) {
+if (StringUtils.isNotEmpty(result.getCapturedOutput())) {
   builder.append(" and diff value 
").append(result.getCapturedOutput());
 }
 System.err.println(builder.toString());
diff --git 
a/llap-server/src/java/org/apache/hadoop/hive/llap/log/LlapWrappedAppender.java 
b/llap-server/src/java/org/apache/hadoop/hive/llap/log/LlapWrappedAppender.java
index 5cd6005..f35d244 100644
--- 
a/llap-server/src/java/org/apache/hadoop/hive/llap/log/LlapWrappedAppender.java
+++ 
b/llap-server/src/java/org/apache/hadoop/hive/llap/log/LlapWrappedAppender.java
@@ -23,7 +23,6 @@ import java.nio.file.Path;
 import java.nio.file.Paths;
 import java.util.concurrent.atomic.AtomicReference;
 
-import com.google.common.base.Preconditions;
 import org.apache.logging.log4j.core.Appender;
 import org.apache.logging.log4j.core.LogEvent;
 import org.apache.logging.log4j.core.appender.AbstractAppender;
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/hooks/HookUtils.java 
b/ql/src/java/org/apache/h

[hive] 02/03: HIVE-23164: Server is not properly terminated because of non-daemon threads (Eugene Chung via Zoltan Haindrich)

2020-04-23 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 3cadd2ac61b08b81390907f4b4380396e3a99ad5
Author: Eugene Chung 
AuthorDate: Thu Apr 23 14:35:47 2020 +

HIVE-23164: Server is not properly terminated because of non-daemon threads 
(Eugene Chung via Zoltan Haindrich)

Signed-off-by: Zoltan Haindrich 
---
 .../hadoop/hive/ql/exec/tez/PerPoolTriggerValidatorRunnable.java  | 6 +-
 .../main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java | 1 +
 .../main/java/org/apache/hadoop/hive/metastore/ThreadPool.java| 8 ++--
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/PerPoolTriggerValidatorRunnable.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/PerPoolTriggerValidatorRunnable.java
index 8f29197..14a688e 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/PerPoolTriggerValidatorRunnable.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/PerPoolTriggerValidatorRunnable.java
@@ -20,8 +20,10 @@ import java.util.HashMap;
 import java.util.Map;
 import java.util.concurrent.Executors;
 import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadFactory;
 import java.util.concurrent.TimeUnit;
 
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
 import org.apache.hadoop.hive.ql.wm.SessionTriggerProvider;
 import org.apache.hadoop.hive.ql.wm.TriggerActionHandler;
 import org.slf4j.Logger;
@@ -46,8 +48,10 @@ public class PerPoolTriggerValidatorRunnable implements 
Runnable {
   @Override
   public void run() {
 try {
+  ThreadFactory threadFactory = new ThreadFactoryBuilder().setDaemon(true)
+  .setNameFormat("PoolValidator %d").build();
   ScheduledExecutorService validatorExecutorService = Executors
-.newScheduledThreadPool(sessionTriggerProviders.size());
+  .newScheduledThreadPool(sessionTriggerProviders.size(), 
threadFactory);
   for (Map.Entry entry : 
sessionTriggerProviders.entrySet()) {
 String poolName = entry.getKey();
 if (!poolValidators.containsKey(poolName)) {
diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
index 77d3404..32494ae 100644
--- 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
+++ 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
@@ -1115,6 +1115,7 @@ public class HiveMetaStore extends ThriftHiveMetastore {
 public void shutdown() {
   cleanupRawStore();
   PerfLogger.getPerfLogger(false).cleanupPerfLogMetrics();
+  ThreadPool.shutdown();
 }
 
 @Override
diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ThreadPool.java
 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ThreadPool.java
index d0fcd25..5dca2b3 100644
--- 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ThreadPool.java
+++ 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ThreadPool.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hive.metastore;
 
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
 import org.slf4j.Logger;
@@ -24,6 +25,7 @@ import org.slf4j.LoggerFactory;
 
 import java.util.concurrent.Executors;
 import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadFactory;
 
 /**
  * Utility singleton class to manage all the threads.
@@ -31,7 +33,7 @@ import java.util.concurrent.ScheduledExecutorService;
 public class ThreadPool {
 
   static final private Logger LOG = LoggerFactory.getLogger(ThreadPool.class);
-  private static ThreadPool self = null;
+  private static ThreadPool self;
   private static ScheduledExecutorService pool;
 
   public static synchronized ThreadPool initialize(Configuration conf) {
@@ -43,8 +45,10 @@ public class ThreadPool {
   }
 
   private ThreadPool(Configuration conf) {
+ThreadFactory threadFactory = new ThreadFactoryBuilder().setDaemon(true)
+.setNameFormat("Metastore Scheduled Worker %d").build();
 pool = Executors.newScheduledThreadPool(MetastoreConf.getIntVar(conf,
-MetastoreConf.ConfVars.THREAD_POOL_SIZE));
+MetastoreConf.ConfVars.THREAD_POOL_SIZE), threadFactory);
   }
 
   public static ScheduledExecutorService getPool() {



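The patch above gives the metastore's scheduled pool daemon threads (via Guava's ThreadFactoryBuilder) and adds an explicit ThreadPool.shutdown() on metastore shutdown, so stray worker threads cannot keep the JVM alive. A minimal plain-JDK sketch of the same idea follows; the class and factory names here are illustrative, not Hive's:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class DaemonPoolSketch {

    // Plain-JDK stand-in for Guava's ThreadFactoryBuilder used in the patch.
    // Daemon threads do not keep the JVM alive, so a scheduled pool built this
    // way cannot block process exit even if shutdown() is never reached.
    static ThreadFactory daemonFactory(String nameFormat) {
        AtomicInteger counter = new AtomicInteger();
        return runnable -> {
            Thread t = new Thread(runnable, String.format(nameFormat, counter.getAndIncrement()));
            t.setDaemon(true);
            return t;
        };
    }

    public static void main(String[] args) {
        ScheduledExecutorService pool =
            Executors.newScheduledThreadPool(2, daemonFactory("Metastore Scheduled Worker %d"));
        // Schedule work here; an explicit shutdown remains good practice,
        // the daemon flag is only the safety net.
        pool.shutdown();
    }
}
```

Note the belt-and-braces design: the daemon flag guarantees exit, while the added shutdown() call still releases the pool promptly in the normal path.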
[hive] 01/03: HIVE-23220: PostExecOrcFileDump listing order may depend on the underlying filesystem (Zoltan Haindrich reviewed by Miklos Gergely)

2020-04-23 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 65dc6cab9544badfb9a117d2a4ce9b8f5e0864f5
Author: Zoltan Haindrich 
AuthorDate: Thu Apr 23 14:35:43 2020 +

HIVE-23220: PostExecOrcFileDump listing order may depend on the underlying 
filesystem (Zoltan Haindrich reviewed by Miklos Gergely)

Signed-off-by: Zoltan Haindrich 
---
 .../hadoop/hive/ql/hooks/PostExecOrcFileDump.java  |   3 +
 .../llap/acid_bloom_filter_orc_file_dump.q.out | 156 ++---
 2 files changed, 81 insertions(+), 78 deletions(-)

diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/hooks/PostExecOrcFileDump.java 
b/ql/src/java/org/apache/hadoop/hive/ql/hooks/PostExecOrcFileDump.java
index 87c3db2..ecda606 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/hooks/PostExecOrcFileDump.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/hooks/PostExecOrcFileDump.java
@@ -19,6 +19,7 @@ package org.apache.hadoop.hive.ql.hooks;
 
 import java.io.IOException;
 import java.io.PrintStream;
+import java.util.Collections;
 import java.util.List;
 
 import org.slf4j.Logger;
@@ -101,6 +102,8 @@ public class PostExecOrcFileDump implements 
ExecuteWithHookContext {
  List<FileStatus> fileList = HdfsUtils.listLocatedStatus(fs, dir, hiddenFileFilter);
 
+Collections.sort(fileList);
+
 for (FileStatus fileStatus : fileList) {
   if (fileStatus.isDirectory()) {
 
diff --git 
a/ql/src/test/results/clientpositive/llap/acid_bloom_filter_orc_file_dump.q.out 
b/ql/src/test/results/clientpositive/llap/acid_bloom_filter_orc_file_dump.q.out
index da805b0..28fccd6 100644
--- 
a/ql/src/test/results/clientpositive/llap/acid_bloom_filter_orc_file_dump.q.out
+++ 
b/ql/src/test/results/clientpositive/llap/acid_bloom_filter_orc_file_dump.q.out
@@ -87,31 +87,31 @@ Stripe Statistics:
   Stripe 1:
 Column 0: count: 1 hasNull: false
 Column 1: count: 1 hasNull: false bytesOnDisk: 6 min: 0 max: 0 sum: 0
-Column 2: count: 1 hasNull: false bytesOnDisk: 6 min: 2 max: 2 sum: 2
+Column 2: count: 1 hasNull: false bytesOnDisk: 6 min: 1 max: 1 sum: 1
 Column 3: count: 1 hasNull: false bytesOnDisk: 9 min: 536870912 max: 
536870912 sum: 536870912
 Column 4: count: 1 hasNull: false bytesOnDisk: 6 min: 0 max: 0 sum: 0
-Column 5: count: 1 hasNull: false bytesOnDisk: 6 min: 2 max: 2 sum: 2
+Column 5: count: 1 hasNull: false bytesOnDisk: 6 min: 1 max: 1 sum: 1
 Column 6: count: 1 hasNull: false
-Column 7: count: 1 hasNull: false bytesOnDisk: 13 min: 2345 max: 2345 sum: 
4
-Column 8: count: 1 hasNull: false bytesOnDisk: 13 min: 2345 max: 2345 sum: 
4
-Column 9: count: 1 hasNull: false bytesOnDisk: 7 min: 2345 max: 2345 sum: 
2345
-Column 10: count: 1 hasNull: false bytesOnDisk: 7 min: 2345 max: 2345 sum: 
2345
+Column 7: count: 1 hasNull: false bytesOnDisk: 14 min: 12345 max: 12345 
sum: 5
+Column 8: count: 1 hasNull: false bytesOnDisk: 14 min: 12345 max: 12345 
sum: 5
+Column 9: count: 1 hasNull: false bytesOnDisk: 7 min: 12345 max: 12345 
sum: 12345
+Column 10: count: 1 hasNull: false bytesOnDisk: 7 min: 12345 max: 12345 
sum: 12345
 
 File Statistics:
   Column 0: count: 1 hasNull: false
   Column 1: count: 1 hasNull: false bytesOnDisk: 6 min: 0 max: 0 sum: 0
-  Column 2: count: 1 hasNull: false bytesOnDisk: 6 min: 2 max: 2 sum: 2
+  Column 2: count: 1 hasNull: false bytesOnDisk: 6 min: 1 max: 1 sum: 1
   Column 3: count: 1 hasNull: false bytesOnDisk: 9 min: 536870912 max: 
536870912 sum: 536870912
   Column 4: count: 1 hasNull: false bytesOnDisk: 6 min: 0 max: 0 sum: 0
-  Column 5: count: 1 hasNull: false bytesOnDisk: 6 min: 2 max: 2 sum: 2
+  Column 5: count: 1 hasNull: false bytesOnDisk: 6 min: 1 max: 1 sum: 1
   Column 6: count: 1 hasNull: false
-  Column 7: count: 1 hasNull: false bytesOnDisk: 13 min: 2345 max: 2345 sum: 4
-  Column 8: count: 1 hasNull: false bytesOnDisk: 13 min: 2345 max: 2345 sum: 4
-  Column 9: count: 1 hasNull: false bytesOnDisk: 7 min: 2345 max: 2345 sum: 
2345
-  Column 10: count: 1 hasNull: false bytesOnDisk: 7 min: 2345 max: 2345 sum: 
2345
+  Column 7: count: 1 hasNull: false bytesOnDisk: 14 min: 12345 max: 12345 sum: 
5
+  Column 8: count: 1 hasNull: false bytesOnDisk: 14 min: 12345 max: 12345 sum: 
5
+  Column 9: count: 1 hasNull: false bytesOnDisk: 7 min: 12345 max: 12345 sum: 
12345
+  Column 10: count: 1 hasNull: false bytesOnDisk: 7 min: 12345 max: 12345 sum: 
12345
 
 Stripes:
-  Stripe: offset: 3 data: 73 rows: 1 tail: 103 index: 595
+  Stripe: offset: 3 data: 75 rows: 1 tail: 100 index: 597
 Stream: column 0 section ROW_INDEX start: 3 length 11
 Stream: column 1 section ROW_INDEX start: 14 length 24
 Stream: column 2 section ROW_INDEX start: 38 length 24
@@ -119,24 +119,24 @@ Stripes:
 Stream: column 4 section ROW_INDEX start: 91 length 24
 Stream: column 5 section ROW_INDEX start: 115 length 24

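The fix above is a one-liner: sort the listing before iterating, because filesystem listing order is not guaranteed to be stable across implementations. A stand-alone sketch of the pattern, using String paths in place of Hadoop FileStatus objects (which are Comparable), with illustrative names:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SortedListingSketch {

    // listStatus-style results carry no cross-filesystem ordering guarantee;
    // sorting before iterating makes dump/log output deterministic, which is
    // what lets the q.out golden files above stay stable.
    static List<String> deterministicListing(List<String> rawListing) {
        List<String> sorted = new ArrayList<>(rawListing);
        Collections.sort(sorted);
        return sorted;
    }

    public static void main(String[] args) {
        System.out.println(deterministicListing(List.of("bucket_00001", "delta_1_1", "base_1")));
    }
}
```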
[hive] branch master updated (9299512 -> 014dafc)

2020-04-23 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 9299512  HIVE-23103: Oracle statement batching (Peter Vary reviewed by 
Marton Bod, Denys Kuzmenko)
 new 65dc6ca  HIVE-23220: PostExecOrcFileDump listing order may depend on 
the underlying filesystem (Zoltan Haindrich reviewed by Miklos Gergely)
 new 3cadd2a  HIVE-23164: Server is not properly terminated because of 
non-daemon threads (Eugene Chung via Zoltan Haindrich)
 new 014dafc  HIVE-23088: Using Strings from log4j breaks non-log4j users 
(David Lavati via Panagiotis Garefalakis, Zoltan Haindrich)

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../apache/hadoop/hive/ql/QTestMiniClusters.java   |   4 +-
 .../apache/hadoop/hive/ql/QTestRunnerUtils.java|   6 +-
 .../hadoop/hive/llap/log/LlapWrappedAppender.java  |   1 -
 .../exec/tez/PerPoolTriggerValidatorRunnable.java  |   6 +-
 .../org/apache/hadoop/hive/ql/hooks/HookUtils.java |   4 +-
 .../hadoop/hive/ql/hooks/PostExecOrcFileDump.java  |   3 +
 .../llap/acid_bloom_filter_orc_file_dump.q.out | 156 ++---
 .../hadoop/hive/metastore/HiveMetaStore.java   |   1 +
 .../apache/hadoop/hive/metastore/ThreadPool.java   |   8 +-
 9 files changed, 100 insertions(+), 89 deletions(-)



[hive] branch master updated (8b9fadb -> c891fc5)

2020-04-22 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 8b9fadb  HIVE-23169 : Probe runtime support for LLAP (Panagiotis 
Garefalakis via Ashutosh Chauhan)
 new 998be10  HIVE-23246: Reduce MiniDruidCluster memory requirements (Zoltan Haindrich reviewed by Peter Vary)
 new 1186843  HIVE-23249: Prevent infinite loop in 
TestJdbcWithMiniLlapArrow (Zoltan Haindrich reviewed by Peter Vary)
 new 4ef051c  HIVE-23250: Scheduled query related qtests may not finish 
before it's expected (Zoltan Haindrich reviewed by Peter Vary)
 new c891fc5  HIVE-23251: Provide a way to have only a selection of 
datasets loaded (Zoltan Haindrich reviewed by László Bodor)

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../hive/jdbc/TestJdbcWithMiniLlapArrow.java   |   3 +-
 .../org/apache/hive/druid/MiniDruidCluster.java|  20 +-
 .../hive/ql/dataset/QTestDatasetHandler.java   |  72 --
 .../hive/ql/schq/TestScheduledQueryStatements.java |   5 +-
 .../test/queries/clientpositive/authorization_9.q  |   1 +
 ql/src/test/queries/clientpositive/schq_analyze.q  |   2 +-
 ql/src/test/queries/clientpositive/schq_ingest.q   |   2 +-
 .../queries/clientpositive/schq_materialized.q |   2 +-
 ql/src/test/queries/clientpositive/sysdb.q |   2 +-
 .../test/results/clientpositive/llap/sysdb.q.out   | 254 +++--
 10 files changed, 107 insertions(+), 256 deletions(-)



[hive] 01/04: HIVE-23246: Reduce MiniDruidCluster memory requirements (Zoltan Haindrich reviewed by Peter Vary)

2020-04-22 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 998be10e21aa1de58b9d0f48940bc216ba66dbde
Author: Zoltan Haindrich 
AuthorDate: Wed Apr 22 08:06:48 2020 +

HIVE-23246: Reduce MiniDruidCluster memory requirements (Zoltan Haindrich reviewed by Peter Vary)

Signed-off-by: Zoltan Haindrich 
---
 .../java/org/apache/hive/druid/MiniDruidCluster.java | 20 +---
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git 
a/itests/qtest-druid/src/main/java/org/apache/hive/druid/MiniDruidCluster.java 
b/itests/qtest-druid/src/main/java/org/apache/hive/druid/MiniDruidCluster.java
index 8081595..0fb63ce 100644
--- 
a/itests/qtest-druid/src/main/java/org/apache/hive/druid/MiniDruidCluster.java
+++ 
b/itests/qtest-druid/src/main/java/org/apache/hive/druid/MiniDruidCluster.java
@@ -60,7 +60,7 @@ public class MiniDruidCluster extends AbstractService {
   "druid.storage.type",
   "hdfs",
   "druid.processing.buffer.sizeBytes",
-  "213870912",
+  "10485760",
   "druid.processing.numThreads",
   "2",
   "druid.worker.capacity",
@@ -72,16 +72,14 @@ public class MiniDruidCluster extends AbstractService {
 
   private static final Map<String, String> COMMON_COORDINATOR_INDEXER =
-  ImmutableMap.of("druid.indexer.logs.type",
-  "file",
-  "druid.coordinator.asOverlord.enabled",
-  "true",
-  "druid.coordinator.asOverlord.overlordService",
-  "druid/overlord",
-  "druid.coordinator.period",
-  "PT2S",
-  "druid.manager.segments.pollDuration",
-  "PT2S");
+  ImmutableMap.<String, String>builder()
+  .put("druid.indexer.logs.type", "file")
+  .put("druid.coordinator.asOverlord.enabled", "true")
+  .put("druid.coordinator.asOverlord.overlordService", 
"druid/overlord")
+  .put("druid.coordinator.period", "PT2S")
+  .put("druid.manager.segments.pollDuration", "PT2S")
+  .put("druid.indexer.runner.javaOpts", "-Xmx512m")
+  .build();
   private static final int MIN_PORT_NUMBER = 6;
   private static final int MAX_PORT_NUMBER = 65535;
 


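The refactor above moves from ImmutableMap.of(...) to ImmutableMap.builder(), since Guava's of(...) overloads cap out after a handful of key/value pairs and the config map needs a sixth entry (the new javaOpts). A stdlib sketch of the same config map, using Map.ofEntries (which, unlike Map.of, accepts any number of entries); the entries mirror the patch:

```java
import java.util.Map;
import static java.util.Map.entry;

public class ConfigMapSketch {

    // Builder-style construction keeps one pair per line and has no arity cap,
    // which is why the patch switches to it when adding the -Xmx512m override.
    static Map<String, String> coordinatorConfig() {
        return Map.ofEntries(
            entry("druid.indexer.logs.type", "file"),
            entry("druid.coordinator.asOverlord.enabled", "true"),
            entry("druid.coordinator.asOverlord.overlordService", "druid/overlord"),
            entry("druid.coordinator.period", "PT2S"),
            entry("druid.manager.segments.pollDuration", "PT2S"),
            entry("druid.indexer.runner.javaOpts", "-Xmx512m"));
    }
}
```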

[hive] 02/04: HIVE-23249: Prevent infinite loop in TestJdbcWithMiniLlapArrow (Zoltan Haindrich reviewed by Peter Vary)

2020-04-22 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 1186843bbbaa079c22103198f1ac36edfd1fd9a1
Author: Zoltan Haindrich 
AuthorDate: Wed Apr 22 08:06:53 2020 +

HIVE-23249: Prevent infinite loop in TestJdbcWithMiniLlapArrow (Zoltan 
Haindrich reviewed by Peter Vary)

Signed-off-by: Zoltan Haindrich 
---
 .../src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniLlapArrow.java  | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git 
a/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniLlapArrow.java
 
b/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniLlapArrow.java
index 1aab03d..bc2480a 100644
--- 
a/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniLlapArrow.java
+++ 
b/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniLlapArrow.java
@@ -358,7 +358,7 @@ public class TestJdbcWithMiniLlapArrow extends 
BaseJdbcWithMiniLlap {
 
 // wait for other thread to create the stmt handle
 int count = 0;
-while (count < 10) {
+while (++count <= 10) {
   try {
 tKillHolder.throwable = null;
 Thread.sleep(2000);
@@ -380,7 +380,6 @@ public class TestJdbcWithMiniLlapArrow extends 
BaseJdbcWithMiniLlap {
 stmt2.close();
 break;
   } catch (SQLException e) {
-count++;
 LOG.warn("Exception when kill query", e);
 tKillHolder.throwable = e;
   }


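The bug fixed above: the loop counter was incremented only inside the catch block, so any iteration that neither broke out nor threw could spin forever. Moving the increment into the loop condition bounds the loop on every path. A self-contained sketch of that retry shape (interface and method names are illustrative):

```java
public class BoundedRetrySketch {

    interface Action {
        void run() throws Exception;
    }

    // Incrementing in the condition (++count <= maxAttempts) rather than only
    // in the catch block guarantees termination after maxAttempts, whichever
    // path the body takes -- the shape the HIVE-23249 fix arrives at.
    static boolean retry(Action action, int maxAttempts) {
        int count = 0;
        while (++count <= maxAttempts) {
            try {
                action.run();
                return true;
            } catch (Exception e) {
                // log and fall through to the next attempt
            }
        }
        return false;
    }

    public static void main(String[] args) {
        boolean ok = retry(() -> { throw new RuntimeException("always fails"); }, 3);
        System.out.println(ok); // false after exactly 3 attempts
    }
}
```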

[hive] 03/04: HIVE-23250: Scheduled query related qtests may not finish before it's expected (Zoltan Haindrich reviewed by Peter Vary)

2020-04-22 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 4ef051c8f6a654cc78352a1250b2d80410fa2b37
Author: Zoltan Haindrich 
AuthorDate: Wed Apr 22 08:11:15 2020 +

HIVE-23250: Scheduled query related qtests may not finish before it's 
expected (Zoltan Haindrich reviewed by Peter Vary)

Signed-off-by: Zoltan Haindrich 
---
 .../org/apache/hadoop/hive/ql/schq/TestScheduledQueryStatements.java | 5 -
 ql/src/test/queries/clientpositive/schq_analyze.q| 2 +-
 ql/src/test/queries/clientpositive/schq_ingest.q | 2 +-
 ql/src/test/queries/clientpositive/schq_materialized.q   | 2 +-
 4 files changed, 7 insertions(+), 4 deletions(-)

diff --git 
a/ql/src/test/org/apache/hadoop/hive/ql/schq/TestScheduledQueryStatements.java 
b/ql/src/test/org/apache/hadoop/hive/ql/schq/TestScheduledQueryStatements.java
index f2fc421..4f7990f 100644
--- 
a/ql/src/test/org/apache/hadoop/hive/ql/schq/TestScheduledQueryStatements.java
+++ 
b/ql/src/test/org/apache/hadoop/hive/ql/schq/TestScheduledQueryStatements.java
@@ -205,6 +205,9 @@ public class TestScheduledQueryStatements {
 
   @Test
   public void testExecuteImmediate() throws ParseException, Exception {
+// use a different namespace because the schq executor might be able to
+// catch the new schq execution immediately
+    env_setup.getTestCtx().hiveConf.setVar(ConfVars.HIVE_SCHEDULED_QUERIES_NAMESPACE, "immed");
 IDriver driver = createDriver();
 
 driver.run("set role admin");
@@ -213,7 +216,7 @@ public class TestScheduledQueryStatements {
 driver.run("alter scheduled query immed execute");
 
 try (CloseableObjectStore os = new 
CloseableObjectStore(env_setup.getTestCtx().hiveConf)) {
-  Optional<MScheduledQuery> sq = os.getMScheduledQuery(new ScheduledQueryKey("immed", "hive"));
+  Optional<MScheduledQuery> sq = os.getMScheduledQuery(new ScheduledQueryKey("immed", "immed"));
   assertTrue(sq.isPresent());
   assertThat(sq.get().getNextExecution(), Matchers.lessThanOrEqualTo((int) 
(System.currentTimeMillis() / 1000)));
   int cnt1 = ScheduledQueryExecutionService.getForcedScheduleCheckCount();
diff --git a/ql/src/test/queries/clientpositive/schq_analyze.q 
b/ql/src/test/queries/clientpositive/schq_analyze.q
index 3c03360..246a215 100644
--- a/ql/src/test/queries/clientpositive/schq_analyze.q
+++ b/ql/src/test/queries/clientpositive/schq_analyze.q
@@ -21,7 +21,7 @@ create scheduled query t_analyze cron '0 */1 * * * ? *' as 
analyze table t compu
 
 alter scheduled query t_analyze execute;
 
-!sleep 10; 
+!sleep 30;
  
 select * from information_schema.scheduled_executions s where 
schedule_name='ex_analyze' order by scheduled_execution_id desc limit 3;
  
diff --git a/ql/src/test/queries/clientpositive/schq_ingest.q 
b/ql/src/test/queries/clientpositive/schq_ingest.q
index b7bc90c..8ffc722 100644
--- a/ql/src/test/queries/clientpositive/schq_ingest.q
+++ b/ql/src/test/queries/clientpositive/schq_ingest.q
@@ -39,7 +39,7 @@ insert into s values(2,2),(3,3);
 -- pretend that a timeout have happened
 alter scheduled query ingest execute;
 
-!sleep 10;
+!sleep 30;
 select state,error_message from sys.scheduled_executions;
 
 select * from t order by id;
diff --git a/ql/src/test/queries/clientpositive/schq_materialized.q 
b/ql/src/test/queries/clientpositive/schq_materialized.q
index 7242f3e..46b725e 100644
--- a/ql/src/test/queries/clientpositive/schq_materialized.q
+++ b/ql/src/test/queries/clientpositive/schq_materialized.q
@@ -68,7 +68,7 @@ select `(NEXT_EXECUTION|SCHEDULED_QUERY_ID)?+.+` from 
sys.scheduled_queries;
 
 alter scheduled query d execute;
 
-!sleep 10;
+!sleep 30;
 
 -- the scheduled execution will fail - because of missing TXN; but overall it 
works..
 select state,error_message from sys.scheduled_executions;



[hive] 04/04: HIVE-23251: Provide a way to have only a selection of datasets loaded (Zoltan Haindrich reviewed by László Bodor)

2020-04-22 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit c891fc594ddbd73994381c629cb5ca67555f7332
Author: Zoltan Haindrich 
AuthorDate: Wed Apr 22 08:21:35 2020 +

HIVE-23251: Provide a way to have only a selection of datasets loaded 
(Zoltan Haindrich reviewed by László Bodor)

Signed-off-by: Zoltan Haindrich 
---
 .../hive/ql/dataset/QTestDatasetHandler.java   |  72 --
 .../test/queries/clientpositive/authorization_9.q  |   1 +
 ql/src/test/queries/clientpositive/sysdb.q |   2 +-
 .../test/results/clientpositive/llap/sysdb.q.out   | 254 +++--
 4 files changed, 90 insertions(+), 239 deletions(-)

diff --git 
a/itests/util/src/main/java/org/apache/hadoop/hive/ql/dataset/QTestDatasetHandler.java
 
b/itests/util/src/main/java/org/apache/hadoop/hive/ql/dataset/QTestDatasetHandler.java
index 85ece49..24748fc 100644
--- 
a/itests/util/src/main/java/org/apache/hadoop/hive/ql/dataset/QTestDatasetHandler.java
+++ 
b/itests/util/src/main/java/org/apache/hadoop/hive/ql/dataset/QTestDatasetHandler.java
@@ -42,6 +42,7 @@ import org.slf4j.LoggerFactory;
  *
  * 
  * --! qt:dataset:sample
+ * --! qt:dataset:sample:ONLY
  * 
  *
  * will make sure that the dataset named sample is loaded prior to executing 
the test.
@@ -52,6 +53,7 @@ public class QTestDatasetHandler implements 
QTestOptionHandler {
   private File datasetDir;
  private static Set<String> srcTables;
  private static Set<String> missingTables = new HashSet<>();
+  Set<String> tablesToUnload = new HashSet<>();
 
   public QTestDatasetHandler(HiveConf conf) {
 // Use path relative to dataDir directory if it is not specified
@@ -90,6 +92,17 @@ public class QTestDatasetHandler implements 
QTestOptionHandler {
 return true;
   }
 
+  public boolean unloadDataset(String table, CliDriver cliDriver) throws 
Exception {
+try {
+      CommandProcessorResponse result = cliDriver.processLine("drop table " + table);
+      LOG.info("Result from cliDriver.processLine in initFromDatasets=" + result);
+} catch (CommandProcessorException e) {
+  Assert.fail("Failed during initFromDatasets processLine with code=" + e);
+}
+
+return true;
+  }
+
  public static Set<String> getSrcTables() {
 if (srcTables == null) {
   initSrcTables();
@@ -102,6 +115,11 @@ public class QTestDatasetHandler implements 
QTestOptionHandler {
 storeSrcTables();
   }
 
+  private void removeSrcTable(String table) {
+srcTables.remove(table);
+storeSrcTables();
+  }
+
  public static Set<String> initSrcTables() {
 if (srcTables == null) {
   initSrcTablesFromSystemProperty();
@@ -133,33 +151,53 @@ public class QTestDatasetHandler implements 
QTestOptionHandler {
 
   @Override
   public void processArguments(String arguments) {
-String[] tables = arguments.split(",");
+String[] args = arguments.split(":");
+    Set<String> tableNames = getTableNames(args[0]);
 synchronized (QTestUtil.class) {
-  for (String string : tables) {
-string = string.trim();
-if (string.length() == 0) {
-  continue;
-}
-if (srcTables == null || !srcTables.contains(string)) {
-  missingTables.add(string);
+  if (args.length > 1) {
+if (args.length > 2 || !args[1].equalsIgnoreCase("ONLY")) {
+  throw new RuntimeException("unknown option: " + args[1]);
 }
+tablesToUnload.addAll(getSrcTables());
+tablesToUnload.removeAll(tableNames);
   }
+  tableNames.removeAll(getSrcTables());
+  missingTables.addAll(tableNames);
 }
   }
 
+  private Set<String> getTableNames(String arguments) {
+    Set<String> ret = new HashSet<String>();
+String[] tables = arguments.split(",");
+for (String string : tables) {
+  string = string.trim();
+  if (string.length() == 0) {
+continue;
+  }
+  ret.add(string);
+}
+return ret;
+  }
+
   @Override
   public void beforeTest(QTestUtil qt) throws Exception {
-if (!missingTables.isEmpty()) {
-  synchronized (QTestUtil.class) {
-qt.newSession(true);
-for (String table : missingTables) {
-  if (initDataset(table, qt.getCliDriver())) {
-addSrcTable(table);
-  }
+if (missingTables.isEmpty() && tablesToUnload.isEmpty()) {
+  return;
+}
+synchronized (QTestUtil.class) {
+  qt.newSession(true);
+  for (String table : missingTables) {
+if (initDataset(table, qt.getCliDriver())) {
+  addSrcTable(table);
 }
-missingTables.clear();
-qt.newSession(true);
   }
+  for (String table : tablesToUnload) {
+removeSrcTable(table);
+unloadDataset(table, qt.getCliDriver());
+  }
+  missingTables.clear();
+  tablesToUnload.clear();
      qt.newSession(true);

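The option parsing added above accepts `qt:dataset:<names>[:ONLY]`: with the ONLY suffix, every already-loaded dataset table that is not in the requested set gets scheduled for unload. A stand-alone sketch of just that selection logic, with illustrative names (not Hive's QTestDatasetHandler API):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.stream.Collectors;

public class DatasetOptionSketch {

    // Parses "name1,name2[:ONLY]". Without ONLY, nothing is unloaded;
    // with ONLY, the unload set is (loaded - requested), mirroring the
    // tablesToUnload bookkeeping in the patch above.
    static Set<String> tablesToUnload(String arguments, Set<String> loaded) {
        String[] args = arguments.split(":");
        Set<String> requested = Arrays.stream(args[0].split(","))
            .map(String::trim)
            .filter(s -> !s.isEmpty())
            .collect(Collectors.toCollection(LinkedHashSet::new));
        Set<String> unload = new LinkedHashSet<>();
        if (args.length > 1) {
            if (args.length > 2 || !args[1].equalsIgnoreCase("ONLY")) {
                throw new RuntimeException("unknown option: " + args[1]);
            }
            unload.addAll(loaded);
            unload.removeAll(requested);
        }
        return unload;
    }
}
```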
[hive] 02/02: HIVE-23247: Increase timeout for some tez tests (Zoltan Haindrich reviewed by Peter Vary)

2020-04-21 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit cd1ab41b3e17df761a7bda665ac64fba594647a0
Author: Zoltan Haindrich 
AuthorDate: Tue Apr 21 09:59:14 2020 +

HIVE-23247: Increase timeout for some tez tests (Zoltan Haindrich reviewed 
by Peter Vary)

Signed-off-by: Zoltan Haindrich 
---
 .../hive/ql/exec/tez/TestCustomPartitionVertex.java|  2 +-
 .../hive/ql/exec/tez/TestDynamicPartitionPruner.java   | 18 +-
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git 
a/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestCustomPartitionVertex.java 
b/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestCustomPartitionVertex.java
index 183073a..2be7fbc 100644
--- 
a/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestCustomPartitionVertex.java
+++ 
b/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestCustomPartitionVertex.java
@@ -33,7 +33,7 @@ import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
 public class TestCustomPartitionVertex {
-@Test(timeout = 5000)
+@Test(timeout = 20000)
 public void testGetBytePayload() throws IOException {
 int numBuckets = 10;
 VertexManagerPluginContext context = 
mock(VertexManagerPluginContext.class);
diff --git 
a/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestDynamicPartitionPruner.java
 
b/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestDynamicPartitionPruner.java
index 080ee11..d38691e 100644
--- 
a/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestDynamicPartitionPruner.java
+++ 
b/ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestDynamicPartitionPruner.java
@@ -40,7 +40,7 @@ import org.junit.Test;
 
 public class TestDynamicPartitionPruner {
 
-  @Test(timeout = 5000)
+  @Test(timeout = 20000)
   public void testNoPruning() throws InterruptedException, IOException, 
HiveException,
   SerDeException {
 InputInitializerContext mockInitContext = 
mock(InputInitializerContext.class);
@@ -61,7 +61,7 @@ public class TestDynamicPartitionPruner {
 }
   }
 
-  @Test(timeout = 5000)
+  @Test(timeout = 20000)
   public void testSingleSourceOrdering1() throws InterruptedException, 
IOException, HiveException,
   SerDeException {
 InputInitializerContext mockInitContext = 
mock(InputInitializerContext.class);
@@ -93,7 +93,7 @@ public class TestDynamicPartitionPruner {
 }
   }
 
-  @Test(timeout = 5000)
+  @Test(timeout = 20000)
   public void testSingleSourceOrdering2() throws InterruptedException, 
IOException, HiveException,
   SerDeException {
 InputInitializerContext mockInitContext = 
mock(InputInitializerContext.class);
@@ -125,7 +125,7 @@ public class TestDynamicPartitionPruner {
 }
   }
 
-  @Test(timeout = 5000)
+  @Test(timeout = 20000)
   public void testSingleSourceMultipleFiltersOrdering1() throws 
InterruptedException, SerDeException {
 InputInitializerContext mockInitContext = 
mock(InputInitializerContext.class);
 doReturn(2).when(mockInitContext).getVertexNumTasks("v1");
@@ -158,7 +158,7 @@ public class TestDynamicPartitionPruner {
 }
   }
 
-  @Test(timeout = 5000)
+  @Test(timeout = 20000)
   public void testSingleSourceMultipleFiltersOrdering2() throws 
InterruptedException, SerDeException {
 InputInitializerContext mockInitContext = 
mock(InputInitializerContext.class);
 doReturn(2).when(mockInitContext).getVertexNumTasks("v1");
@@ -191,7 +191,7 @@ public class TestDynamicPartitionPruner {
 }
   }
 
-  @Test(timeout = 5000)
+  @Test(timeout = 20000)
   public void testMultipleSourcesOrdering1() throws InterruptedException, 
SerDeException {
 InputInitializerContext mockInitContext = 
mock(InputInitializerContext.class);
 doReturn(2).when(mockInitContext).getVertexNumTasks("v1");
@@ -235,7 +235,7 @@ public class TestDynamicPartitionPruner {
 }
   }
 
-  @Test(timeout = 5000)
+  @Test(timeout = 20000)
   public void testMultipleSourcesOrdering2() throws InterruptedException, 
SerDeException {
 InputInitializerContext mockInitContext = 
mock(InputInitializerContext.class);
 doReturn(2).when(mockInitContext).getVertexNumTasks("v1");
@@ -279,7 +279,7 @@ public class TestDynamicPartitionPruner {
 }
   }
 
-  @Test(timeout = 5000)
+  @Test(timeout = 20000)
   public void testMultipleSourcesOrdering3() throws InterruptedException, 
SerDeException {
 InputInitializerContext mockInitContext = 
mock(InputInitializerContext.class);
 doReturn(2).when(mockInitContext).getVertexNumTasks("v1");
@@ -322,7 +322,7 @@ public class TestDynamicPartitionPruner {
 }
   }
 
-  @Test(timeout = 5000, expected = IllegalStateException.class)
+  @Test(timeout = 20000, expected = IllegalStateException.class)
   public void testExtraEvents() throws InterruptedException, IOException, 
HiveException,
      SerDeException {

[hive] branch master updated (a41e99a -> cd1ab41)

2020-04-21 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from a41e99a  HIVE-23210: Fix shortestjobcomparator when jobs submitted 
have 1 task their vertices (Panagiotis Garefalakis via Rajesh Balamohan)
 new 82139aa  HIVE-23248: avro-mapred should not pull in org.mortbay.jetty 
(Zoltan Haindrich reviewed by Peter Vary)
 new cd1ab41  HIVE-23247: Increase timeout for some tez tests (Zoltan 
Haindrich reviewed by Peter Vary)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../llap/daemon/services/impl/TestLlapWebServices.java | 16 
 pom.xml| 10 ++
 .../hive/ql/exec/tez/TestCustomPartitionVertex.java|  2 +-
 .../hive/ql/exec/tez/TestDynamicPartitionPruner.java   | 18 +-
 4 files changed, 36 insertions(+), 10 deletions(-)



[hive] 01/02: HIVE-23248: avro-mapred should not pull in org.mortbay.jetty (Zoltan Haindrich reviewed by Peter Vary)

2020-04-21 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 82139aaad08dd6627e8e7d9ea3d34ee9a5b7f8e7
Author: Zoltan Haindrich 
AuthorDate: Tue Apr 21 09:59:11 2020 +

HIVE-23248: avro-mapred should not pull in org.mortbay.jetty (Zoltan 
Haindrich reviewed by Peter Vary)

Signed-off-by: Zoltan Haindrich 
---
 .../llap/daemon/services/impl/TestLlapWebServices.java   | 16 
 pom.xml  | 10 ++
 2 files changed, 26 insertions(+)

diff --git 
a/llap-server/src/test/org/apache/hadoop/hive/llap/daemon/services/impl/TestLlapWebServices.java
 
b/llap-server/src/test/org/apache/hadoop/hive/llap/daemon/services/impl/TestLlapWebServices.java
index 5df6ea8..f000dad 100644
--- 
a/llap-server/src/test/org/apache/hadoop/hive/llap/daemon/services/impl/TestLlapWebServices.java
+++ 
b/llap-server/src/test/org/apache/hadoop/hive/llap/daemon/services/impl/TestLlapWebServices.java
@@ -26,6 +26,9 @@ import java.io.IOException;
 import java.io.StringWriter;
 import java.net.HttpURLConnection;
 import java.net.URL;
+import java.util.ArrayList;
+import java.util.Enumeration;
+import java.util.List;
 
 import com.google.common.collect.ImmutableSet;
 
@@ -46,6 +49,19 @@ public class TestLlapWebServices {
 llapWS.init(new HiveConf());
 llapWS.start();
 Thread.sleep(5000);
+ensureUniqueInClasspath("javax/servlet/http/HttpServletRequest.class");
+ensureUniqueInClasspath("javax/servlet/http/HttpServlet.class");
+  }
+
+  private static void ensureUniqueInClasspath(String name) throws IOException {
+    Enumeration<URL> rr = TestLlapWebServices.class.getClassLoader().getResources(name);
+    List<URL> found = new ArrayList<>();
+while (rr.hasMoreElements()) {
+  found.add(rr.nextElement());
+}
+if (found.size() != 1) {
+      throw new RuntimeException(name + " unexpected number of occurrences on the classpath:" + found.toString());
+}
   }
 
   @Test
diff --git a/pom.xml b/pom.xml
index 2322957..b29c06c 100644
--- a/pom.xml
+++ b/pom.xml
@@ -485,6 +485,16 @@
 <artifactId>avro-mapred</artifactId>
 <classifier>hadoop2</classifier>
 <version>${avro.version}</version>
+<exclusions>
+  <exclusion>
+    <groupId>org.mortbay.jetty</groupId>
+    <artifactId>jetty-util</artifactId>
+  </exclusion>
+  <exclusion>
+    <groupId>org.mortbay.jetty</groupId>
+    <artifactId>servlet-api</artifactId>
+  </exclusion>
+</exclusions>
 </dependency>
 <dependency>
   <groupId>org.apache.derby</groupId>



[hive] 02/02: HIVE-20728: Enable flaky test back: stat_estimate_related_col.q (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)

2020-04-04 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 18d0b5a46c23056d3fe60032e00de4534a5be533
Author: Zoltan Haindrich 
AuthorDate: Sat Apr 4 18:52:12 2020 +

HIVE-20728: Enable flaky test back: stat_estimate_related_col.q (Zoltan 
Haindrich reviewed by Jesus Camacho Rodriguez)

Signed-off-by: Zoltan Haindrich 
---
 .../apache/hadoop/hive/cli/control/CliConfigs.java |  1 -
 .../clientpositive/stat_estimate_related_col.q |  6 ++
 .../clientpositive/stat_estimate_related_col.q.out | 76 ++
 3 files changed, 55 insertions(+), 28 deletions(-)

diff --git 
a/itests/util/src/main/java/org/apache/hadoop/hive/cli/control/CliConfigs.java 
b/itests/util/src/main/java/org/apache/hadoop/hive/cli/control/CliConfigs.java
index cc74804..f12b786 100644
--- 
a/itests/util/src/main/java/org/apache/hadoop/hive/cli/control/CliConfigs.java
+++ 
b/itests/util/src/main/java/org/apache/hadoop/hive/cli/control/CliConfigs.java
@@ -64,7 +64,6 @@ public class CliConfigs {
 excludeQuery("udaf_context_ngrams.q"); // disabled in HIVE-20741
 excludeQuery("udaf_corr.q"); // disabled in HIVE-20741
 excludeQuery("udaf_histogram_numeric.q"); // disabled in HIVE-20715
-excludeQuery("stat_estimate_related_col.q"); // disabled in HIVE-20727
 excludeQuery("vector_groupby_reduce.q"); // Disabled in HIVE-21396
 
 setResultsDir("ql/src/test/results/clientpositive");
diff --git a/ql/src/test/queries/clientpositive/stat_estimate_related_col.q 
b/ql/src/test/queries/clientpositive/stat_estimate_related_col.q
index 54deb5b..5aa380f 100644
--- a/ql/src/test/queries/clientpositive/stat_estimate_related_col.q
+++ b/ql/src/test/queries/clientpositive/stat_estimate_related_col.q
@@ -1,5 +1,8 @@
 -- disable cbo because calcite can see thru these test cases; the goal here is 
to test the annotation processing
 set hive.cbo.enable=false;
+
+set hive.semantic.analyzer.hook=org.apache.hadoop.hive.ql.hooks.AccurateEstimatesCheckerHook;
+set accurate.estimate.checker.absolute.error=5;
  
 set hive.explain.user=true;
 set hive.strict.checks.cartesian.product=false;
@@ -36,6 +39,9 @@ explain analyze select count(*) from t8 ta, t8 tb where ta.a 
= tb.b and ta.a=3;
 explain analyze select sum(a) from t8 where b in 
(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50)
 and b=2 and b=2 and 2=b group by b;
 
 explain analyze select sum(a) from t8 where b=2 and (b = 1 or b=2) group by b;
+
+set accurate.estimate.checker.absolute.error=8;
+
 explain analyze select sum(a) from t8 where b=2 and (b = 1 or b=2) and (b=1 or 
b=3) group by b;
 
 explain analyze select sum(a) from t8 where
diff --git a/ql/src/test/results/clientpositive/stat_estimate_related_col.q.out 
b/ql/src/test/results/clientpositive/stat_estimate_related_col.q.out
index a041e51..8546612 100644
--- a/ql/src/test/results/clientpositive/stat_estimate_related_col.q.out
+++ b/ql/src/test/results/clientpositive/stat_estimate_related_col.q.out
@@ -93,18 +93,20 @@ STAGE PLANS:
   TableScan
 alias: t8
 filterExpr: (b) IN (2, 3) (type: boolean)
-Statistics: Num rows: 40/1 Data size: 320 Basic stats: COMPLETE 
Column stats: COMPLETE
+Statistics: Num rows: 40/40 Data size: 320 Basic stats: COMPLETE 
Column stats: COMPLETE
 Filter Operator
   predicate: (b) IN (2, 3) (type: boolean)
-  Statistics: Num rows: 16/1 Data size: 128 Basic stats: COMPLETE 
Column stats: COMPLETE
+  Statistics: Num rows: 16/16 Data size: 128 Basic stats: COMPLETE 
Column stats: COMPLETE
   Group By Operator
 aggregations: sum(a)
 keys: b (type: int)
+minReductionHashAggr: 0.99
 mode: hash
 outputColumnNames: _col0, _col1
-Statistics: Num rows: 2/1 Data size: 24 Basic stats: COMPLETE 
Column stats: COMPLETE
+Statistics: Num rows: 2/2 Data size: 24 Basic stats: COMPLETE 
Column stats: COMPLETE
 Reduce Output Operator
   key expressions: _col0 (type: int)
+  null sort order: z
   sort order: +
   Map-reduce partition columns: _col0 (type: int)
   Statistics: Num rows: 2/2 Data size: 24 Basic stats: 
COMPLETE Column stats: COMPLETE
@@ -162,22 +164,24 @@ STAGE PLANS:
   TableScan
 alias: t8
 filterExpr: (b = 2) (type: boolean)
-Statistics: Num rows: 40/1 Data size: 320 Basic stats: COMPLETE 
Column stats: COMPLETE
+Statistics: Num rows: 40/40 Data size: 320 Basic stats: COMPLETE 
Column stats: COMPLETE

[hive] branch master updated (d676dfb -> 18d0b5a)

2020-04-04 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from d676dfb  HIVE-23131 Remove 
ql/src/test/results/clientnegative/orc_type_promotion3_acid.q (Miklos Gergely, 
reviewed by Laszlo Bodor)
 new 216e73a  HIVE-23030: Enable sketch union-s to be rolled up (Zoltan 
Haindrich reviewed by Jesus Camacho Rodriguez)
 new 18d0b5a  HIVE-20728: Enable flaky test back: 
stat_estimate_related_col.q (Zoltan Haindrich reviewed by Jesus Camacho 
Rodriguez)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../test/resources/testconfiguration.properties|   1 +
 .../apache/hadoop/hive/cli/control/CliConfigs.java |   1 -
 .../hadoop/hive/ql/exec/DataSketchesFunctions.java | 397 ++---
 .../hadoop/hive/ql/exec/FunctionRegistry.java  |   2 +-
 .../org/apache/hadoop/hive/ql/exec/Registry.java   |  26 ++
 .../hive/ql/optimizer/calcite/HiveRelBuilder.java  |   5 +
 ...ggFunction.java => HiveMergeableAggregate.java} |  43 ++-
 .../calcite/functions/HiveSqlSumAggFunction.java   |   2 -
 .../calcite/translator/SqlFunctionConverter.java   |  21 +-
 .../org/apache/hive/plugin/api/HiveUDFPlugin.java  |  26 +-
 .../sketches_materialized_view_rollup.q|  32 ++
 .../clientpositive/stat_estimate_related_col.q |   6 +
 .../llap/sketches_materialized_view_rollup.q.out   | 187 ++
 .../clientpositive/stat_estimate_related_col.q.out |  76 ++--
 14 files changed, 627 insertions(+), 198 deletions(-)
 copy 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/functions/{HiveSqlMinMaxAggFunction.java
 => HiveMergeableAggregate.java} (58%)
 copy serde/src/java/org/apache/hadoop/hive/serde2/ColumnSet.java => 
ql/src/java/org/apache/hive/plugin/api/HiveUDFPlugin.java (71%)
 create mode 100644 
ql/src/test/queries/clientpositive/sketches_materialized_view_rollup.q
 create mode 100644 
ql/src/test/results/clientpositive/llap/sketches_materialized_view_rollup.q.out



[hive] 01/02: HIVE-23030: Enable sketch union-s to be rolled up (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)

2020-04-04 Thread kgyrtkirk

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 216e73a7ddd58974e6b1151d1b8d0e26f5f69239
Author: Zoltan Haindrich 
AuthorDate: Sat Apr 4 18:51:39 2020 +

HIVE-23030: Enable sketch union-s to be rolled up (Zoltan Haindrich 
reviewed by Jesus Camacho Rodriguez)

Signed-off-by: Zoltan Haindrich 
---
 .../test/resources/testconfiguration.properties|   1 +
 .../hadoop/hive/ql/exec/DataSketchesFunctions.java | 397 ++---
 .../hadoop/hive/ql/exec/FunctionRegistry.java  |   2 +-
 .../org/apache/hadoop/hive/ql/exec/Registry.java   |  26 ++
 .../hive/ql/optimizer/calcite/HiveRelBuilder.java  |   5 +
 .../calcite/functions/HiveMergeableAggregate.java  |  66 
 .../calcite/functions/HiveSqlSumAggFunction.java   |   2 -
 .../calcite/translator/SqlFunctionConverter.java   |  21 +-
 .../org/apache/hive/plugin/api/HiveUDFPlugin.java  |  35 ++
 .../sketches_materialized_view_rollup.q|  32 ++
 .../llap/sketches_materialized_view_rollup.q.out   | 187 ++
 11 files changed, 634 insertions(+), 140 deletions(-)

diff --git a/itests/src/test/resources/testconfiguration.properties 
b/itests/src/test/resources/testconfiguration.properties
index f54c96e..d2c9127 100644
--- a/itests/src/test/resources/testconfiguration.properties
+++ b/itests/src/test/resources/testconfiguration.properties
@@ -824,6 +824,7 @@ minillaplocal.query.files=\
   schq_ingest.q,\
   sketches_hll.q,\
   sketches_theta.q,\
+  sketches_materialized_view_rollup.q,\
   table_access_keys_stats.q,\
   temp_table_llap_partitioned.q,\
   tez_bmj_schema_evolution.q,\
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java
index b9d265f..eec90c6 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java
@@ -18,15 +18,35 @@
 
 package org.apache.hadoop.hive.ql.exec;
 
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import org.apache.calcite.rel.type.RelDataTypeImpl;
+import org.apache.calcite.rel.type.RelProtoDataType;
+import org.apache.calcite.sql.SqlFunction;
+import org.apache.calcite.sql.SqlKind;
+import org.apache.calcite.sql.type.InferTypes;
+import org.apache.calcite.sql.type.OperandTypes;
+import org.apache.calcite.sql.type.ReturnTypes;
+import org.apache.calcite.sql.type.SqlTypeName;
+import 
org.apache.hadoop.hive.ql.optimizer.calcite.functions.HiveMergeableAggregate;
 import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFResolver2;
 import org.apache.hadoop.hive.ql.udf.generic.GenericUDTF;
+import org.apache.hive.plugin.api.HiveUDFPlugin;
 
 /**
  * Registers functions from the DataSketches library as builtin functions.
  *
  * In an effort to show a more consistent
  */
-public class DataSketchesFunctions {
+public final class DataSketchesFunctions implements HiveUDFPlugin {
+
+  public static final DataSketchesFunctions INSTANCE = new 
DataSketchesFunctions();
+
+  private static final String DATASKETCHES_PREFIX = "ds";
 
   private static final String DATA_TO_SKETCH = "sketch";
   private static final String SKETCH_TO_ESTIMATE_WITH_ERROR_BOUNDS = 
"estimate_bounds";
@@ -53,169 +73,276 @@ public class DataSketchesFunctions {
   private static final String SKETCH_TO_VARIANCES = "variances";
   private static final String SKETCH_TO_PERCENTILE = "percentile";
 
-  private final Registry system;
+  private final List sketchClasses;
+  private final ArrayList descriptors;
+
+  private DataSketchesFunctions() {
+this.sketchClasses = new ArrayList();
+this.descriptors = new ArrayList();
+registerHll();
+registerCpc();
+registerKll();
+registerTheta();
+registerTuple();
+registerQuantiles();
+registerFrequencies();
+
+buildCalciteFns();
+buildDescritors();
+  }
+
+  @Override
+  public Iterable getDescriptors() {
+return descriptors;
+  }
+
+  private void buildDescritors() {
+for (SketchDescriptor sketchDescriptor : sketchClasses) {
+  descriptors.addAll(sketchDescriptor.fnMap.values());
+}
+  }
+
+  private void buildCalciteFns() {
+for (SketchDescriptor sd : sketchClasses) {
+  // Mergability is exposed to Calcite; which enables to use it during 
rollup.
+  RelProtoDataType sketchType = RelDataTypeImpl.proto(SqlTypeName.BINARY, 
true);
+
+  SketchFunctionDescriptor sketchSFD = sd.fnMap.get(DATA_TO_SKETCH);
+  SketchFunctionDescriptor unionSFD = sd.fnMap.get(UNION_SKETCH);
+
+  if (sketchSFD == null || unionSFD == null) {
+continue;
+  }
+
+  HiveMergeableAggregate unionFn = new 
HiveMergeableAggregat

[hive] branch master updated: HIVE-22983: Fix the comments on ConstantPropagate (Zhihua Deng via Zoltan Haindrich)

2020-03-27 Thread kgyrtkirk

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 9efafef  HIVE-22983: Fix the comments on ConstantPropagate (Zhihua 
Deng via Zoltan Haindrich)
9efafef is described below

commit 9efafef0822bac0cf7bb5f146f847f1591a456a1
Author: Zhihua Deng 
AuthorDate: Fri Mar 27 08:11:33 2020 +

HIVE-22983: Fix the comments on ConstantPropagate (Zhihua Deng via Zoltan 
Haindrich)

Signed-off-by: Zoltan Haindrich 
---
 .../java/org/apache/hadoop/hive/ql/optimizer/ConstantPropagate.java | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConstantPropagate.java 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConstantPropagate.java
index 47d9ec7..a040d7e 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConstantPropagate.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConstantPropagate.java
@@ -57,7 +57,7 @@ import org.apache.hadoop.hive.ql.parse.SemanticException;
  * some constants of its parameters.
  *
  * 3. Propagate expression: if the expression is an assignment like 
column=constant, the expression
- * will be propagate to parents to see if further folding operation is 
possible.
+ * will be propagate to children to see if further folding operation is 
possible.
  */
 public class ConstantPropagate extends Transform {
 
@@ -147,7 +147,7 @@ public class ConstantPropagate extends Transform {
   || getDispatchedList().containsAll(parents)) {
 opStack.push(nd);
 
-// all children are done or no need to walk the children
+// all parents are done or no need to walk the parents
 dispatch(nd, opStack);
 opStack.pop();
   } else {
@@ -157,7 +157,7 @@ public class ConstantPropagate extends Transform {
 return;
   }
 
-  // move all the children to the front of queue
+  // move all the children to the end of queue
   List children = nd.getChildren();
   if (children != null) {
 toWalk.removeAll(children);



[hive] branch master updated: HIVE-22940: Make the datasketches functions available as predefined functions (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)

2020-03-23 Thread kgyrtkirk

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 2105c66  HIVE-22940: Make the datasketches functions available as 
predefined functions (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)
2105c66 is described below

commit 2105c6617ef9609dd8b2f712f596c2f9cc6d972e
Author: Zoltan Haindrich 
AuthorDate: Mon Mar 23 07:58:54 2020 +

HIVE-22940: Make the datasketches functions available as predefined 
functions (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)

Signed-off-by: Zoltan Haindrich 
---
 .../test/resources/testconfiguration.properties|   2 +
 pom.xml|   1 +
 ql/pom.xml |  10 +
 .../hadoop/hive/ql/exec/DataSketchesFunctions.java | 221 +
 .../hadoop/hive/ql/exec/FunctionRegistry.java  |   3 +-
 ql/src/test/queries/clientpositive/sketches_hll.q  |  16 ++
 .../test/queries/clientpositive/sketches_theta.q   |  33 +++
 .../results/clientpositive/llap/sketches_hll.q.out |  59 ++
 .../clientpositive/llap/sketches_theta.q.out   | 120 +++
 .../results/clientpositive/show_functions.q.out| 136 +
 10 files changed, 599 insertions(+), 2 deletions(-)

diff --git a/itests/src/test/resources/testconfiguration.properties 
b/itests/src/test/resources/testconfiguration.properties
index f71ed3d..3510016 100644
--- a/itests/src/test/resources/testconfiguration.properties
+++ b/itests/src/test/resources/testconfiguration.properties
@@ -818,6 +818,8 @@ minillaplocal.query.files=\
   schq_materialized.q,\
   schq_analyze.q,\
   schq_ingest.q,\
+  sketches_hll.q,\
+  sketches_theta.q,\
   table_access_keys_stats.q,\
   temp_table_llap_partitioned.q,\
   tez_bmj_schema_evolution.q,\
diff --git a/pom.xml b/pom.xml
index af70972..579e745 100644
--- a/pom.xml
+++ b/pom.xml
@@ -228,6 +228,7 @@
 2.4.0
 3.0.11
 1.23
+1.0.0-incubating
   
 
   
diff --git a/ql/pom.xml b/ql/pom.xml
index 161a527..9b45d31 100644
--- a/ql/pom.xml
+++ b/ql/pom.xml
@@ -313,6 +313,11 @@
   test
 
 
+   org.apache.datasketches
+   datasketches-hive
+   ${datasketches.version}
+
+
   com.lmax
   disruptor
   ${disruptor.version}
@@ -1007,6 +1012,7 @@
   io.dropwizard.metrics:metrics-jvm
   io.dropwizard.metrics:metrics-json
   com.zaxxer:HikariCP
+  org.apache.datasketches:*
   org.apache.calcite:*
   org.apache.calcite.avatica:avatica
 
@@ -1040,6 +1046,10 @@
   com.google.thirdparty.publicsuffix
   
org.apache.hive.com.google.thirdparty.publicsuffix
 
+
+  org.apache.datasketches
+  
org.apache.hive.org.apache.datasketches
+
   
 
   
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java
new file mode 100644
index 000..b9d265f
--- /dev/null
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java
@@ -0,0 +1,221 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.exec;
+
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFResolver2;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDTF;
+
+/**
+ * Registers functions from the DataSketches library as builtin functions.
+ *
+ * In an effort to show a more consistent
+ */
+public class DataSketchesFunctions {
+
+  private static final String DATA_TO_SKETCH = "sketch";
+  private static final String SKETCH_TO_ESTIMATE_WITH_ERROR_BOUNDS = 
"estimate_bounds";
+  private static final String SKETCH_TO_ESTIMATE = "estimate";
+  private static final String SKETCH_TO_STRING = "stringify";
+  private static final String UNION_SKETCH = "union"

[hive] branch master updated: HIVE-22126: hive-exec packaging should shade guava (Eugene Chung via Ádám Szita, Zoltan Haindrich)

2020-03-22 Thread kgyrtkirk

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new ef8446f  HIVE-22126: hive-exec packaging should shade guava (Eugene 
Chung via Ádám Szita, Zoltan Haindrich)
ef8446f is described below

commit ef8446f4431b77cce4447d7a7286dfa4ce46d33a
Author: Eugene Chung 
AuthorDate: Sun Mar 22 20:26:12 2020 +

HIVE-22126: hive-exec packaging should shade guava (Eugene Chung via Ádám 
Szita, Zoltan Haindrich)

Signed-off-by: Zoltan Haindrich 
---
 .../org/apache/hadoop/hive/ql/log/PerfLogger.java  |  4 +--
 itests/hive-blobstore/pom.xml  | 20 ++-
 itests/hive-minikdc/pom.xml|  8 -
 itests/hive-unit/pom.xml   | 18 +-
 itests/qtest-accumulo/pom.xml  | 40 --
 itests/qtest-kudu/pom.xml  | 34 --
 itests/qtest-spark/pom.xml | 14 +---
 pom.xml| 23 +
 ql/pom.xml | 26 --
 .../org/apache/hadoop/hive/ql/QueryDisplay.java|  7 ++--
 .../calcite/reloperators/HiveAggregate.java|  9 ++---
 .../calcite/rules/HiveRelDecorrelator.java |  2 +-
 .../calcite/rules/HiveSubQueryRemoveRule.java  |  8 ++---
 13 files changed, 119 insertions(+), 94 deletions(-)

diff --git a/common/src/java/org/apache/hadoop/hive/ql/log/PerfLogger.java 
b/common/src/java/org/apache/hadoop/hive/ql/log/PerfLogger.java
index 2707987..f1181fd 100644
--- a/common/src/java/org/apache/hadoop/hive/ql/log/PerfLogger.java
+++ b/common/src/java/org/apache/hadoop/hive/ql/log/PerfLogger.java
@@ -218,11 +218,11 @@ public class PerfLogger {
   }
 
 
-  public ImmutableMap getStartTimes() {
+  public Map getStartTimes() {
 return ImmutableMap.copyOf(startTimes);
   }
 
-  public ImmutableMap getEndTimes() {
+  public Map getEndTimes() {
 return ImmutableMap.copyOf(endTimes);
   }
 
diff --git a/itests/hive-blobstore/pom.xml b/itests/hive-blobstore/pom.xml
index efc6b37..09955c5 100644
--- a/itests/hive-blobstore/pom.xml
+++ b/itests/hive-blobstore/pom.xml
@@ -49,10 +49,6 @@
   protobuf-java
 
 
-  org.apache.calcite
-  calcite-core
-
-
   org.apache.hive
   hive-common
   test
@@ -64,6 +60,11 @@
 
 
   org.apache.hive
+  hive-exec
+  test
+
+
+  org.apache.hive
   hive-standalone-metastore-common
   test
 
@@ -94,6 +95,12 @@
   org.apache.hive
   hive-it-util
   test
+  
+
+  org.apache.calcite
+  calcite-core
+
+  
 
 
   org.apache.hive
@@ -101,11 +108,6 @@
   test
 
 
-  org.apache.hive
-  hive-exec
-  test
-
-
   org.apache.hadoop
   hadoop-common
   test
diff --git a/itests/hive-minikdc/pom.xml b/itests/hive-minikdc/pom.xml
index f1328aa..22cf244 100644
--- a/itests/hive-minikdc/pom.xml
+++ b/itests/hive-minikdc/pom.xml
@@ -42,14 +42,6 @@
   protobuf-java
 
 
-  org.apache.calcite
-  calcite-core
-
-
-  org.apache.calcite
-  calcite-linq4j
-
-
   org.apache.hive
   hive-common
   test
diff --git a/itests/hive-unit/pom.xml b/itests/hive-unit/pom.xml
index bc20cd6..103975f 100644
--- a/itests/hive-unit/pom.xml
+++ b/itests/hive-unit/pom.xml
@@ -40,19 +40,24 @@
 
 
   org.apache.hive
-  hive-jdbc
+  hive-exec
 
 
   org.apache.hive
-  hive-jdbc-handler
+  hive-exec
+  tests
 
 
   org.apache.hive
-  hive-service
+  hive-jdbc
 
 
   org.apache.hive
-  hive-exec
+  hive-jdbc-handler
+
+
+  org.apache.hive
+  hive-service
 
 
   org.apache.hive
@@ -175,11 +180,6 @@
 
 
   org.apache.hive
-  hive-exec
-  tests
-
-
-  org.apache.hive
   hive-common
   tests
   test
diff --git a/itests/qtest-accumulo/pom.xml b/itests/qtest-accumulo/pom.xml
index b0373d5..a35d2a8 100644
--- a/itests/qtest-accumulo/pom.xml
+++ b/itests/qtest-accumulo/pom.xml
@@ -56,12 +56,18 @@
   org.apache.hive
   hive-contrib
   test
-  
-
-  org.apache.hive
-  hive-exec
-
-  
+
+
+  org.apache.hive
+  hive-exec
+  test
+  core
+
+
+  org.apache.hive
+  hive-exec
+  test
+  tests
 
 
   org.apache.hive
@@ -96,8 +102,8 @@
   test
   
 
-  org.apache.hive
-  hive-exec
+  org.apache.calcite
+  calcite-core
 
 
   org.apache.hive
@@ -115,24 +121,6 @@
   hive-udf
   test

[hive] branch master updated: HIVE-23035: Scheduled query executor may hang in case TezAMs are launched on-demand (Zoltan Haindrich reviewed by László Bodor)

2020-03-19 Thread kgyrtkirk

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 213ca2e  HIVE-23035: Scheduled query executor may hang in case TezAMs 
are launched on-demand (Zoltan Haindrich reviewed by László Bodor)
213ca2e is described below

commit 213ca2e3f0e4786b1eca1c4799818b21723d43a4
Author: Zoltan Haindrich 
AuthorDate: Thu Mar 19 14:34:16 2020 +

HIVE-23035: Scheduled query executor may hang in case TezAMs are launched 
on-demand (Zoltan Haindrich reviewed by László Bodor)

Signed-off-by: Zoltan Haindrich 
---
 .../apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionService.java  | 1 +
 1 file changed, 1 insertion(+)

diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionService.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionService.java
index 8443b3f..ca12093 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionService.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionService.java
@@ -225,6 +225,7 @@ public class ScheduledQueryExecutionService implements 
Closeable {
 conf.setVar(HiveConf.ConfVars.HIVE_AUTHENTICATOR_MANAGER, 
SessionStateUserAuthenticator.class.getName());
 conf.unset(HiveConf.ConfVars.HIVESESSIONID.varname);
 state = new SessionState(conf, q.getUser());
+state.setIsHiveServerQuery(true);
 SessionState.start(state);
 reportQueryProgress();
 try (



[hive] 02/03: HIVE-22539: HiveServer2 SPNEGO authentication should skip if authorization header is empty (Kevin Risden via Zoltan Haindrich)

2020-03-17 Thread kgyrtkirk

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit abc067abb9ace807f0106d896ce84e48967d3e9c
Author: Kevin Risden 
AuthorDate: Tue Mar 17 17:01:03 2020 +

HIVE-22539: HiveServer2 SPNEGO authentication should skip if authorization 
header is empty (Kevin Risden via Zoltan Haindrich)

Signed-off-by: Zoltan Haindrich 
---
 .../hive/service/cli/thrift/ThriftHttpServlet.java | 53 
 .../service/cli/thrift/ThriftHttpServletTest.java  | 71 ++
 2 files changed, 98 insertions(+), 26 deletions(-)

diff --git 
a/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java 
b/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java
index e2231c2..6eb2606 100644
--- a/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java
+++ b/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java
@@ -36,6 +36,7 @@ import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletResponse;
 import javax.ws.rs.core.NewCookie;
 
+import com.google.common.annotations.VisibleForTesting;
 import com.google.common.io.ByteStreams;
 import org.apache.commons.codec.binary.Base64;
 import org.apache.commons.codec.binary.StringUtils;
@@ -45,7 +46,6 @@ import 
org.apache.hadoop.hive.shims.HadoopShims.KerberosNameShim;
 import org.apache.hadoop.hive.shims.ShimLoader;
 import org.apache.hadoop.hive.shims.Utils;
 import org.apache.hadoop.security.UserGroupInformation;
-import 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator;
 import org.apache.hive.service.CookieSigner;
 import org.apache.hive.service.auth.AuthenticationProviderFactory;
 import org.apache.hive.service.auth.AuthenticationProviderFactory.AuthMethods;
@@ -163,7 +163,7 @@ public class ThriftHttpServlet extends TServlet {
 List forwardedAddresses = 
Arrays.asList(forwarded_for.split(","));
 SessionManager.setForwardedAddresses(forwardedAddresses);
   } else {
-SessionManager.setForwardedAddresses(Collections.emptyList());
+SessionManager.setForwardedAddresses(Collections.emptyList());
   }
 
   // If the cookie based authentication is not enabled or the request does 
not have a valid
@@ -196,7 +196,7 @@ public class ThriftHttpServlet extends TServlet {
 String delegationToken = 
request.getHeader(HIVE_DELEGATION_TOKEN_HEADER);
 // Each http request must have an Authorization header
 if ((delegationToken != null) && (!delegationToken.isEmpty())) {
-  clientUserName = doTokenAuth(request, response);
+  clientUserName = doTokenAuth(request);
 } else {
   clientUserName = doKerberosAuth(request);
 }
@@ -319,12 +319,12 @@ public class ThriftHttpServlet extends TServlet {
* Each cookie is of the format [key]=[value]
*/
   private String toCookieStr(Cookie[] cookies) {
-   String cookieStr = "";
+StringBuilder cookieStr = new StringBuilder();
 
-   for (Cookie c : cookies) {
- cookieStr += c.getName() + "=" + c.getValue() + " ;\n";
+for (Cookie c : cookies) {
+  cookieStr.append(c.getName()).append('=').append(c.getValue()).append(" 
;\n");
 }
-return cookieStr;
+return cookieStr.toString();
   }
 
   /**
@@ -386,9 +386,9 @@ public class ThriftHttpServlet extends TServlet {
 
   /**
* Do the LDAP/PAM authentication
-   * @param request
-   * @param authType
-   * @throws HttpAuthenticationException
+   * @param request request to authenticate
+   * @param authType type of authentication
+   * @throws HttpAuthenticationException on error authenticating end user
*/
   private String doPasswdAuth(HttpServletRequest request, String authType)
   throws HttpAuthenticationException {
@@ -408,7 +408,7 @@ public class ThriftHttpServlet extends TServlet {
 return userName;
   }
 
-  private String doTokenAuth(HttpServletRequest request, HttpServletResponse 
response)
+  private String doTokenAuth(HttpServletRequest request)
   throws HttpAuthenticationException {
 String tokenStr = request.getHeader(HIVE_DELEGATION_TOKEN_HEADER);
 try {
@@ -424,18 +424,23 @@ public class ThriftHttpServlet extends TServlet {
* which GSS-API will extract information from.
* In case of a SPNego request we use the httpUGI,
* for the authenticating service tickets.
-   * @param request
-   * @return
-   * @throws HttpAuthenticationException
+   * @param request Request to act on
+   * @return client principal name
+   * @throws HttpAuthenticationException on error authenticating the user
*/
-  private String doKerberosAuth(HttpServletRequest request)
+  @VisibleForTesting
+  String doKerberosAuth(HttpServletRequest request)
   throws Ht

[hive] 03/03: HIVE-22841: ThriftHttpServlet#getClientNameFromCookie should handle CookieSigner IllegalArgumentException on invalid cookie signature (Kevin Risden via Zoltan Haindrich)

2020-03-17 Thread kgyrtkirk

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 94b43f40353216127742eaf1c3604479a36c660f
Author: Kevin Risden 
AuthorDate: Tue Mar 17 17:01:17 2020 +

HIVE-22841: ThriftHttpServlet#getClientNameFromCookie should handle 
CookieSigner IllegalArgumentException on invalid cookie signature (Kevin Risden 
via Zoltan Haindrich)

Signed-off-by: Zoltan Haindrich 
---
 .../auth/TestHttpCookieAuthenticationTest.java | 185 +
 .../java/org/apache/hive/jdbc/HiveConnection.java  |   2 +-
 .../hive/jdbc/HttpRequestInterceptorBase.java  |   5 +-
 .../hive/service/cli/thrift/ThriftHttpServlet.java |   9 +-
 .../org/apache/hive/service/TestCookieSigner.java  |  53 +++--
 .../cli/thrift/ThriftCliServiceTestWithCookie.java | 231 -
 6 files changed, 231 insertions(+), 254 deletions(-)

diff --git 
a/itests/hive-unit/src/test/java/org/apache/hive/service/auth/TestHttpCookieAuthenticationTest.java
 
b/itests/hive-unit/src/test/java/org/apache/hive/service/auth/TestHttpCookieAuthenticationTest.java
new file mode 100644
index 000..827cc68
--- /dev/null
+++ 
b/itests/hive-unit/src/test/java/org/apache/hive/service/auth/TestHttpCookieAuthenticationTest.java
@@ -0,0 +1,185 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hive.service.auth;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.lang.reflect.Field;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.Statement;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hive.jdbc.HiveConnection;
+import org.apache.hive.jdbc.HiveDriver;
+import org.apache.hive.jdbc.miniHS2.MiniHS2;
+import org.apache.http.client.CookieStore;
+import org.apache.http.client.HttpClient;
+import org.apache.http.cookie.Cookie;
+import org.apache.http.impl.cookie.BasicClientCookie;
+import org.apache.thrift.transport.THttpClient;
+import org.apache.thrift.transport.TTransport;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * TestHttpCookieAuthenticationTest.
+ */
+public class TestHttpCookieAuthenticationTest {
+  private static MiniHS2 miniHS2;
+
+  @BeforeClass
+  public static void startServices() throws Exception {
+miniHS2 = new MiniHS2.Builder().withHTTPTransport().build();
+
+Map configOverlay = new HashMap<>();
+configOverlay.put(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY.varname, 
Boolean.FALSE.toString());
+
configOverlay.put(HiveConf.ConfVars.HIVE_SERVER2_THRIFT_HTTP_COOKIE_AUTH_ENABLED.varname,
 Boolean.TRUE.toString());
+miniHS2.start(configOverlay);
+  }
+
+  @AfterClass
+  public static void stopServices() throws Exception {
+if (miniHS2 != null && miniHS2.isStarted()) {
+  miniHS2.stop();
+  miniHS2.cleanup();
+  miniHS2 = null;
+  MiniHS2.cleanupLocalDir();
+}
+  }
+
+  @Test
+  public void testHttpJdbcCookies() throws Exception {
+String sqlQuery = "show tables";
+
+Class.forName(HiveDriver.class.getCanonicalName());
+
+String username = System.getProperty("user.name");
+try(Connection connection = 
DriverManager.getConnection(miniHS2.getJdbcURL(), username, "bar")) {
+  assertNotNull(connection);
+
+  CookieStore cookieStore = getCookieStoreFromConnection(connection);
+  assertNotNull(cookieStore);
+
+  // Test that basic cookies worked
+  List cookies1 = cookieStore.getCookies();
+  assertEquals(1, cookies1.size());
+
+  try(Statement statement = connection.createStatement()) {
+assertNotNull(statement);
+try(ResultSet resultSet = statement.executeQuery(sqlQuery)) {
+  assertNotNull(resultSet);
+}
+  }
+
+  // Check that cookies worked and still the same after a statement
+ 
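The flow above is the heart of cookie-based authentication: the first request authenticates normally, the server returns a signed cookie, and later statements reuse it. The signing step can be illustrated with an HMAC over the cookie value. The class below is an invented sketch, not Hive's actual `CookieSigner`; the wire format (`value&s=signature`) is an assumption made for the example, but `verify` returning `null` on a bad signature mirrors the HIVE-22841 goal of falling back to re-authentication instead of erroring out.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class CookieSignerSketch {
  private final byte[] secret;

  public CookieSignerSketch(byte[] secret) {
    this.secret = secret.clone();
  }

  /** Append an HMAC-SHA256 signature so the server can detect tampering. */
  public String sign(String value) {
    return value + "&s=" + hmac(value);
  }

  /**
   * Return the raw value if the signature matches; return null on any
   * malformed or forged cookie instead of throwing, so an invalid cookie
   * simply falls back to re-authentication.
   */
  public String verify(String signed) {
    int idx = signed.lastIndexOf("&s=");
    if (idx < 0) {
      return null;
    }
    String value = signed.substring(0, idx);
    byte[] expected = hmac(value).getBytes(StandardCharsets.UTF_8);
    byte[] actual = signed.substring(idx + 3).getBytes(StandardCharsets.UTF_8);
    // constant-time comparison avoids leaking signature prefixes
    return MessageDigest.isEqual(expected, actual) ? value : null;
  }

  private String hmac(String value) {
    try {
      Mac mac = Mac.getInstance("HmacSHA256");
      mac.init(new SecretKeySpec(secret, "HmacSHA256"));
      return Base64.getUrlEncoder().withoutPadding()
          .encodeToString(mac.doFinal(value.getBytes(StandardCharsets.UTF_8)));
    } catch (Exception e) {
      throw new IllegalStateException(e);
    }
  }
}
```

The HMAC key stays server-side, so a client can present the cookie but cannot mint or alter one.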

[hive] branch master updated (26cc315 -> 94b43f4)

2020-03-17 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 26cc315  HIVE-23011: Shared work optimizer should check residual 
predicates when comparing joins (Jesus Camacho Rodriguez, reviewed by Vineet 
Garg)
 new a6a0ba5  HIVE-22901: Variable substitution can lead to OOM on circular 
references (Daniel Voros via Zoltan Haindrich)
 new abc067a  HIVE-22539: HiveServer2 SPNEGO authentication should skip if 
authorization header is empty (Kevin Risden via Zoltan Haindrich)
 new 94b43f4  HIVE-22841: ThriftHttpServlet#getClientNameFromCookie should 
handle CookieSigner IllegalArgumentException on invalid cookie signature (Kevin 
Risden via Zoltan Haindrich)

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../java/org/apache/hadoop/hive/conf/HiveConf.java |   3 +
 .../apache/hadoop/hive/conf/SystemVariables.java   |  10 +
 .../hadoop/hive/conf/TestSystemVariables.java  |  29 +++
 .../org/apache/hive/jdbc/TestRestrictedList.java   |   1 +
 .../auth/TestHttpCookieAuthenticationTest.java | 185 +
 .../java/org/apache/hive/jdbc/HiveConnection.java  |   2 +-
 .../hive/jdbc/HttpRequestInterceptorBase.java  |   5 +-
 .../hive/service/cli/thrift/ThriftHttpServlet.java |  62 +++---
 .../org/apache/hive/service/TestCookieSigner.java  |  53 +++--
 .../cli/thrift/ThriftCliServiceTestWithCookie.java | 231 -
 .../service/cli/thrift/ThriftHttpServletTest.java  |  71 +++
 11 files changed, 372 insertions(+), 280 deletions(-)
 create mode 100644 
itests/hive-unit/src/test/java/org/apache/hive/service/auth/TestHttpCookieAuthenticationTest.java
 delete mode 100644 
service/src/test/org/apache/hive/service/cli/thrift/ThriftCliServiceTestWithCookie.java
 create mode 100644 
service/src/test/org/apache/hive/service/cli/thrift/ThriftHttpServletTest.java



[hive] 01/03: HIVE-22901: Variable substitution can lead to OOM on circular references (Daniel Voros via Zoltan Haindrich)

2020-03-17 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit a6a0ba59dff73f7ad27bf02a20da1ce06085d0b2
Author: Daniel Voros 
AuthorDate: Tue Mar 17 17:00:17 2020 +

HIVE-22901: Variable substitution can lead to OOM on circular references 
(Daniel Voros via Zoltan Haindrich)

Signed-off-by: Zoltan Haindrich 
---
 .../java/org/apache/hadoop/hive/conf/HiveConf.java |  3 +++
 .../apache/hadoop/hive/conf/SystemVariables.java   | 10 
 .../hadoop/hive/conf/TestSystemVariables.java  | 29 ++
 .../org/apache/hive/jdbc/TestRestrictedList.java   |  1 +
 4 files changed, 43 insertions(+)

diff --git a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
index 54b33a3..d50912b 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
@@ -4791,6 +4791,7 @@ public class HiveConf extends Configuration {
 "hive.spark.client.rpc.max.size," +
 "hive.spark.client.rpc.threads," +
 "hive.spark.client.secret.bits," +
+"hive.query.max.length," +
 "hive.spark.client.rpc.server.address," +
 "hive.spark.client.rpc.server.port," +
 "hive.spark.client.rpc.sasl.mechanisms," +
@@ -4827,6 +4828,8 @@ public class HiveConf extends Configuration {
 SPARK_CLIENT_TYPE.varname,
 "Comma separated list of variables which are related to remote spark 
context.\n" +
 "Changing these variables will result in re-creating the spark 
session."),
+HIVE_QUERY_MAX_LENGTH("hive.query.max.length", "10Mb", new 
SizeValidator(), "The maximum" +
+" size of a query string. Enforced after variable substitutions."),
 HIVE_QUERY_TIMEOUT_SECONDS("hive.query.timeout.seconds", "0s",
 new TimeValidator(TimeUnit.SECONDS),
 "Timeout for Running Query in seconds. A nonpositive value means 
infinite. " +
diff --git a/common/src/java/org/apache/hadoop/hive/conf/SystemVariables.java 
b/common/src/java/org/apache/hadoop/hive/conf/SystemVariables.java
index 695f3ec..89ea20e 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/SystemVariables.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/SystemVariables.java
@@ -88,6 +88,10 @@ public class SystemVariables {
   }
 
  protected final String substitute(Configuration conf, String expr, int depth) {
+long maxLength = 0;
+if (conf != null) {
+  maxLength = HiveConf.getSizeVar(conf, HiveConf.ConfVars.HIVE_QUERY_MAX_LENGTH);
+}
 Matcher match = varPat.matcher("");
 String eval = expr;
 StringBuilder builder = new StringBuilder();
@@ -107,12 +111,18 @@ public class SystemVariables {
   found = true;
 }
 builder.append(eval.substring(prev, match.start())).append(substitute);
+if (maxLength > 0 && builder.length() > maxLength) {
+  throw new IllegalStateException("Query length longer than hive.query.max.length ("+builder.length()+">"+maxLength+").");
+}
 prev = match.end();
   }
   if (!found) {
 return eval;
   }
   builder.append(eval.substring(prev));
+  if (maxLength > 0 && builder.length() > maxLength) {
+throw new IllegalStateException("Query length longer than hive.query.max.length ("+builder.length()+">"+maxLength+").");
+  }
   eval = builder.toString();
 }
 if (s > depth) {
diff --git 
a/common/src/test/org/apache/hadoop/hive/conf/TestSystemVariables.java 
b/common/src/test/org/apache/hadoop/hive/conf/TestSystemVariables.java
index 6004aba..3641020 100644
--- a/common/src/test/org/apache/hadoop/hive/conf/TestSystemVariables.java
+++ b/common/src/test/org/apache/hadoop/hive/conf/TestSystemVariables.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hive.conf;
 
+import org.apache.commons.lang3.RandomStringUtils;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.LocalFileSystem;
 import org.apache.hadoop.fs.Path;
@@ -24,6 +25,7 @@ import org.junit.Test;
 
 import static junit.framework.TestCase.assertEquals;
 import static junit.framework.TestCase.assertNull;
+import static org.junit.Assert.fail;
 
 public class TestSystemVariables {
   public static final String SYSTEM = "system";
@@ -74,4 +76,31 @@ public class TestSystemVariables {
 System.setProperty("java.io.tmpdir", "");
 assertEquals("", SystemVariables.substitute(systemJavaIoTmpDir));
   }
+
+  @Test
+  public void test_SubstituteL
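HIVE-22901's guard above re-checks the builder length after every substitution, so circular variable definitions fail with an `IllegalStateException` instead of growing until an OOM. Below is a self-contained sketch of the same idea with a simplified `${var}` syntax and a hypothetical class name; Hive's real logic lives in `SystemVariables.substitute` and also consults the configured depth limit.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BoundedSubstitution {
  private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)\\}");

  /**
   * Expand ${name} references from vars. Both the expansion depth and the
   * intermediate string length are capped, so self- or mutually-recursive
   * definitions (a -> b -> a) raise instead of growing without bound.
   */
  public static String substitute(String expr, Map<String, String> vars,
      int maxDepth, int maxLength) {
    String eval = expr;
    for (int depth = 0; depth < maxDepth; depth++) {
      Matcher m = VAR.matcher(eval);
      StringBuilder out = new StringBuilder();
      int prev = 0;
      boolean found = false;
      while (m.find(prev)) {
        String replacement = vars.get(m.group(1));
        if (replacement == null) {
          replacement = m.group(0);   // leave unknown variables untouched
        } else {
          found = true;
        }
        out.append(eval, prev, m.start()).append(replacement);
        if (out.length() > maxLength) {
          throw new IllegalStateException(
              "substituted query longer than " + maxLength);
        }
        prev = m.end();
      }
      if (!found) {
        return eval;                  // nothing left to expand
      }
      eval = out.append(eval.substring(prev)).toString();
    }
    throw new IllegalStateException("max substitution depth exceeded");
  }
}
```

A growing cycle trips the length cap; a non-growing cycle (such as `b -> ${c}`, `c -> ${b}`) trips the depth cap.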

[hive] branch master updated: HIVE-22970: Add a qoption to enable tests to use transactional mode (Zoltan Haindrich reviewed by Peter Vary)

2020-03-16 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 5112a9e  HIVE-22970: Add a qoption to enable tests to use 
transactional mode (Zoltan Haindrich reviewed by Peter Vary)
5112a9e is described below

commit 5112a9eaaa292b1f493a125b9d082db468f472a9
Author: Zoltan Haindrich 
AuthorDate: Mon Mar 16 14:33:04 2020 +

HIVE-22970: Add a qoption to enable tests to use transactional mode (Zoltan 
Haindrich reviewed by Peter Vary)

Signed-off-by: Zoltan Haindrich 
---
 .../java/org/apache/hadoop/hive/ql/QTestUtil.java  |  2 +
 .../hadoop/hive/ql/qoption/QTestTransactional.java | 57 ++
 .../queries/clientpositive/schq_materialized.q |  5 +-
 .../clientpositive/llap/schq_materialized.q.out|  2 +-
 4 files changed, 61 insertions(+), 5 deletions(-)

diff --git a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java 
b/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java
index 09df750..ffc0b2f 100644
--- a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java
+++ b/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java
@@ -79,6 +79,7 @@ import 
org.apache.hadoop.hive.ql.qoption.QTestAuthorizerHandler;
 import org.apache.hadoop.hive.ql.qoption.QTestOptionDispatcher;
 import org.apache.hadoop.hive.ql.qoption.QTestReplaceHandler;
 import org.apache.hadoop.hive.ql.qoption.QTestSysDbHandler;
+import org.apache.hadoop.hive.ql.qoption.QTestTransactional;
 import org.apache.hadoop.hive.ql.scheduled.QTestScheduledQueryCleaner;
 import org.apache.hadoop.hive.ql.scheduled.QTestScheduledQueryServiceProvider;
 import org.apache.hadoop.hive.ql.session.SessionState;
@@ -214,6 +215,7 @@ public class QTestUtil {
 dispatcher.register("dataset", datasetHandler);
 dispatcher.register("replace", replaceHandler);
 dispatcher.register("sysdb", new QTestSysDbHandler());
+dispatcher.register("transactional", new QTestTransactional());
dispatcher.register("scheduledqueryservice", new QTestScheduledQueryServiceProvider(conf));
dispatcher.register("scheduledquerycleaner", new QTestScheduledQueryCleaner());
 dispatcher.register("authorizer", new QTestAuthorizerHandler());
diff --git 
a/itests/util/src/main/java/org/apache/hadoop/hive/ql/qoption/QTestTransactional.java
 
b/itests/util/src/main/java/org/apache/hadoop/hive/ql/qoption/QTestTransactional.java
new file mode 100644
index 000..463cc73
--- /dev/null
+++ 
b/itests/util/src/main/java/org/apache/hadoop/hive/ql/qoption/QTestTransactional.java
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.qoption;
+
+import org.apache.hadoop.hive.ql.QTestUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * QTest transactional directive handler
+ *
+ * Enables transactional for the test.
+ * Could also make it for other QOption-s.
+ *
+ * Example:
+ * --! qt:transactional
+ *
+ */
+public class QTestTransactional implements QTestOptionHandler {
+  private static final Logger LOG = LoggerFactory.getLogger(QTestTransactional.class.getName());
+  private boolean enabled;
+
+  @Override
+  public void processArguments(String arguments) {
+enabled = true;
+  }
+
+  @Override
+  public void beforeTest(QTestUtil qt) throws Exception {
+if (enabled) {
+  qt.getConf().set("hive.support.concurrency", "true");
+  qt.getConf().set("hive.txn.manager", "org.apache.hadoop.hive.ql.lockmgr.DbTxnManager");
+}
+  }
+
+  @Override
+  public void afterTest(QTestUtil qt) throws Exception {
+enabled = false;
+  }
+
+}
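`QTestTransactional` registers under the `transactional` name with `QTestOptionDispatcher`, which scans `--! qt:...` directives in a `.q` file and forwards each one to its handler. A minimal sketch of that dispatch pattern follows; the class name, interface, and regex here are invented for illustration and are not the real `QTestOptionDispatcher` API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QOptionDispatcherSketch {
  /** Minimal stand-in for a qoption handler contract. */
  public interface OptionHandler {
    void processArguments(String arguments);
  }

  // matches lines like "--! qt:transactional" or "--! qt:replace:foo"
  private static final Pattern DIRECTIVE =
      Pattern.compile("^--!\\s*qt:(\\w+)(?::(.*))?$");

  private final Map<String, OptionHandler> handlers = new HashMap<>();

  public void register(String name, OptionHandler handler) {
    handlers.put(name, handler);
  }

  /** Scan a .q file body and invoke the handler for each known directive. */
  public List<String> process(String qFile) {
    List<String> seen = new ArrayList<>();
    for (String line : qFile.split("\n")) {
      Matcher m = DIRECTIVE.matcher(line.trim());
      if (m.matches()) {
        OptionHandler h = handlers.get(m.group(1));
        if (h != null) {
          h.processArguments(m.group(2) == null ? "" : m.group(2));
          seen.add(m.group(1));
        }
      }
    }
    return seen;
  }
}
```

Registering a handler once makes the directive available to every test, which is why the patch touches only `QTestUtil` and the new handler class.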
diff --git a/ql/src/test/queries/clientpositive/schq_materialized.q 
b/ql/src/test/queries/clientpositive/schq_materialized.q
index 9848f9f..7242f3e 100644
--- a/ql/src/test/queries/clientpositive/schq_materialized.q
+++ b/ql/src/test/queries/clientpositive/schq_materialized.q
@@ -1,5 +1,6 @@
 --! qt:aut

[hive] branch master updated: HIVE-23008: UDAFExampleMaxMinNUtil.sortedMerge must be able to handle all inputs (Zoltan Haindrich reviewed by Miklos Gergely)

2020-03-13 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 0a034d7  HIVE-23008: UDAFExampleMaxMinNUtil.sortedMerge must be able 
to handle all inputs (Zoltan Haindrich reviewed by Miklos Gergely)
0a034d7 is described below

commit 0a034d7921460346d98d0bc0b1f95c793b84d297
Author: Zoltan Haindrich 
AuthorDate: Fri Mar 13 13:01:29 2020 +

HIVE-23008: UDAFExampleMaxMinNUtil.sortedMerge must be able to handle all 
inputs (Zoltan Haindrich reviewed by Miklos Gergely)

Signed-off-by: Zoltan Haindrich 
---
 .../udaf/example/UDAFExampleMaxMinNUtil.java   |  2 +-
 .../udaf/example/TestUDAFExampleMaxMinNUtil.java   | 36 ++
 2 files changed, 37 insertions(+), 1 deletion(-)

diff --git 
a/contrib/src/java/org/apache/hadoop/hive/contrib/udaf/example/UDAFExampleMaxMinNUtil.java
 
b/contrib/src/java/org/apache/hadoop/hive/contrib/udaf/example/UDAFExampleMaxMinNUtil.java
index 2ea9ad6..ee286bd 100644
--- 
a/contrib/src/java/org/apache/hadoop/hive/contrib/udaf/example/UDAFExampleMaxMinNUtil.java
+++ 
b/contrib/src/java/org/apache/hadoop/hive/contrib/udaf/example/UDAFExampleMaxMinNUtil.java
@@ -192,7 +192,7 @@ public final class UDAFExampleMaxMinNUtil {
 break;
   }
   if (p2 < n2) {
-if (p1 == n1 || comparator.compare(a2.get(p2), a1.get(p1)) < 0) {
+if (p1 == n1 || comparator.compare(a2.get(p2), a1.get(p1)) <= 0) {
   output.add(a2.get(p2++));
 }
   }
diff --git 
a/contrib/src/test/org/apache/hadoop/hive/contrib/udaf/example/TestUDAFExampleMaxMinNUtil.java
 
b/contrib/src/test/org/apache/hadoop/hive/contrib/udaf/example/TestUDAFExampleMaxMinNUtil.java
new file mode 100644
index 000..2937ad5
--- /dev/null
+++ 
b/contrib/src/test/org/apache/hadoop/hive/contrib/udaf/example/TestUDAFExampleMaxMinNUtil.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.contrib.udaf.example;
+
+import java.util.List;
+
+import org.junit.Test;
+
+import com.google.common.collect.Lists;
+
+public class TestUDAFExampleMaxMinNUtil {
+
+  @Test(timeout = 5000)
+  public void testSortedMerge() {
+
+List li1 = Lists.newArrayList(1, 2, 3, 4, 5);
+List li2 = Lists.newArrayList(1, 2, 3, 4, 5);
+UDAFExampleMaxMinNUtil.sortedMerge(li1, li2, true, 5);
+  }
+
+}
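The one-character fix above (`<` to `<=`) matters when both inputs contain equal keys: Hive's merge loop advances each side in its own guarded block, and the new five-second-timeout test over two identical lists suggests a strict comparison could leave neither guard true on tied heads. The sketch below restructures the merge as a single if/else, which always consumes an element by construction, and uses `<= 0` on ties so the left list wins deterministically. This is illustrative code, not the Hive utility.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SortedMergeSketch {
  /**
   * Merge two ascending lists into one ascending list of at most n
   * elements. On ties the <= comparison takes from a1 first, so every
   * iteration consumes exactly one element and the loop always terminates.
   */
  public static <T> List<T> sortedMerge(List<T> a1, List<T> a2,
      Comparator<? super T> cmp, int n) {
    List<T> out = new ArrayList<>();
    int p1 = 0, p2 = 0;
    while (out.size() < n && (p1 < a1.size() || p2 < a2.size())) {
      if (p2 == a2.size()
          || (p1 < a1.size() && cmp.compare(a1.get(p1), a2.get(p2)) <= 0)) {
        out.add(a1.get(p1++));    // a1 exhausted-check and tie both favor a1
      } else {
        out.add(a2.get(p2++));
      }
    }
    return out;
  }
}
```

Merging `[1..5]` with itself under a limit of 5 yields `[1, 1, 2, 2, 3]`, the duplicate-heavy input the new unit test exercises.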



[hive] branch master updated: HIVE-16355 HIVE-22893: addendum - missing ASF headers

2020-03-13 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 4da3c68  HIVE-16355 HIVE-22893: addendum - missing ASF headers
4da3c68 is described below

commit 4da3c68e14ec34279b60f431180111f142248860
Author: Zoltan Haindrich 
AuthorDate: Fri Mar 13 12:12:56 2020 +

HIVE-16355 HIVE-22893: addendum - missing ASF headers
---
 .../apache/hive/jdbc/EmbeddedCLIServicePortal.java   | 18 ++
 .../ql/stats/estimator/PessimisticStatCombiner.java  | 20 +++-
 2 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/jdbc/src/java/org/apache/hive/jdbc/EmbeddedCLIServicePortal.java 
b/jdbc/src/java/org/apache/hive/jdbc/EmbeddedCLIServicePortal.java
index c572ecc..a389285 100644
--- a/jdbc/src/java/org/apache/hive/jdbc/EmbeddedCLIServicePortal.java
+++ b/jdbc/src/java/org/apache/hive/jdbc/EmbeddedCLIServicePortal.java
@@ -1,3 +1,21 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
 package org.apache.hive.jdbc;
 
 import java.util.Map;
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/stats/estimator/PessimisticStatCombiner.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/stats/estimator/PessimisticStatCombiner.java
index 131b422..dde2019 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/stats/estimator/PessimisticStatCombiner.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/stats/estimator/PessimisticStatCombiner.java
@@ -1,3 +1,21 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
 package org.apache.hadoop.hive.ql.stats.estimator;
 
 import java.util.Optional;
@@ -46,4 +64,4 @@ public class PessimisticStatCombiner {
 return Optional.of(result);
 
   }
-}
\ No newline at end of file
+}



[hive] branch master updated: HIVE-23003: CliDriver leaves the session id in the threadname on failure (Zoltan Haindrich reviewed by Miklos Gergely)

2020-03-11 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new ba52637  HIVE-23003: CliDriver leaves the session id in the threadname 
on failure (Zoltan Haindrich reviewed by Miklos Gergely)
ba52637 is described below

commit ba52637e41ca3a5e5952d92ecd27bf8cb69411ca
Author: Zoltan Haindrich 
AuthorDate: Wed Mar 11 13:12:48 2020 +

HIVE-23003: CliDriver leaves the session id in the threadname on failure 
(Zoltan Haindrich reviewed by Miklos Gergely)

Signed-off-by: Zoltan Haindrich 
---
 cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java | 15 +++
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java 
b/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java
index cdd08ce..cfea602 100644
--- a/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java
+++ b/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java
@@ -120,11 +120,19 @@ public class CliDriver {
  public CommandProcessorResponse processCmd(String cmd) throws CommandProcessorException {
 CliSessionState ss = (CliSessionState) SessionState.get();
 ss.setLastCommand(cmd);
-
-ss.updateThreadName();
-
 // Flush the print stream, so it doesn't include output from the last command
 ss.err.flush();
+try {
+  ss.updateThreadName();
+  return processCmd1(cmd);
+} finally {
+  ss.resetThreadName();
+}
+  }
+
+  public CommandProcessorResponse processCmd1(String cmd) throws CommandProcessorException {
+CliSessionState ss = (CliSessionState) SessionState.get();
+
 String cmd_trimmed = HiveStringUtils.removeComments(cmd).trim();
 String[] tokens = tokenizeCmd(cmd_trimmed);
 CommandProcessorResponse response = new CommandProcessorResponse();
@@ -206,7 +214,6 @@ public class CliDriver {
   }
 }
 
-ss.resetThreadName();
 return response;
   }
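The fix above moves `ss.resetThreadName()` into a `finally` block, so the session id is stripped from the thread name even when `processCmd1` throws. The same guarantee can be packaged as a small `AutoCloseable` scope; this is an illustrative pattern, not what `CliDriver` does — it relies on `SessionState`'s update/reset pair directly.

```java
public class ThreadNameScope implements AutoCloseable {
  private final String original;

  /** Tag the current thread (e.g. with a session id) until close(). */
  public ThreadNameScope(String suffix) {
    this.original = Thread.currentThread().getName();
    Thread.currentThread().setName(original + " " + suffix);
  }

  @Override
  public void close() {
    // runs on both the success and failure paths of try-with-resources
    Thread.currentThread().setName(original);
  }
}
```

Used as `try (ThreadNameScope s = new ThreadNameScope(sessionId)) { ... }`, the restore cannot be skipped by an early return or exception, which is exactly the bug class HIVE-23003 closes.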
 



[hive] branch master updated: HIVE-22872: Support multiple executors for scheduled queries (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)

2020-03-03 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 9cdf97f  HIVE-22872: Support multiple executors for scheduled queries 
(Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)
9cdf97f is described below

commit 9cdf97f3f851fd835f3c5caae676e1cd737816ec
Author: Zoltan Haindrich 
AuthorDate: Tue Mar 3 13:34:18 2020 +

HIVE-22872: Support multiple executors for scheduled queries (Zoltan 
Haindrich reviewed by Jesus Camacho Rodriguez)

Signed-off-by: Zoltan Haindrich 
---
 .../java/org/apache/hadoop/hive/conf/HiveConf.java |   2 +
 .../upgrade/hive/hive-schema-4.0.0.hive.sql|   7 +-
 .../exec/schq/ScheduledQueryMaintenanceTask.java   |   8 +-
 .../scheduled/ScheduledQueryExecutionContext.java  |   4 +
 .../scheduled/ScheduledQueryExecutionService.java  | 170 +
 .../hive/ql/schq/TestScheduledQueryService.java|   4 -
 ql/src/test/queries/clientpositive/schq_analyze.q  |   2 +-
 .../queries/clientpositive/schq_materialized.q |   2 +-
 .../clientpositive/llap/schq_materialized.q.out|   2 +-
 .../test/results/clientpositive/llap/sysdb.q.out   |  10 +-
 .../results/clientpositive/llap/sysdb_schq.q.out   |   6 +-
 .../hadoop/hive/metastore/MetastoreTaskThread.java |  10 +-
 .../hadoop/hive/metastore/utils/package-info.java  |  22 ---
 .../apache/hadoop/hive/metastore/ObjectStore.java  |  23 ++-
 .../ScheduledQueryExecutionsMaintTask.java |   7 +
 .../hive/metastore/model/MScheduledQuery.java  |   9 ++
 .../src/main/resources/package.jdo |   4 +
 .../src/main/sql/derby/hive-schema-4.0.0.derby.sql |   3 +-
 .../sql/derby/upgrade-3.2.0-to-4.0.0.derby.sql |   2 +
 .../src/main/sql/mssql/hive-schema-4.0.0.mssql.sql |  29 
 .../sql/mssql/upgrade-3.2.0-to-4.0.0.mssql.sql |  30 
 .../src/main/sql/mysql/hive-schema-4.0.0.mysql.sql |   1 +
 .../sql/mysql/upgrade-3.2.0-to-4.0.0.mysql.sql |   2 +
 .../main/sql/oracle/hive-schema-4.0.0.oracle.sql   |   1 +
 .../sql/oracle/upgrade-3.2.0-to-4.0.0.oracle.sql   |   2 +
 .../sql/postgres/hive-schema-4.0.0.postgres.sql|   1 +
 .../postgres/upgrade-3.2.0-to-4.0.0.postgres.sql   |   2 +
 27 files changed, 290 insertions(+), 75 deletions(-)

diff --git a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
index 3d4e9e0..7ea2de9 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
@@ -4873,6 +4873,8 @@ public class HiveConf extends Configuration {
 
HIVE_SECURITY_AUTHORIZATION_SCHEDULED_QUERIES_SUPPORTED("hive.security.authorization.scheduled.queries.supported",
 false,
 "Enable this if the configured authorizer is able to handle scheduled 
query related calls."),
+
HIVE_SCHEDULED_QUERIES_MAX_EXECUTORS("hive.scheduled.queries.max.executors", 4, 
new RangeValidator(1, null),
+"Maximal number of scheduled query executors to allow."),
 
 HIVE_QUERY_RESULTS_CACHE_ENABLED("hive.query.results.cache.enabled", true,
 "If the query results cache is enabled. This will keep results of 
previously executed queries " +
diff --git a/metastore/scripts/upgrade/hive/hive-schema-4.0.0.hive.sql 
b/metastore/scripts/upgrade/hive/hive-schema-4.0.0.hive.sql
index fde6f02..03540bb 100644
--- a/metastore/scripts/upgrade/hive/hive-schema-4.0.0.hive.sql
+++ b/metastore/scripts/upgrade/hive/hive-schema-4.0.0.hive.sql
@@ -1211,6 +1211,7 @@ CREATE EXTERNAL TABLE IF NOT EXISTS `SCHEDULED_QUERIES` (
   `USER` string,
   `QUERY` string,
   `NEXT_EXECUTION` bigint,
+  `ACTIVE_EXECUTION_ID` bigint,
   CONSTRAINT `SYS_PK_SCHEDULED_QUERIES` PRIMARY KEY (`SCHEDULED_QUERY_ID`) 
DISABLE
 )
 STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
@@ -1225,7 +1226,8 @@ TBLPROPERTIES (
   \"SCHEDULE\",
   \"USER\",
   \"QUERY\",
-  \"NEXT_EXECUTION\"
+  \"NEXT_EXECUTION\",
+  \"ACTIVE_EXECUTION_ID\"
 FROM
   \"SCHEDULED_QUERIES\""
 );
@@ -1795,7 +1797,8 @@ select
   `SCHEDULE`,
   `USER`,
   `QUERY`,
-  FROM_UNIXTIME(NEXT_EXECUTION) as NEXT_EXECUTION
+  FROM_UNIXTIME(NEXT_EXECUTION) as NEXT_EXECUTION,
+  `ACTIVE_EXECUTION_ID`
 FROM
   SYS.SCHEDULED_QUERIES
 ;
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/schq/ScheduledQueryMaintenanceTask.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/schq/ScheduledQueryMaintenanceTask.java
index fd0c173..5abfa4d 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/schq/ScheduledQueryMaintenanceTask.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/schq/ScheduledQueryMaintenanceTask.java
@@ -19,

[hive] branch master updated (e0a0db3 -> 4700e21)

2020-02-19 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from e0a0db3  HIVE-22860 : Support metadata only replication for external 
tables. (Aasha Medhi, reviewed by Mahesh Kumar Behera)
 add 4700e21  HIVE-16355: Service: embedded mode should only be available 
if service is loaded onto the classpath (Zoltan Haindrich reviewed by Peter 
Vary, Miklos Gergely)

No new revisions were added by this update.

Summary of changes:
 .../apache/hive/jdbc/EmbeddedCLIServicePortal.java | 42 ++
 .../java/org/apache/hive/jdbc/HiveConnection.java  | 30 +---
 pom.xml|  2 +-
 .../apache/hive/service/auth/HiveAuthFactory.java  |  4 +--
 .../hive/service/auth/KerberosSaslHelper.java  |  7 ++--
 .../apache/hive/service/auth/PlainSaslHelper.java  |  8 ++---
 .../cli/thrift/EmbeddedThriftBinaryCLIService.java | 22 
 7 files changed, 69 insertions(+), 46 deletions(-)
 create mode 100644 
jdbc/src/java/org/apache/hive/jdbc/EmbeddedCLIServicePortal.java
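HIVE-16355 makes embedded mode available only when the service module is actually on the classpath, reached through the new `EmbeddedCLIServicePortal`. The underlying technique — reflective lookup with a clear failure when the class is absent — can be sketched as below; the class and method names are hypothetical, not Hive's actual portal code.

```java
public class EmbeddedPortalSketch {

  /**
   * Instantiate a class only if it is present on the classpath. A missing
   * class becomes a descriptive IllegalStateException instead of a raw
   * ClassNotFoundException bubbling out of the driver.
   */
  public static Object loadIfPresent(String className) {
    try {
      return Class.forName(className).getDeclaredConstructor().newInstance();
    } catch (ClassNotFoundException e) {
      throw new IllegalStateException(
          "embedded mode requires the service classes on the classpath", e);
    } catch (ReflectiveOperationException e) {
      throw new IllegalStateException(e);
    }
  }
}
```

The payoff is that the JDBC jar no longer needs a compile-time dependency on the service module: callers that never use embedded mode never trigger the lookup.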



[hive] branch master updated: HIVE-22866: addendum - increase sleep

2020-02-17 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 7065044  HIVE-22866: addendum - increase sleep
7065044 is described below

commit 70650441db4255d41478c91fdf5ec1d13e6d82f6
Author: Zoltan Haindrich 
AuthorDate: Mon Feb 17 20:27:22 2020 +

HIVE-22866: addendum - increase sleep
---
 ql/src/test/queries/clientpositive/schq_ingest.q | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ql/src/test/queries/clientpositive/schq_ingest.q 
b/ql/src/test/queries/clientpositive/schq_ingest.q
index d7283d5..b7bc90c 100644
--- a/ql/src/test/queries/clientpositive/schq_ingest.q
+++ b/ql/src/test/queries/clientpositive/schq_ingest.q
@@ -39,7 +39,7 @@ insert into s values(2,2),(3,3);
 -- pretend that a timeout have happened
 alter scheduled query ingest execute;
 
-!sleep 3;
+!sleep 10;
 select state,error_message from sys.scheduled_executions;
 
 select * from t order by id;



[hive] 02/03: HIVE-22866: Add more testcases for scheduled queries (Zoltan Haindrich reviewed by Miklos Gergely)

2020-02-17 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 0581c0061da678920411768da97e933fb9b87d50
Author: Zoltan Haindrich 
AuthorDate: Mon Feb 17 12:07:05 2020 +

HIVE-22866: Add more testcases for scheduled queries (Zoltan Haindrich 
reviewed by Miklos Gergely)

Signed-off-by: Zoltan Haindrich 
---
 .../test/resources/testconfiguration.properties|   2 +
 ql/src/test/queries/clientpositive/schq_analyze.q  |  31 +
 ql/src/test/queries/clientpositive/schq_ingest.q   |  45 +++
 .../queries/clientpositive/schq_materialized.q |  28 +++-
 .../results/clientpositive/llap/schq_analyze.q.out | 110 
 .../results/clientpositive/llap/schq_ingest.q.out  | 124 ++
 .../clientpositive/llap/schq_materialized.q.out| 141 -
 7 files changed, 472 insertions(+), 9 deletions(-)

diff --git a/itests/src/test/resources/testconfiguration.properties 
b/itests/src/test/resources/testconfiguration.properties
index 3108d16..1b1bf11 100644
--- a/itests/src/test/resources/testconfiguration.properties
+++ b/itests/src/test/resources/testconfiguration.properties
@@ -790,6 +790,8 @@ minillaplocal.query.files=\
   sysdb.q,\
   sysdb_schq.q,\
   schq_materialized.q,\
+  schq_analyze.q,\
+  schq_ingest.q,\
   table_access_keys_stats.q,\
   temp_table_llap_partitioned.q,\
   tez_bmj_schema_evolution.q,\
diff --git a/ql/src/test/queries/clientpositive/schq_analyze.q 
b/ql/src/test/queries/clientpositive/schq_analyze.q
new file mode 100644
index 000..969b47b
--- /dev/null
+++ b/ql/src/test/queries/clientpositive/schq_analyze.q
@@ -0,0 +1,31 @@
+--! qt:authorizer
+--! qt:scheduledqueryservice
+--! qt:sysdb
+
+set user.name=hive_admin_user;
+set role admin;
+
+-- create external table
+create external table t (a integer);
+ 
+-- disable autogather
+set hive.stats.autogather=false;
+ 
+insert into t values (1),(2),(3);
+
+-- basic stats show that the table has "0" rows
+desc formatted t;
+
+-- create a schedule to compute stats
+create scheduled query t_analyze cron '0 */1 * * * ? *' as analyze table t 
compute statistics for columns;
+
+alter scheduled query t_analyze execute;
+
+!sleep 3; 
+ 
+select * from information_schema.scheduled_executions s where 
schedule_name='ex_analyze' order by scheduled_execution_id desc limit 3;
+ 
+-- and the numrows have been updated
+desc formatted t;
+ 
+
diff --git a/ql/src/test/queries/clientpositive/schq_ingest.q 
b/ql/src/test/queries/clientpositive/schq_ingest.q
new file mode 100644
index 000..d7283d5
--- /dev/null
+++ b/ql/src/test/queries/clientpositive/schq_ingest.q
@@ -0,0 +1,45 @@
+--! qt:authorizer
+--! qt:scheduledqueryservice
+--! qt:sysdb
+
+set user.name=hive_admin_user;
+set role admin;
+
+drop table if exists t;
+drop table if exists s;
+ 
+-- suppose that this table is an external table or something
+-- which supports the pushdown of filter condition on the id column
+create table s(id integer, cnt integer);
+ 
+-- create an internal table and an offset table
+create table t(id integer, cnt integer);
+create table t_offset(offset integer);
+insert into t_offset values(0);
+ 
+-- pretend that data is added to s
+insert into s values(1,1);
+ 
+-- run an ingestion...
+from (select id==offset as first,* from s
+join t_offset on id>=offset) s1
+insert into t select id,cnt where not first
+insert overwrite table t_offset select max(s1.id);
+ 
+-- configure to run ingestion every 10 minutes
+create scheduled query ingest every 10 minutes defined as
+from (select id==offset as first,* from s
+join t_offset on id>=offset) s1
+insert into t select id,cnt where not first
+insert overwrite table t_offset select max(s1.id);
+ 
+-- add some new values
+insert into s values(2,2),(3,3);
+ 
+-- pretend that a timeout have happened
+alter scheduled query ingest execute;
+
+!sleep 3;
+select state,error_message from sys.scheduled_executions;
+
+select * from t order by id;
diff --git a/ql/src/test/queries/clientpositive/schq_materialized.q 
b/ql/src/test/queries/clientpositive/schq_materialized.q
index fae5239..6baed49 100644
--- a/ql/src/test/queries/clientpositive/schq_materialized.q
+++ b/ql/src/test/queries/clientpositive/schq_materialized.q
@@ -1,5 +1,10 @@
+--! qt:authorizer
+--! qt:scheduledqueryservice
 --! qt:sysdb
 
+set user.name=hive_admin_user;
+set role admin;
+
 drop materialized view if exists mv1;
 drop table if exists emps;
 drop table if exists depts;
@@ -42,16 +47,31 @@ CREATE MATERIALIZED VIEW mv1 AS
 JOIN depts ON (emps.deptno = depts.deptno)
 WHERE hire_date >= '2016-01-01 00:00:00';
 
+-- mv1 is used
+EXPLAIN
+SELECT empid, deptname FROM emps
+JOIN depts ON (emps.deptno = depts.deptno)
+WHERE hire_date >= '2018-01-01';
+
+-- insert a new record
+insert into emps values (1330, 10, 'Bill', 1000

[hive] 01/03: HIVE-16502: Relax hard dependency on SessionState in Authentication classes (Zoltan Haindrich reviewed by Miklos Gergely)

2020-02-17 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 59b4c769e2f08a06b236a546e2662d7dc6dd185c
Author: Zoltan Haindrich 
AuthorDate: Mon Feb 17 12:06:51 2020 +

HIVE-16502: Relax hard dependency on SessionState in Authentication classes 
(Zoltan Haindrich reviewed by Miklos Gergely)

Signed-off-by: Zoltan Haindrich 
---
 .../hive/ql/security/DummyAuthenticator.java   |  4 +--
 .../ql/security/InjectableDummyAuthenticator.java  |  4 +--
 .../ql/security/HadoopDefaultAuthenticator.java|  5 ++--
 .../ql/security/HiveAuthenticationProvider.java|  4 +--
 .../SessionStateConfigUserAuthenticator.java   |  5 ++--
 .../ql/security/SessionStateUserAuthenticator.java |  6 ++--
 .../ISessionAuthState.java}| 32 --
 .../hadoop/hive/ql/session/SessionState.java   |  4 ++-
 8 files changed, 28 insertions(+), 36 deletions(-)

diff --git 
a/itests/util/src/main/java/org/apache/hadoop/hive/ql/security/DummyAuthenticator.java
 
b/itests/util/src/main/java/org/apache/hadoop/hive/ql/security/DummyAuthenticator.java
index 45fabf5..8e8c6a0 100644
--- 
a/itests/util/src/main/java/org/apache/hadoop/hive/ql/security/DummyAuthenticator.java
+++ 
b/itests/util/src/main/java/org/apache/hadoop/hive/ql/security/DummyAuthenticator.java
@@ -22,7 +22,7 @@ import java.util.List;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
-import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hadoop.hive.ql.session.ISessionAuthState;
 
 public class DummyAuthenticator implements HiveAuthenticationProvider {
 
@@ -63,7 +63,7 @@ public class DummyAuthenticator implements 
HiveAuthenticationProvider {
   }
 
   @Override
-  public void setSessionState(SessionState ss) {
+  public void setSessionState(ISessionAuthState ss) {
 //no op
   }
 
diff --git 
a/itests/util/src/main/java/org/apache/hadoop/hive/ql/security/InjectableDummyAuthenticator.java
 
b/itests/util/src/main/java/org/apache/hadoop/hive/ql/security/InjectableDummyAuthenticator.java
index c0ca4b3..6a33a15 100644
--- 
a/itests/util/src/main/java/org/apache/hadoop/hive/ql/security/InjectableDummyAuthenticator.java
+++ 
b/itests/util/src/main/java/org/apache/hadoop/hive/ql/security/InjectableDummyAuthenticator.java
@@ -22,7 +22,7 @@ import java.util.List;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hive.metastore.IHMSHandler;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
-import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hadoop.hive.ql.session.ISessionAuthState;
 
 /**
  *
@@ -101,7 +101,7 @@ public class InjectableDummyAuthenticator implements 
HiveMetastoreAuthentication
   }
 
   @Override
-  public void setSessionState(SessionState arg0) {
+  public void setSessionState(ISessionAuthState arg0) {
 //no-op
   }
 
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/security/HadoopDefaultAuthenticator.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/security/HadoopDefaultAuthenticator.java
index f5d5856..24c0b53 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/security/HadoopDefaultAuthenticator.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/security/HadoopDefaultAuthenticator.java
@@ -23,8 +23,7 @@ import java.util.List;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
-import org.apache.hadoop.hive.ql.session.SessionState;
-import org.apache.hadoop.hive.shims.ShimLoader;
+import org.apache.hadoop.hive.ql.session.ISessionAuthState;
 import org.apache.hadoop.hive.shims.Utils;
 import org.apache.hadoop.security.UserGroupInformation;
 
@@ -77,7 +76,7 @@ public class HadoopDefaultAuthenticator implements 
HiveAuthenticationProvider {
   }
 
   @Override
-  public void setSessionState(SessionState ss) {
+  public void setSessionState(ISessionAuthState ss) {
 //no op
   }
 
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/security/HiveAuthenticationProvider.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/security/HiveAuthenticationProvider.java
index 25eb2a21..43c5a09 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/security/HiveAuthenticationProvider.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/security/HiveAuthenticationProvider.java
@@ -22,7 +22,7 @@ import java.util.List;
 
 import org.apache.hadoop.conf.Configurable;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
-import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hadoop.hive.ql.session.ISessionAuthState;
 
 /**
  * HiveAuthenticationProvider is an interface for authentication. The
@@ -41,6 +41,6 @@ public interface HiveAuthenticationProvider extends 
Configurable{
* SessionState is not a public interface.
* @param ss SessionState that created this instance
*/
-  public void

[hive] branch master updated (4ff6a67 -> e97ff5b)

2020-02-17 Thread kgyrtkirk

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 4ff6a67  HIVE-22877: Fix decimal boundary check for casting to 
Decimal64 (Mustafa Iman via Gopal Vijayaraghavan)
 new 59b4c76  HIVE-16502: Relax hard dependency on SessionState in 
Authentication classes (Zoltan Haindrich reviewed by Miklos Gergely)
 new 0581c00  HIVE-22866: Add more testcases for scheduled queries (Zoltan 
Haindrich reviewed by Miklos Gergely)
 new e97ff5b  HIVE-22873: Make it possible to identify which hs2 instance 
executed a scheduled query (Zoltan Haindrich reviewed by Miklos Gergely)

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../test/resources/testconfiguration.properties|   2 +
 .../hive/ql/security/DummyAuthenticator.java   |   4 +-
 .../ql/security/InjectableDummyAuthenticator.java  |   4 +-
 .../scheduled/ScheduledQueryExecutionContext.java  |  11 ++
 .../scheduled/ScheduledQueryExecutionService.java  |  25 ++--
 .../ql/security/HadoopDefaultAuthenticator.java|   5 +-
 .../ql/security/HiveAuthenticationProvider.java|   4 +-
 .../SessionStateConfigUserAuthenticator.java   |   5 +-
 .../ql/security/SessionStateUserAuthenticator.java |   6 +-
 .../Hook.java => session/ISessionAuthState.java}   |  19 +--
 .../hadoop/hive/ql/session/SessionState.java   |   4 +-
 .../hive/ql/schq/TestScheduledQueryService.java|  31 +++--
 ql/src/test/queries/clientpositive/schq_analyze.q  |  31 +
 ql/src/test/queries/clientpositive/schq_ingest.q   |  45 +++
 .../queries/clientpositive/schq_materialized.q |  28 +++-
 .../results/clientpositive/llap/schq_analyze.q.out | 110 
 .../results/clientpositive/llap/schq_ingest.q.out  | 124 ++
 .../clientpositive/llap/schq_materialized.q.out| 141 -
 18 files changed, 545 insertions(+), 54 deletions(-)
 copy ql/src/java/org/apache/hadoop/hive/ql/{hooks/Hook.java => 
session/ISessionAuthState.java} (74%)
 create mode 100644 ql/src/test/queries/clientpositive/schq_analyze.q
 create mode 100644 ql/src/test/queries/clientpositive/schq_ingest.q
 create mode 100644 ql/src/test/results/clientpositive/llap/schq_analyze.q.out
 create mode 100644 ql/src/test/results/clientpositive/llap/schq_ingest.q.out



[hive] 03/03: HIVE-22873: Make it possible to identify which hs2 instance executed a scheduled query (Zoltan Haindrich reviewed by Miklos Gergely)

2020-02-17 Thread kgyrtkirk

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit e97ff5b3df4258fa83a15507000d7e42c2aac8f4
Author: Zoltan Haindrich 
AuthorDate: Mon Feb 17 12:07:18 2020 +

HIVE-22873: Make it possible to identify which hs2 instance executed a 
scheduled query (Zoltan Haindrich reviewed by Miklos Gergely)

Signed-off-by: Zoltan Haindrich 
---
 .../scheduled/ScheduledQueryExecutionContext.java  | 11 
 .../scheduled/ScheduledQueryExecutionService.java  | 25 ++---
 .../hive/ql/schq/TestScheduledQueryService.java| 31 +-
 3 files changed, 45 insertions(+), 22 deletions(-)

diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionContext.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionContext.java
index 9decb8c..1bb24ee 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionContext.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionContext.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.hive.ql.scheduled;
 
+import java.net.InetAddress;
+import java.net.UnknownHostException;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.TimeUnit;
 
@@ -33,6 +35,7 @@ public class ScheduledQueryExecutionContext {
   public final ExecutorService executor;
   public final IScheduledQueryMaintenanceService schedulerService;
   public final HiveConf conf;
+  public final String executorHostName;
 
   public ScheduledQueryExecutionContext(
   ExecutorService executor,
@@ -41,6 +44,14 @@ public class ScheduledQueryExecutionContext {
 this.executor = executor;
 this.conf = conf;
 this.schedulerService = service;
+try {
+  this.executorHostName = InetAddress.getLocalHost().getHostName();
+  if (executorHostName == null) {
+throw new RuntimeException("Hostname is null; Can't function without a 
valid hostname!");
+  }
+} catch (UnknownHostException e) {
+  throw new RuntimeException("Can't function without a valid hostname!", 
e);
+}
   }
 
   /**
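The constructor change above resolves the executor's host name eagerly and fails fast when it cannot, since scheduled executions are tagged with that name. The same logic pulled into a stand-alone sketch (class and method names are invented; only the null check is exercised by the usage below, the network lookup is left to `resolve`):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

/**
 * Sketch (invented names) of the fail-fast hostname lookup the patch adds
 * to ScheduledQueryExecutionContext: a null name and a resolution failure
 * both abort construction with a RuntimeException.
 */
public class ExecutorHost {
    /** Fail fast on a null host name, mirroring the check in the patch. */
    public static String requireValid(String name) {
        if (name == null) {
            throw new RuntimeException("Hostname is null; can't function without a valid hostname!");
        }
        return name;
    }

    /** Resolve this host's name, wrapping resolution failures. */
    public static String resolve() {
        try {
            return requireValid(InetAddress.getLocalHost().getHostName());
        } catch (UnknownHostException e) {
            throw new RuntimeException("Can't function without a valid hostname!", e);
        }
    }
}
```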
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionService.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionService.java
index 06cfe3f..9a6237c 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionService.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionService.java
@@ -51,23 +51,27 @@ public class ScheduledQueryExecutionService implements 
Closeable {
   private ScheduledQueryExecutor worker;
   private AtomicInteger forcedScheduleCheckCounter = new AtomicInteger();
 
-  public static ScheduledQueryExecutionService 
startScheduledQueryExecutorService(HiveConf conf0) {
+  public static ScheduledQueryExecutionService 
startScheduledQueryExecutorService(HiveConf inputConf) {
+HiveConf conf = new HiveConf(inputConf);
+MetastoreBasedScheduledQueryService qService = new 
MetastoreBasedScheduledQueryService(conf);
+ExecutorService executor = Executors.newCachedThreadPool(
+new ThreadFactoryBuilder().setDaemon(true).setNameFormat("Scheduled 
Query Thread %d").build());
+ScheduledQueryExecutionContext ctx = new 
ScheduledQueryExecutionContext(executor, conf, qService);
+return startScheduledQueryExecutorService(ctx);
+  }
+
+  public static ScheduledQueryExecutionService 
startScheduledQueryExecutorService(ScheduledQueryExecutionContext ctx) {
 synchronized (ScheduledQueryExecutionService.class) {
   if (INSTANCE != null) {
 throw new IllegalStateException(
 "There is already a ScheduledQueryExecutionService in service; 
check it and close it explicitly if necessary");
   }
-  HiveConf conf = new HiveConf(conf0);
-  MetastoreBasedScheduledQueryService qService = new 
MetastoreBasedScheduledQueryService(conf);
-  ExecutorService executor = Executors.newCachedThreadPool(
-  new ThreadFactoryBuilder().setDaemon(true).setNameFormat("Scheduled 
Query Thread %d").build());
-  ScheduledQueryExecutionContext ctx = new 
ScheduledQueryExecutionContext(executor, conf, qService);
   INSTANCE = new ScheduledQueryExecutionService(ctx);
   return INSTANCE;
 }
   }
 
-  public ScheduledQueryExecutionService(ScheduledQueryExecutionContext ctx) {
+  private ScheduledQueryExecutionService(ScheduledQueryExecutionContext ctx) {
 context = ctx;
 ctx.executor.submit(worker = new ScheduledQueryExecutor());
 ctx.executor.submit(new ProgressReporter());
@@ -138,7 +142,7 @@ public class ScheduledQueryExecutionService implements 
Closeable {
 reportQueryProgress();
 try (
   IDriver driver = 
DriverFactory.newDriver(DriverFactory.getNewQ
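The diff above makes the ScheduledQueryExecutionService constructor private and routes all construction through a synchronized factory that rejects a second start while an instance is live, while `close` clears the slot for a restart. A reduced sketch of that start/close singleton shape (names are invented; this is not the Hive class):

```java
/**
 * Minimal sketch (invented names) of the pattern the patch converges on:
 * a private constructor plus a guarded static factory, so a second
 * service cannot be started while one is still open.
 */
public class SingletonService implements AutoCloseable {
    private static SingletonService INSTANCE;

    private SingletonService() { }  // only the factory may construct

    public static SingletonService start() {
        synchronized (SingletonService.class) {
            if (INSTANCE != null) {
                throw new IllegalStateException(
                    "There is already a service in service; close it explicitly if necessary");
            }
            INSTANCE = new SingletonService();
            return INSTANCE;
        }
    }

    @Override
    public void close() {
        synchronized (SingletonService.class) {
            if (INSTANCE == this) {
                INSTANCE = null;  // permit a clean restart
            }
        }
    }
}
```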

[hive] branch master updated (8f46884 -> 59d8665)

2020-02-12 Thread kgyrtkirk

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 8f46884  HIVE-22864 Add option to DatabaseRule to run the Schema Tool 
in verbose mode for tests (Miklos Gergely, reviewed by Laszlo Bodor)
 add 59d8665  HIVE-22781: Add ability to immediately execute a scheduled 
query (Zoltan Haindrich reviewed by Miklos Gergely)

No new revisions were added by this update.

Summary of changes:
 .../org/apache/hadoop/hive/ql/parse/HiveLexer.g|  1 +
 .../org/apache/hadoop/hive/ql/parse/HiveParser.g   |  2 +
 .../hadoop/hive/ql/parse/IdentifiersParser.g   |  2 +-
 .../exec/schq/ScheduledQueryMaintenanceTask.java   |  4 ++
 .../hive/ql/parse/ScheduledQueryAnalyzer.java  |  6 +++
 .../scheduled/ScheduledQueryExecutionService.java  | 63 --
 .../hive/ql/schq/TestScheduledQueryStatements.java | 24 -
 .../apache/hadoop/hive/metastore/ObjectStore.java  | 12 +++--
 8 files changed, 93 insertions(+), 21 deletions(-)



[hive] branch master updated (effe7e4 -> a428a49)

2020-02-06 Thread kgyrtkirk

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from effe7e4  HIVE-22805: Vectorization with conditional array or map is 
not implemented and throws an error (Peter Vary reviewed by Ramesh Kumar 
Thangarajan and Marta Kuczora)
 add 5f49b9f  HIVE-22358: Add schedule shorthands for convenience (Zoltan 
Haindrich reviewed by Jesus Camacho Rodriguez)
 add 002d4e1  HIVE-22803: Mark scheduled queries executions to help 
end-user identify it easier (Zoltan Haindrich reviewed by Jesus Camacho 
Rodriguez)
 add 89cf44f  HIVE-22809: Support materialized view rebuild as a scheduled 
query (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)
 add a428a49  HIVE-22775: Use the qt:authorizer option in qtests (Zoltan 
Haindrich reviewed by Miklos Gergely)

No new revisions were added by this update.

Summary of changes:
 .../hive/schq/TestScheduledQueryIntegration.java   |  19 ++-
 .../test/resources/testconfiguration.properties|   1 +
 .../java/org/apache/hadoop/hive/ql/QTestUtil.java  |   3 +-
 .../hive/ql/qoption/QTestAuthorizerHandler.java|   3 +
 .../hive/ql/qoption/QTestOptionDispatcher.java |  14 +-
 .../org/apache/hadoop/hive/ql/parse/HiveLexer.g|   2 +
 .../org/apache/hadoop/hive/ql/parse/HiveParser.g   |   4 +
 .../hadoop/hive/ql/parse/IdentifiersParser.g   |   2 +-
 ql/src/java/org/apache/hadoop/hive/ql/Driver.java  |   3 +-
 .../AlterMaterializedViewRebuildAnalyzer.java  |   9 +-
 .../apache/hadoop/hive/ql/hooks/HookContext.java   |   2 -
 .../hadoop/hive/ql/lockmgr/DbTxnManager.java   |   3 +-
 .../hive/ql/parse/ScheduledQueryAnalyzer.java  | 106 ++
 .../hadoop/hive/ql/parse/UnparseTranslator.java|   4 +-
 .../hive/ql/schq/TestScheduledQueryStatements.java |  57 +++-
 .../clientnegative/authorization_addpartition.q|   5 +-
 .../clientnegative/authorization_alter_db_owner.q  |   5 +-
 .../authorization_alter_db_owner_default.q |   5 +-
 .../clientnegative/authorization_alter_drop_ptn.q  |   5 +-
 ...orization_alter_table_exchange_partition_fail.q |   9 +-
 ...rization_alter_table_exchange_partition_fail2.q |  10 +-
 .../clientnegative/authorization_create_func1.q|   5 +-
 .../clientnegative/authorization_create_func2.q|   5 +-
 .../clientnegative/authorization_create_macro1.q   |   5 +-
 .../clientnegative/authorization_create_tbl.q  |   5 +-
 .../clientnegative/authorization_create_view.q |   5 +-
 .../clientnegative/authorization_createview.q  |   5 +-
 .../queries/clientnegative/authorization_ctas.q|   5 +-
 .../queries/clientnegative/authorization_ctas2.q   |   5 +-
 .../authorization_delete_nodeletepriv.q|   5 +-
 .../authorization_desc_table_nosel.q   |   5 +-
 .../clientnegative/authorization_drop_db_cascade.q |   5 +-
 .../clientnegative/authorization_drop_db_empty.q   |   5 +-
 .../clientnegative/authorization_droppartition.q   |   5 +-
 .../clientnegative/authorization_export_ptn.q  |  10 +-
 .../queries/clientnegative/authorization_import.q  |   5 +-
 .../clientnegative/authorization_import_ptn.q  |  10 +-
 .../authorization_insert_noinspriv.q   |   5 +-
 .../authorization_insert_noselectpriv.q|   5 +-
 .../authorization_insertoverwrite_nodel.q  |   5 +-
 .../authorization_insertpart_noinspriv.q   |   5 +-
 .../clientnegative/authorization_jdbc_keystore.q   |   5 +-
 .../queries/clientnegative/authorization_msck.q|   5 +-
 .../authorization_not_owner_alter_tab_rename.q |   5 +-
 .../authorization_not_owner_alter_tab_serdeprop.q  |   5 +-
 .../authorization_not_owner_drop_tab.q |   5 +-
 .../authorization_not_owner_drop_tab2.q|   5 +-
 .../authorization_not_owner_drop_view.q|   5 +-
 .../authorization_rolehierarchy_privs.q|   5 +-
 .../queries/clientnegative/authorization_select.q  |   5 +-
 .../clientnegative/authorization_select_view.q |   5 +-
 .../clientnegative/authorization_show_columns.q|   5 +-
 .../authorization_show_grant_otherrole.q   |   5 +-
 .../authorization_show_grant_otheruser_all.q   |   5 +-
 .../authorization_show_grant_otheruser_alltabs.q   |   5 +-
 .../authorization_show_grant_otheruser_wtab.q  |   5 +-
 .../authorization_show_parts_nosel.q   |   5 +-
 .../clientnegative/authorization_truncate.q|   5 +-
 .../clientnegative/authorization_truncate_2.q  |   5 +-
 .../authorization_update_noupdatepriv.q|   5 +-
 .../authorization_uri_add_partition.q  |   5 +-
 .../authorization_uri_alterpart_loc.q  |   5 +-
 .../authorization_uri_altertab_setloc.q|   5 +-
 .../authorization_uri_create_table1.q  |   5 +-
 .../authorization_uri_create_table_ext.q   |   5 +-
 .../clientnegative/authorization_uri_createdb.q

[hive] 03/03: HIVE-22680: Replace Base64 in druid-handler Package (David Mollitor via Zoltan Haindrich)

2020-02-04 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit a21ca0e59967bff5570288fcc5b5b7e666e0e89a
Author: David Mollitor 
AuthorDate: Tue Feb 4 14:17:15 2020 +

HIVE-22680: Replace Base64 in druid-handler Package (David Mollitor via 
Zoltan Haindrich)

Signed-off-by: Zoltan Haindrich 
---
 .../org/apache/hadoop/hive/druid/security/DruidKerberosUtil.java| 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git 
a/druid-handler/src/java/org/apache/hadoop/hive/druid/security/DruidKerberosUtil.java
 
b/druid-handler/src/java/org/apache/hadoop/hive/druid/security/DruidKerberosUtil.java
index 8e10cd7..12603c1 100644
--- 
a/druid-handler/src/java/org/apache/hadoop/hive/druid/security/DruidKerberosUtil.java
+++ 
b/druid-handler/src/java/org/apache/hadoop/hive/druid/security/DruidKerberosUtil.java
@@ -18,7 +18,6 @@
 
 package org.apache.hadoop.hive.druid.security;
 
-import org.apache.commons.codec.binary.Base64;
 import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
 import 
org.apache.hadoop.security.authentication.client.AuthenticationException;
 import org.apache.hadoop.security.authentication.util.KerberosUtil;
@@ -33,7 +32,7 @@ import org.slf4j.LoggerFactory;
 import java.net.CookieStore;
 import java.net.HttpCookie;
 import java.net.URI;
-import java.nio.charset.StandardCharsets;
+import java.util.Base64;
 import java.util.List;
 import java.util.concurrent.locks.ReentrantLock;
 
@@ -42,7 +41,6 @@ import java.util.concurrent.locks.ReentrantLock;
  */
 public final class DruidKerberosUtil {
   protected static final Logger LOG = 
LoggerFactory.getLogger(DruidKerberosUtil.class);
-  private static final Base64 BASE_64_CODEC = new Base64(0);
   private static final ReentrantLock KERBEROS_LOCK = new ReentrantLock(true);
 
   private DruidKerberosUtil() {
@@ -78,7 +76,7 @@ public final class DruidKerberosUtil {
   gssContext.dispose();
   // Base64 encoded and stringified token for server
   LOG.debug("Got valid challenge for host {}", serverName);
-  return new String(BASE_64_CODEC.encode(outToken), 
StandardCharsets.US_ASCII);
+  return Base64.getEncoder().encodeToString(outToken);
 } catch (GSSException | IllegalAccessException | NoSuchFieldException | 
ClassNotFoundException e) {
   throw new AuthenticationException(e);
 } finally {
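The replacement pattern in this patch — commons-codec's stateful `Base64` object swapped for the JDK's `java.util.Base64` — can be shown in isolation. A small round-trip sketch (hypothetical class name; the byte payload merely stands in for the GSS token that DruidKerberosUtil encodes):

```java
import java.util.Base64;

/**
 * Round trip with the JDK codec the patch switches to. The encoder's
 * output is already plain ASCII, so the StandardCharsets.US_ASCII
 * conversion in the old code is no longer needed.
 */
public class Base64Demo {
    /** Encode a raw token for the wire, as the new druid-handler code does. */
    public static String encode(byte[] token) {
        return Base64.getEncoder().encodeToString(token);
    }

    /** Decode a wire challenge back to raw bytes. */
    public static byte[] decode(String challenge) {
        return Base64.getDecoder().decode(challenge);
    }
}
```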



[hive] 02/03: HIVE-22801: Debug log is flooded with some debug dump stack (Zoltan Haindrich reviewed by Miklos Gergely)

2020-02-04 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 684d302d196dfde624d727c4d8270e2bc0ed4ca5
Author: Zoltan Haindrich 
AuthorDate: Tue Feb 4 14:08:10 2020 +

HIVE-22801: Debug log is flooded with some debug dump stack (Zoltan 
Haindrich reviewed by Miklos Gergely)

Signed-off-by: Zoltan Haindrich 
---
 .../src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
index 80d2111..793d041 100644
--- 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
+++ 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
@@ -9682,7 +9682,11 @@ public class ObjectStore implements RawStore, 
Configurable {
 
   private void debugLog(final String message) {
 if (LOG.isDebugEnabled()) {
-  LOG.debug("{}", message, new Exception("Debug Dump Stack Trace (Not an 
Exception)"));
+  if (LOG.isTraceEnabled()) {
+LOG.debug("{}", message, new Exception("Debug Dump Stack Trace (Not an 
Exception)"));
+  } else {
+LOG.debug("{}", message);
+  }
 }
   }
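The fix above gates the construction of the throwaway `Exception` — present only for its stack trace — behind trace-level logging, so debug-level runs no longer pay for (or get flooded by) stack dumps. The same guard, sketched with `java.util.logging` standing in for slf4j (FINE ≈ debug, FINER ≈ trace; the class and boolean return value are invented to make the branch observable):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

/**
 * Sketch of the HIVE-22801 guard with java.util.logging as a stand-in
 * for slf4j: the message alone is logged at debug (FINE), and the costly
 * synthetic stack trace is attached only when trace (FINER) is enabled.
 * Returns true when the stack trace was attached.
 */
public class DebugDump {
    public static final Logger LOG = Logger.getLogger(DebugDump.class.getName());

    public static boolean debugLog(String message) {
        if (!LOG.isLoggable(Level.FINE)) {
            return false;  // debug off: do nothing at all
        }
        if (LOG.isLoggable(Level.FINER)) {
            // trace on: pay for the stack trace, it was asked for
            LOG.log(Level.FINE, message, new Exception("Debug Dump Stack Trace (Not an Exception)"));
            return true;
        }
        LOG.fine(message);  // debug on, trace off: message only
        return false;
    }
}
```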
 



[hive] branch master updated (06c5923 -> a21ca0e5)

2020-02-04 Thread kgyrtkirk

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 06c5923  HIVE-21215: Read Parquet INT64 timestamp (Marta Kuczora, 
reviewed by Karen Coppage and Peter Vary)
 new 7abc5f7  HIVE-22780: Upgrade slf4j version to 1.7.30 (David Lavati via 
Miklos Gergely)
 new 684d302  HIVE-22801: Debug log is flooded with some debug dump stack 
(Zoltan Haindrich reviewed by Miklos Gergely)
 new a21ca0e5 HIVE-22680: Replace Base64 in druid-handler Package (David 
Mollitor via Zoltan Haindrich)



Summary of changes:
 .../org/apache/hadoop/hive/druid/security/DruidKerberosUtil.java| 6 ++
 itests/qtest-druid/pom.xml  | 2 +-
 itests/qtest/pom.xml| 2 +-
 kafka-handler/pom.xml   | 2 +-
 pom.xml | 2 +-
 .../src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java | 6 +-
 standalone-metastore/metastore-tools/pom.xml| 2 +-
 storage-api/pom.xml | 2 +-
 testutils/ptest2/pom.xml| 2 +-
 9 files changed, 14 insertions(+), 12 deletions(-)



[hive] 01/03: HIVE-22780: Upgrade slf4j version to 1.7.30 (David Lavati via Miklos Gergely)

2020-02-04 Thread kgyrtkirk

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 7abc5f71d56628064caf16a22f72c6674ef40df2
Author: David Lavati 
AuthorDate: Tue Feb 4 14:04:32 2020 +

HIVE-22780: Upgrade slf4j version to 1.7.30 (David Lavati via Miklos 
Gergely)

Signed-off-by: Zoltan Haindrich 
---
 itests/qtest-druid/pom.xml   | 2 +-
 itests/qtest/pom.xml | 2 +-
 kafka-handler/pom.xml| 2 +-
 pom.xml  | 2 +-
 standalone-metastore/metastore-tools/pom.xml | 2 +-
 storage-api/pom.xml  | 2 +-
 testutils/ptest2/pom.xml | 2 +-
 7 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/itests/qtest-druid/pom.xml b/itests/qtest-druid/pom.xml
index 05692c7..6da7273 100644
--- a/itests/qtest-druid/pom.xml
+++ b/itests/qtest-druid/pom.xml
@@ -44,7 +44,7 @@
 16.0.1
 4.1.0
 2.0.0
-1.7.25
+1.7.30
   
   
 
diff --git a/itests/qtest/pom.xml b/itests/qtest/pom.xml
index be8e377..f6fce77 100644
--- a/itests/qtest/pom.xml
+++ b/itests/qtest/pom.xml
@@ -39,7 +39,7 @@
 
 false
 -mkdir -p
-1.7.25
+1.7.30
   
 
   
diff --git a/kafka-handler/pom.xml b/kafka-handler/pom.xml
index a66a70a..6ad41de 100644
--- a/kafka-handler/pom.xml
+++ b/kafka-handler/pom.xml
@@ -115,7 +115,7 @@
 
   org.slf4j
   slf4j-api
-  1.7.25
+  1.7.30
   test
 
   
diff --git a/pom.xml b/pom.xml
index 2dd2128..2947a29 100644
--- a/pom.xml
+++ b/pom.xml
@@ -203,7 +203,7 @@
 1.5.6
 2.5.0
 1.0.1
-1.7.10
+1.7.30
 4.0.4
 2.7.0-SNAPSHOT
 0.9.1
diff --git a/standalone-metastore/metastore-tools/pom.xml 
b/standalone-metastore/metastore-tools/pom.xml
index 63f2369..d8c4788 100644
--- a/standalone-metastore/metastore-tools/pom.xml
+++ b/standalone-metastore/metastore-tools/pom.xml
@@ -103,7 +103,7 @@
   
 org.slf4j
 slf4j-log4j12
-1.7.25
+   1.7.30
   
   
   
diff --git a/storage-api/pom.xml b/storage-api/pom.xml
index 61fdaf0..39b23b0 100644
--- a/storage-api/pom.xml
+++ b/storage-api/pom.xml
@@ -34,7 +34,7 @@
 19.0
 3.1.0
 4.11
-1.7.10
+1.7.30
 2.17
 ${basedir}/checkstyle/
   
diff --git a/testutils/ptest2/pom.xml b/testutils/ptest2/pom.xml
index 6d43056..cc04607 100644
--- a/testutils/ptest2/pom.xml
+++ b/testutils/ptest2/pom.xml
@@ -133,7 +133,7 @@ limitations under the License.
 
   org.slf4j
   slf4j-api
-  1.7.10
+  1.7.30
 
 
   org.springframework



[hive] branch master updated (7bb1d1e -> 8dec57c)

2020-01-27 Thread kgyrtkirk

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 7bb1d1e  HIVE-22518: SQLStdHiveAuthorizerFactoryForTest doesn't work 
correctly for llap tests (Zoltan Haindrich reviewed by Miklos Gergely)
 add 5201f14  HIVE-22774: Usability improvements of scheduled queries 
(Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)
 add 7a7be90  HIVE-22767: Beeline doesn't parse semicolons in comments 
properly (Zoltan Matyus via Zoltan Haindrich)
 add 8dec57c  HIVE-22679: Replace Base64 in metastore-common Package (David 
Mollitor via Naveen Gangam)

No new revisions were added by this update.

Summary of changes:
 .../src/java/org/apache/hive/beeline/Commands.java | 124 +
 .../test/org/apache/hive/beeline/TestCommands.java | 286 -
 .../java/org/apache/hadoop/hive/conf/HiveConf.java |   2 +
 .../hcatalog/listener/DummyRawStoreFailEvent.java  |   2 +-
 .../hive/ql/parse/ScheduledQueryAnalyzer.java  |   5 +-
 .../scheduled/ScheduledQueryExecutionService.java  |  17 +-
 .../hive/ql/schq/TestScheduledQueryService.java|   2 +-
 .../hadoop/hive/metastore/api/QueryState.java  |   4 +-
 .../src/gen/thrift/gen-php/metastore/Types.php |   4 +-
 .../src/gen/thrift/gen-py/hive_metastore/ttypes.py |   6 +-
 .../src/gen/thrift/gen-rb/hive_metastore_types.rb  |   6 +-
 .../hadoop/hive/metastore/conf/MetastoreConf.java  |  12 +-
 .../metastore/security/HadoopThriftAuthBridge.java |   8 +-
 .../src/main/thrift/hive_metastore.thrift  |   2 +-
 .../apache/hadoop/hive/metastore/ObjectStore.java  |  20 +-
 .../org/apache/hadoop/hive/metastore/RawStore.java |   2 +-
 .../ScheduledQueryExecutionsMaintTask.java |   4 +
 .../hadoop/hive/metastore/cache/CachedStore.java   |   2 +-
 .../metastore/DummyRawStoreControlledCommit.java   |   6 +-
 .../client/TestMetastoreScheduledQueries.java  |  57 +++-
 20 files changed, 475 insertions(+), 96 deletions(-)



[hive] branch master updated: HIVE-22518: SQLStdHiveAuthorizerFactoryForTest doesn't work correctly for llap tests (Zoltan Haindrich reviewed by Miklos Gergely)

2020-01-27 Thread kgyrtkirk

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 7bb1d1e  HIVE-22518: SQLStdHiveAuthorizerFactoryForTest doesn't work 
correctly for llap tests (Zoltan Haindrich reviewed by Miklos Gergely)
7bb1d1e is described below

commit 7bb1d1edfcba558958265ec47245bc529eaee2d8
Author: Zoltan Haindrich 
AuthorDate: Mon Jan 27 10:57:30 2020 +

HIVE-22518: SQLStdHiveAuthorizerFactoryForTest doesn't work correctly for 
llap tests (Zoltan Haindrich reviewed by Miklos Gergely)

Signed-off-by: Zoltan Haindrich 
---
 data/conf/llap/hive-site.xml   |  5 ++
 .../java/org/apache/hadoop/hive/ql/QTestUtil.java  |  2 +
 .../hive/ql/qoption/QTestAuthorizerHandler.java| 56 ++
 ql/src/test/queries/clientpositive/sysdb_schq.q|  6 ++-
 .../test/results/clientpositive/llap/sysdb.q.out   |  1 +
 .../results/clientpositive/llap/sysdb_schq.q.out   | 14 --
 6 files changed, 78 insertions(+), 6 deletions(-)

diff --git a/data/conf/llap/hive-site.xml b/data/conf/llap/hive-site.xml
index 0c5d030..d37c1b5 100644
--- a/data/conf/llap/hive-site.xml
+++ b/data/conf/llap/hive-site.xml
@@ -373,4 +373,9 @@
   
org.apache.hadoop.hive.ql.hooks.ScheduledQueryCreationRegistryHook
 
 
+
+  hive.users.in.admin.role
+  hive_admin_user
+
+
 
diff --git a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java 
b/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java
index 217049a..c5624f2 100644
--- a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java
+++ b/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java
@@ -76,6 +76,7 @@ import 
org.apache.hadoop.hive.ql.processors.CommandProcessorException;
 import org.apache.hadoop.hive.ql.processors.CommandProcessorFactory;
 import org.apache.hadoop.hive.ql.processors.CommandProcessorResponse;
 import org.apache.hadoop.hive.ql.processors.HiveCommand;
+import org.apache.hadoop.hive.ql.qoption.QTestAuthorizerHandler;
 import org.apache.hadoop.hive.ql.qoption.QTestOptionDispatcher;
 import org.apache.hadoop.hive.ql.qoption.QTestReplaceHandler;
 import org.apache.hadoop.hive.ql.qoption.QTestSysDbHandler;
@@ -211,6 +212,7 @@ public class QTestUtil {
 testFiles = datasetHandler.getDataDir(conf);
 conf.set("test.data.dir", datasetHandler.getDataDir(conf));
 conf.setVar(ConfVars.HIVE_QUERY_RESULTS_CACHE_DIRECTORY, 
"/tmp/hive/_resultscache_" + ProcessUtils.getPid());
+dispatcher.register("authorizer", new QTestAuthorizerHandler());
 dispatcher.register("dataset", datasetHandler);
 dispatcher.register("replace", replaceHandler);
 dispatcher.register("sysdb", new QTestSysDbHandler());
diff --git 
a/itests/util/src/main/java/org/apache/hadoop/hive/ql/qoption/QTestAuthorizerHandler.java
 
b/itests/util/src/main/java/org/apache/hadoop/hive/ql/qoption/QTestAuthorizerHandler.java
new file mode 100644
index 000..c74f72c
--- /dev/null
+++ 
b/itests/util/src/main/java/org/apache/hadoop/hive/ql/qoption/QTestAuthorizerHandler.java
@@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.qoption;
+
+import org.apache.hadoop.hive.ql.QTestUtil;
+
+/**
+ * QTest authorizer option
+ *
+ * Enables authorization for the qtest.
+ *
+ * Example:
+ * --! qt:authorizer
+ */
+public class QTestAuthorizerHandler implements QTestOptionHandler {
+  private boolean enabled;
+
+  @Override
+  public void processArguments(String arguments) {
+enabled = true;
+  }
+
+  @Override
+  public void beforeTest(QTestUtil qt) throws Exception {
+if (enabled) {
+  qt.getConf().set("hive.test.authz.sstd.hs2.mode", "true");
+  qt.getConf().set("hive.security.authorization.manager",
+  
"org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest");
+  qt.getConf().set("hive.security

[hive] branch master updated (a344426 -> 037eace)

2020-01-24 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from a344426  HIVE-22712: ReExec Driver execute submit the query in default 
queue irrespective of user defined queue (Rajkumar Singh via Zoltan Haindrich)
 add 037eace  HIVE-22653: Remove commons-lang leftovers (David Lavati via 
Ashutosh Chauhan, Zoltan Haindrich)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/hive/accumulo/AccumuloHiveRow.java  |  2 +-
 .../hadoop/hive/accumulo/columns/ColumnMapper.java |  2 +-
 .../columns/HiveAccumuloMapColumnMapping.java  |  2 +-
 .../hive/accumulo/columns/TestColumnMapper.java|  2 +-
 .../serde/FirstCharAccumuloCompositeRowId.java |  2 +-
 .../hive/beeline/SeparatedValuesOutputFormat.java  |  4 +--
 .../java/org/apache/hadoop/hive/cli/CliDriver.java |  2 +-
 .../org/apache/hadoop/hive/common/FileUtils.java   |  2 +-
 .../hadoop/hive/common/cli/HiveFileProcessor.java  |  2 +-
 .../format/datetime/HiveSqlDateTimeFormatter.java  |  5 ++--
 .../hadoop/hive/common/log/InPlaceUpdate.java  |  2 +-
 .../hadoop/hive/common/type/HiveBaseChar.java  |  2 +-
 .../apache/hadoop/hive/common/type/HiveChar.java   |  2 +-
 .../java/org/apache/hadoop/hive/conf/HiveConf.java |  2 +-
 .../org/apache/hadoop/hive/conf/HiveConfUtil.java  |  2 +-
 .../apache/hive/common/util/HiveStringUtils.java   |  2 +-
 .../src/java/org/apache/hive/http/HttpServer.java  |  2 +-
 .../hive/druid/DruidStorageHandlerUtils.java   |  2 +-
 .../hadoop/hive/druid/io/DruidOutputFormat.java|  2 +-
 .../apache/hadoop/hive/hbase/ColumnMappings.java   |  2 +-
 .../hadoop/hive/hbase/HiveHFileOutputFormat.java   |  2 +-
 .../java/org/apache/hive/hcatalog/cli/HCatCli.java |  2 +-
 .../cli/SemanticAnalysis/CreateTableHook.java  |  4 +--
 .../org/apache/hive/hcatalog/common/HCatUtil.java  |  2 +-
 .../hive/hcatalog/common/HiveClientCache.java  |  4 +--
 .../hive/hcatalog/data/schema/HCatFieldSchema.java |  2 +-
 .../mapreduce/FileOutputCommitterContainer.java|  2 +-
 .../hcatalog/mapreduce/HCatBaseInputFormat.java|  2 +-
 .../hive/hcatalog/mapreduce/MultiOutputFormat.java |  2 +-
 .../mapreduce/TaskCommitContextRegistry.java   |  2 +-
 .../apache/hive/hcatalog/pig/HCatBaseStorer.java   |  2 +-
 .../hcatalog/messaging/jms/MessagingUtils.java |  2 +-
 .../hive/hcatalog/streaming/StrictRegexWriter.java |  2 +-
 .../hive/hcatalog/api/HCatClientHMSImpl.java   |  2 +-
 .../org/apache/hive/hcatalog/api/HCatTable.java|  2 +-
 .../hive/hcatalog/templeton/HcatDelegator.java |  2 +-
 .../src/main/java/org/apache/hive/hplsql/Copy.java |  2 +-
 .../org/apache/hive/hplsql/functions/Function.java |  2 +-
 .../hive/hplsql/functions/FunctionDatetime.java|  2 +-
 .../hadoop/hive/ql/parse/WarehouseInstance.java|  2 +-
 .../ql/session/TestClearDanglingScratchDir.java|  2 +-
 .../apache/hive/beeline/TestBeeLineWithArgs.java   |  2 +-
 .../hive/beeline/schematool/TestSchemaTool.java|  2 +-
 .../org/apache/hive/jdbc/TestJdbcWithMiniHS2.java  |  2 +-
 .../hadoop/hive/cli/control/AbstractCliConfig.java |  2 +-
 .../hadoop/hive/cli/control/CoreBeeLineDriver.java |  2 +-
 .../apache/hadoop/hive/hbase/HBaseTestSetup.java   |  2 +-
 .../hadoop/hive/ql/QTestResultProcessor.java   |  2 +-
 .../org/apache/hadoop/hive/ql/QTestSyntaxUtil.java |  2 +-
 .../java/org/apache/hadoop/hive/ql/QTestUtil.java  |  2 +-
 .../hive/ql/hooks/CheckColumnAccessHook.java   |  2 +-
 .../hadoop/hive/ql/hooks/CheckTableAccessHook.java |  2 +-
 .../hooks/VerifySessionStateStackTracesHook.java   |  2 +-
 .../main/java/org/apache/hive/beeline/QFile.java   |  2 +-
 .../java/org/apache/hive/jdbc/HiveConnection.java  |  2 +-
 .../java/org/apache/hive/jdbc/HiveStatement.java   |  2 +-
 .../hive/llap/security/LlapTokenIdentifier.java|  2 +-
 .../test/org/apache/hadoop/hive/llap/TestRow.java  |  2 +-
 .../hive/llap/cache/LowLevelLrfuCachePolicy.java   |  2 +-
 .../hive/llap/daemon/impl/LlapTaskReporter.java|  2 +-
 .../llap/tezplugins/LlapTaskSchedulerService.java  |  2 +-
 .../hadoop/hive/metastore/HiveClientCache.java |  4 +--
 .../hive/metastore/SerDeStorageSchemaReader.java   |  4 +--
 pom.xml| 13 +
 ql/pom.xml |  5 
 ql/src/java/org/apache/hadoop/hive/ql/Context.java |  2 +-
 .../org/apache/hadoop/hive/ql/ddl/DDLUtils.java|  2 +-
 .../AlterDatabaseSetLocationOperation.java |  2 +-
 .../ddl/function/desc/DescFunctionOperation.java   |  2 +-
 .../ql/ddl/table/AbstractAlterTableOperation.java  |  2 +-
 .../create/show/ShowCreateTableOperation.java  |  2 +-
 .../hive/ql/ddl/table/info/DescTableOperation.java |  2 +-
 .../misc/AlterTableSetPropertiesOperation.java |  2 +-
 .../org/apache/hadoop/hive/ql/debug/Utils.java |  2 +-
 .../apache/hadoop/hive/ql

[hive] 03/03: HIVE-22712: ReExec Driver execute submit the query in default queue irrespective of user defined queue (Rajkumar Singh via Zoltan Haindrich)

2020-01-24 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit a344426b37e4ac21166c43ab2fee8cea1e45b30a
Author: Rajkumar Singh 
AuthorDate: Fri Jan 24 16:06:00 2020 +

HIVE-22712: ReExec Driver execute submit the query in default queue 
irrespective of user defined queue (Rajkumar Singh via Zoltan Haindrich)

Signed-off-by: Zoltan Haindrich 
---
 .../hive/ql/reexec/ReExecutionOverlayPlugin.java   |  7 ++
 ql/src/test/queries/clientpositive/retry_failure.q |  3 +++
 .../clientpositive/llap/retry_failure.q.out| 25 ++
 3 files changed, 35 insertions(+)

diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/reexec/ReExecutionOverlayPlugin.java 
b/ql/src/java/org/apache/hadoop/hive/ql/reexec/ReExecutionOverlayPlugin.java
index 50803cc..83df334 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/reexec/ReExecutionOverlayPlugin.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/reexec/ReExecutionOverlayPlugin.java
@@ -26,6 +26,7 @@ import org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext;
 import org.apache.hadoop.hive.ql.hooks.HookContext;
 import org.apache.hadoop.hive.ql.hooks.HookContext.HookType;
 import org.apache.hadoop.hive.ql.plan.mapper.PlanMapper;
+import org.apache.tez.dag.api.TezConfiguration;
 
 /**
  * Re-Executes a query only adding an extra overlay
@@ -55,6 +56,12 @@ public class ReExecutionOverlayPlugin implements 
IReExecutionPlugin {
 this.driver = driver;
 driver.getHookRunner().addOnFailureHook(new LocalHook());
 HiveConf conf = driver.getConf();
+// we unset the queue name intentionally in 
TezSessionState#startSessionAndContainers
+// as a result reexec create new session in the default queue and create 
problem
+String queueName = conf.get(TezConfiguration.TEZ_QUEUE_NAME);
+if (queueName != null) {
+  conf.set("reexec.overlay.tez.queue.name", queueName);
+}
 subtree = conf.subtree("reexec.overlay");
   }
 
diff --git a/ql/src/test/queries/clientpositive/retry_failure.q 
b/ql/src/test/queries/clientpositive/retry_failure.q
index ad12ecd..b1bc789 100644
--- a/ql/src/test/queries/clientpositive/retry_failure.q
+++ b/ql/src/test/queries/clientpositive/retry_failure.q
@@ -9,5 +9,8 @@ set reexec.overlay.zzz=2;
 
 set hive.query.reexecution.enabled=true;
 set hive.query.reexecution.strategies=overlay;
+set hive.fetch.task.conversion=none;
+set tez.queue.name=default;
 
 select assert_true(${hiveconf:zzz} > a) from tx_n1 group by a;
+select assert_true(${hiveconf:zzz} > a), 
assert_true("${hiveconf:tez.queue.name}" = "default") from tx_n1;
diff --git a/ql/src/test/results/clientpositive/llap/retry_failure.q.out 
b/ql/src/test/results/clientpositive/llap/retry_failure.q.out
index b0a153d..59d854a 100644
--- a/ql/src/test/results/clientpositive/llap/retry_failure.q.out
+++ b/ql/src/test/results/clientpositive/llap/retry_failure.q.out
@@ -41,3 +41,28 @@ POSTHOOK: type: QUERY
 POSTHOOK: Input: default@tx_n1
  A masked pattern was here 
 NULL
+PREHOOK: query: select assert_true(1 > a), assert_true("default" = "default") 
from tx_n1
+PREHOOK: type: QUERY
+PREHOOK: Input: default@tx_n1
+ A masked pattern was here 
+Status: Failed
+Vertex failed, vertexName=Map 1, vertexId=vertex_#ID#, diagnostics=[Task 
failed, taskId=task_#ID#, diagnostics=[TaskAttempt 0 failed, info=[Error: Error 
while running task ( failure ) : attempt_#ID#:java.lang.RuntimeException: 
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
Hive Runtime Error while processing row
+ A masked pattern was here 
+], TaskAttempt 1 failed, info=[Error: Error while running task ( failure ) : 
attempt_#ID#:java.lang.RuntimeException: java.lang.RuntimeException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing row
+ A masked pattern was here 
+]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 
killedTasks:0, Vertex vertex_#ID# [Map 1] killed/failed due to:OWN_TASK_FAILURE]
+DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0
+FAILED: Execution Error, return code 2 from 
org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, 
vertexId=vertex_#ID#, diagnostics=[Task failed, taskId=task_#ID#, 
diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( 
failure ) : attempt_#ID#:java.lang.RuntimeException: 
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
Hive Runtime Error while processing row
+ A masked pattern was here 
+], TaskAttempt 1 failed, info=[Error: Error while running task ( failure ) : 
attempt_#ID#:java.lang.RuntimeException: java.lang.RuntimeException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing row
+#
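The queue-name handling added to ReExecutionOverlayPlugin above amounts to mirroring `tez.queue.name` under the `reexec.overlay` prefix before the overlay subtree is applied, so the re-executed query lands in the user's queue instead of the default one. A minimal standalone sketch of that pattern, using a plain `Map` in place of `HiveConf` (class and method names here are illustrative, not Hive API):

```java
import java.util.HashMap;
import java.util.Map;

public class QueueOverlay {

    // Copy tez.queue.name (if set) under the reexec.overlay prefix so that
    // applying the overlay on re-execution keeps the original queue.
    static void preserveQueue(Map<String, String> conf) {
        String queueName = conf.get("tez.queue.name");
        if (queueName != null) {
            conf.put("reexec.overlay.tez.queue.name", queueName);
        }
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("tez.queue.name", "etl");
        preserveQueue(conf);
        // The overlay now carries the queue name forward.
        System.out.println(conf.get("reexec.overlay.tez.queue.name")); // etl
    }
}
```

When `tez.queue.name` is unset, the overlay is left untouched, matching the null check in the patch.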

[hive] 01/03: HIVE-22706: Jdbc storage handler incorrectly interprets boolean column value in derby (Zoltan Haindrich reviewed by Syed Shameerur Rahman, Miklos Gergely)

2020-01-24 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit dc7d146bb9fcee08a6e06a6bb25d0f9d13a6dbdf
Author: Zoltan Haindrich 
AuthorDate: Fri Jan 24 15:24:23 2020 +

HIVE-22706: Jdbc storage handler incorrectly interprets boolean column 
value in derby (Zoltan Haindrich reviewed by Syed Shameerur Rahman, Miklos 
Gergely)

Signed-off-by: Zoltan Haindrich 
---
 .../apache/hive/storage/jdbc/DBRecordWritable.java |  9 +++-
 .../hive/storage/jdbc/dao/JdbcRecordIterator.java  |  8 ++-
 ql/src/test/queries/clientpositive/jdbc_handler.q  | 20 
 .../results/clientpositive/llap/jdbc_handler.q.out | 59 --
 .../results/clientpositive/llap/sysdb_schq.q.out   |  2 +-
 5 files changed, 59 insertions(+), 39 deletions(-)

diff --git 
a/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/DBRecordWritable.java 
b/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/DBRecordWritable.java
index b062aa3..77abae9 100644
--- 
a/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/DBRecordWritable.java
+++ 
b/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/DBRecordWritable.java
@@ -20,9 +20,11 @@ package org.apache.hive.storage.jdbc;
 import java.io.DataInput;
 import java.io.DataOutput;
 import java.io.IOException;
+import java.sql.ParameterMetaData;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.sql.Types;
 import java.util.Arrays;
 import org.apache.hadoop.io.Writable;
 
@@ -59,8 +61,13 @@ public class DBRecordWritable implements Writable,
 if (columnValues == null) {
   throw new SQLException("No data available to be written");
 }
+ParameterMetaData parameterMetaData = statement.getParameterMetaData();
 for (int i = 0; i < columnValues.length; i++) {
-  statement.setObject(i + 1, columnValues[i]);
+  Object value = columnValues[i];
+  if ((parameterMetaData.getParameterType(i + 1) == Types.CHAR) && value 
!= null && value instanceof Boolean) {
+value = ((Boolean) value).booleanValue() ? "1" : "0";
+  }
+  statement.setObject(i + 1, value);
 }
   }
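The write-side fix in the hunk above consults `ParameterMetaData` and rewrites a `Boolean` as `"1"`/`"0"` only when the target JDBC column is `CHAR` (Derby stores DataNucleus booleans in `CHAR(1)`). The conversion can be sketched as a standalone helper; the class and method below are illustrative, not part of the Hive codebase:

```java
import java.sql.Types;

public class BooleanCharBinding {

    // Returns the value to bind for a parameter of the given java.sql.Types
    // code: Booleans destined for CHAR columns become "1"/"0", everything
    // else passes through unchanged.
    static Object adapt(int jdbcType, Object value) {
        if (jdbcType == Types.CHAR && value instanceof Boolean) {
            return ((Boolean) value) ? "1" : "0";
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(adapt(Types.CHAR, Boolean.TRUE));    // 1
        System.out.println(adapt(Types.CHAR, Boolean.FALSE));   // 0
        System.out.println(adapt(Types.VARCHAR, Boolean.TRUE)); // true
    }
}
```

The read side of the patch is symmetric: when `rs.getBoolean()` is called on a `CHAR` column, anything other than `"N"` is accepted as true.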
 
diff --git 
a/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/dao/JdbcRecordIterator.java
 
b/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/dao/JdbcRecordIterator.java
index dbc8453..cd7cd4f 100644
--- 
a/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/dao/JdbcRecordIterator.java
+++ 
b/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/dao/JdbcRecordIterator.java
@@ -30,6 +30,7 @@ import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLDataException;
 import java.sql.SQLException;
+import java.sql.Types;
 import java.util.HashMap;
 import java.util.Iterator;
 import java.util.List;
@@ -109,7 +110,12 @@ public class JdbcRecordIterator implements 
Iterator> {
   value = rs.getBigDecimal(i + 1);
   break;
 case BOOLEAN:
-  value = rs.getBoolean(i + 1);
+  boolean b = rs.getBoolean(i + 1);
+  if (b && rs.getMetaData().getColumnType(i + 1) == Types.CHAR) {
+// also accept Y/N in case of CHAR(1) - datanucleus stores 
booleans in CHAR(1) fields for derby 
+b = !"N".equals(rs.getString(i + 1));
+  }
+  value = b;
   break;
 case CHAR:
 case VARCHAR:
diff --git a/ql/src/test/queries/clientpositive/jdbc_handler.q 
b/ql/src/test/queries/clientpositive/jdbc_handler.q
index 2c7e3fd..f2eba04 100644
--- a/ql/src/test/queries/clientpositive/jdbc_handler.q
+++ b/ql/src/test/queries/clientpositive/jdbc_handler.q
@@ -98,7 +98,7 @@ FROM src
 
 SELECT dboutput ( 
'jdbc:derby:;databaseName=${system:test.tmp.dir}/test_insert_derby_as_external_table_db;create=true','','',
 'CREATE TABLE INSERT_TO_DERBY_TABLE (a BOOLEAN, b  INTEGER, c BIGINT, d FLOAT, 
e DOUBLE, f DATE, g VARCHAR(27),
-  h VARCHAR(27), i CHAR(2), j TIMESTAMP, k 
DECIMAL(5,4), l SMALLINT, m SMALLINT)' )
+  h VARCHAR(27), i CHAR(2), j TIMESTAMP, k 
DECIMAL(5,4), l SMALLINT, m SMALLINT, b1 CHAR(10))' )
 
 limit 1;
 
@@ -116,7 +116,8 @@ CREATE EXTERNAL TABLE insert_to_ext_derby_table
  j TIMESTAMP,
  k DECIMAL(5,4),
  l TINYINT,
- m SMALLINT
+ m SMALLINT,
+ b1 BOOLEAN
  )
 STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
 TBLPROPERTIES (
@@ -143,24 +144,25 @@ CREATE TABLE test_insert_tbl
  j TIMESTAMP,
  k DECIMAL(5,4),
  l TINYINT,
- m SMALLINT
+ m SMALLINT,
+ b1 BOOLEAN
  );
 
-INSERT INTO test_insert_tbl VALUES(true, 342, 8900, 9.63, 1099., 
'2019-04-11', 'abcd', 'efgh', 'k', '2019-05-

[hive] branch master updated (2d444fa -> a344426)

2020-01-24 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 2d444fa  HIVE-20801: ACID: Allow DbTxnManager to ignore non-ACID table 
read locking (Gopal Vijayaraghavan, reviewed by Eugene Koifman, Ashutosh 
Chauhan, Denys Kuzmenko)
 new dc7d146  HIVE-22706: Jdbc storage handler incorrectly interprets 
boolean column value in derby (Zoltan Haindrich reviewed by Syed Shameerur 
Rahman, Miklos Gergely)
 new 2676818  HIVE-22761: Scheduled query executor fails to report query 
state as errored if session initialization fails (Zoltan Haindrich reviewed by 
Miklos Gergely)
 new a344426  HIVE-22712: ReExec Driver execute submit the query in default 
queue irrespective of user defined queue (Rajkumar Singh via Zoltan Haindrich)

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../apache/hive/storage/jdbc/DBRecordWritable.java |  9 +++-
 .../hive/storage/jdbc/dao/JdbcRecordIterator.java  |  8 ++-
 .../hive/ql/reexec/ReExecutionOverlayPlugin.java   |  7 +++
 .../scheduled/ScheduledQueryExecutionService.java  |  6 +--
 ql/src/test/queries/clientpositive/jdbc_handler.q  | 20 
 ql/src/test/queries/clientpositive/retry_failure.q |  3 ++
 .../results/clientpositive/llap/jdbc_handler.q.out | 59 --
 .../clientpositive/llap/retry_failure.q.out| 25 +
 .../results/clientpositive/llap/sysdb_schq.q.out   |  2 +-
 9 files changed, 97 insertions(+), 42 deletions(-)



[hive] 02/03: HIVE-22761: Scheduled query executor fails to report query state as errored if session initialization fails (Zoltan Haindrich reviewed by Miklos Gergely)

2020-01-24 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 2676818851484b4c6f36937309c3c8fa98e98e5b
Author: Zoltan Haindrich 
AuthorDate: Fri Jan 24 15:24:27 2020 +

HIVE-22761: Scheduled query executor fails to report query state as errored 
if session initialization fails (Zoltan Haindrich reviewed by Miklos Gergely)

Signed-off-by: Zoltan Haindrich 
---
 .../hadoop/hive/ql/scheduled/ScheduledQueryExecutionService.java| 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionService.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionService.java
index 48bdc97..813f3af 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionService.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/scheduled/ScheduledQueryExecutionService.java
@@ -106,6 +106,9 @@ public class ScheduledQueryExecutionService implements 
Closeable {
 
 private void processQuery(ScheduledQueryPollResponse q) {
   SessionState state = null;
+  info = new ScheduledQueryProgressInfo();
+  info.setScheduledExecutionId(q.getExecutionId());
+  info.setState(QueryState.EXECUTING);
   try {
 HiveConf conf = new HiveConf(context.conf);
 conf.set(Constants.HIVE_QUERY_EXCLUSIVE_LOCK, 
lockNameFor(q.getScheduleKey()));
@@ -113,9 +116,6 @@ public class ScheduledQueryExecutionService implements 
Closeable {
 conf.unset(HiveConf.ConfVars.HIVESESSIONID.varname);
 state = new SessionState(conf, q.getUser());
 SessionState.start(state);
-info = new ScheduledQueryProgressInfo();
-info.setScheduledExecutionId(q.getExecutionId());
-info.setState(QueryState.EXECUTING);
 reportQueryProgress();
 try (
   IDriver driver = 
DriverFactory.newDriver(DriverFactory.getNewQueryState(conf), null)) {
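The HIVE-22761 change above is purely an ordering fix: the progress info is now created and marked `EXECUTING` before the `try` block, so a failure during session initialization can still be reported against the right execution id. The shape of that fix can be illustrated in isolation; all names below are stand-ins, not the Hive scheduled-query API:

```java
public class ReportBeforeTry {

    static String lastReport;

    static void report(long id, String state) {
        lastReport = id + ":" + state;
    }

    static void process(long executionId, boolean failDuringInit) {
        // State is initialized BEFORE the try block (the HIVE-22761 fix),
        // so even an init failure leaves us with an id to report against.
        String state = "EXECUTING";
        report(executionId, state);
        try {
            if (failDuringInit) {
                throw new IllegalStateException("session init failed");
            }
            state = "FINISHED";
        } catch (RuntimeException e) {
            state = "ERRORED"; // error is still tied to the execution id
        } finally {
            report(executionId, state);
        }
    }

    public static void main(String[] args) {
        process(42L, true);
        System.out.println(lastReport); // 42:ERRORED
    }
}
```

Before the patch, the equivalent of `state = "EXECUTING"` sat inside the `try`, so an exception thrown while starting the session left the execution unreported.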



[hive] branch branch-3 updated: HIVE-22704: Distribution package incorrectly ships the upgrade.order files from the metastore module (Zoltan Haindrich reviewed by Naveen Gangam)

2020-01-21 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch branch-3
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/branch-3 by this push:
 new 0571341  HIVE-22704: Distribution package incorrectly ships the 
upgrade.order files from the metastore module (Zoltan Haindrich reviewed by 
Naveen Gangam)
0571341 is described below

commit 0571341dad8b11ce777416b5a636c35bc00969ba
Author: Zoltan Haindrich 
AuthorDate: Mon Jan 13 15:13:25 2020 +

HIVE-22704: Distribution package incorrectly ships the upgrade.order files 
from the metastore module (Zoltan Haindrich reviewed by Naveen Gangam)

Signed-off-by: Zoltan Haindrich 
(cherry picked from commit 36add83194655dd5f6489100ddee18014c69349b)
---
 metastore/scripts/upgrade/derby/upgrade.order.derby | 17 -
 metastore/scripts/upgrade/mssql/upgrade.order.mssql | 11 ---
 metastore/scripts/upgrade/mysql/upgrade.order.mysql | 17 -
 metastore/scripts/upgrade/oracle/upgrade.order.oracle   | 13 -
 .../scripts/upgrade/postgres/upgrade.order.postgres | 17 -
 packaging/src/main/assembly/bin.xml |  4 
 6 files changed, 79 deletions(-)

diff --git a/metastore/scripts/upgrade/derby/upgrade.order.derby 
b/metastore/scripts/upgrade/derby/upgrade.order.derby
deleted file mode 100644
index f43da9a..000
--- a/metastore/scripts/upgrade/derby/upgrade.order.derby
+++ /dev/null
@@ -1,17 +0,0 @@
-0.5.0-to-0.6.0
-0.6.0-to-0.7.0
-0.7.0-to-0.8.0
-0.8.0-to-0.9.0
-0.9.0-to-0.10.0
-0.10.0-to-0.11.0
-0.11.0-to-0.12.0
-0.12.0-to-0.13.0
-0.13.0-to-0.14.0
-0.14.0-to-1.1.0
-1.1.0-to-1.2.0
-1.2.0-to-2.0.0
-2.0.0-to-2.1.0
-2.1.0-to-2.2.0
-2.2.0-to-2.3.0
-2.3.0-to-3.0.0
-3.0.0-to-3.1.0
diff --git a/metastore/scripts/upgrade/mssql/upgrade.order.mssql 
b/metastore/scripts/upgrade/mssql/upgrade.order.mssql
deleted file mode 100644
index 5572c26..000
--- a/metastore/scripts/upgrade/mssql/upgrade.order.mssql
+++ /dev/null
@@ -1,11 +0,0 @@
-0.11.0-to-0.12.0
-0.12.0-to-0.13.0
-0.13.0-to-0.14.0
-0.14.0-to-1.1.0
-1.1.0-to-1.2.0
-1.2.0-to-2.0.0
-2.0.0-to-2.1.0
-2.1.0-to-2.2.0
-2.2.0-to-2.3.0
-2.3.0-to-3.0.0
-3.0.0-to-3.1.0
diff --git a/metastore/scripts/upgrade/mysql/upgrade.order.mysql 
b/metastore/scripts/upgrade/mysql/upgrade.order.mysql
deleted file mode 100644
index f43da9a..000
--- a/metastore/scripts/upgrade/mysql/upgrade.order.mysql
+++ /dev/null
@@ -1,17 +0,0 @@
-0.5.0-to-0.6.0
-0.6.0-to-0.7.0
-0.7.0-to-0.8.0
-0.8.0-to-0.9.0
-0.9.0-to-0.10.0
-0.10.0-to-0.11.0
-0.11.0-to-0.12.0
-0.12.0-to-0.13.0
-0.13.0-to-0.14.0
-0.14.0-to-1.1.0
-1.1.0-to-1.2.0
-1.2.0-to-2.0.0
-2.0.0-to-2.1.0
-2.1.0-to-2.2.0
-2.2.0-to-2.3.0
-2.3.0-to-3.0.0
-3.0.0-to-3.1.0
diff --git a/metastore/scripts/upgrade/oracle/upgrade.order.oracle 
b/metastore/scripts/upgrade/oracle/upgrade.order.oracle
deleted file mode 100644
index 72b8303..000
--- a/metastore/scripts/upgrade/oracle/upgrade.order.oracle
+++ /dev/null
@@ -1,13 +0,0 @@
-0.9.0-to-0.10.0
-0.10.0-to-0.11.0
-0.11.0-to-0.12.0
-0.12.0-to-0.13.0
-0.13.0-to-0.14.0
-0.14.0-to-1.1.0
-1.1.0-to-1.2.0
-1.2.0-to-2.0.0
-2.0.0-to-2.1.0
-2.1.0-to-2.2.0
-2.2.0-to-2.3.0
-2.3.0-to-3.0.0
-3.0.0-to-3.1.0
diff --git a/metastore/scripts/upgrade/postgres/upgrade.order.postgres 
b/metastore/scripts/upgrade/postgres/upgrade.order.postgres
deleted file mode 100644
index f43da9a..000
--- a/metastore/scripts/upgrade/postgres/upgrade.order.postgres
+++ /dev/null
@@ -1,17 +0,0 @@
-0.5.0-to-0.6.0
-0.6.0-to-0.7.0
-0.7.0-to-0.8.0
-0.8.0-to-0.9.0
-0.9.0-to-0.10.0
-0.10.0-to-0.11.0
-0.11.0-to-0.12.0
-0.12.0-to-0.13.0
-0.13.0-to-0.14.0
-0.14.0-to-1.1.0
-1.1.0-to-1.2.0
-1.2.0-to-2.0.0
-2.0.0-to-2.1.0
-2.1.0-to-2.2.0
-2.2.0-to-2.3.0
-2.3.0-to-3.0.0
-3.0.0-to-3.1.0
diff --git a/packaging/src/main/assembly/bin.xml 
b/packaging/src/main/assembly/bin.xml
index 2dd9260..6b2d678 100644
--- a/packaging/src/main/assembly/bin.xml
+++ b/packaging/src/main/assembly/bin.xml
@@ -214,10 +214,6 @@
   
 **/*
   
-  
-
-%regex[(!hive)/upgrade.order.*]
-  
   scripts/metastore/upgrade
 
 



[hive] branch branch-3.1 updated: HIVE-22704: Distribution package incorrectly ships the upgrade.order files from the metastore module (Zoltan Haindrich reviewed by Naveen Gangam)

2020-01-21 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 60fdf3d  HIVE-22704: Distribution package incorrectly ships the 
upgrade.order files from the metastore module (Zoltan Haindrich reviewed by 
Naveen Gangam)
60fdf3d is described below

commit 60fdf3d331c7ee6371ec63e60723da09f9f2428e
Author: Zoltan Haindrich 
AuthorDate: Mon Jan 13 15:13:25 2020 +

HIVE-22704: Distribution package incorrectly ships the upgrade.order files 
from the metastore module (Zoltan Haindrich reviewed by Naveen Gangam)

Signed-off-by: Zoltan Haindrich 
(cherry picked from commit 36add83194655dd5f6489100ddee18014c69349b)
---
 metastore/scripts/upgrade/derby/upgrade.order.derby | 17 -
 metastore/scripts/upgrade/mssql/upgrade.order.mssql | 11 ---
 metastore/scripts/upgrade/mysql/upgrade.order.mysql | 17 -
 metastore/scripts/upgrade/oracle/upgrade.order.oracle   | 13 -
 .../scripts/upgrade/postgres/upgrade.order.postgres | 17 -
 packaging/src/main/assembly/bin.xml |  4 
 6 files changed, 79 deletions(-)

diff --git a/metastore/scripts/upgrade/derby/upgrade.order.derby 
b/metastore/scripts/upgrade/derby/upgrade.order.derby
deleted file mode 100644
index f43da9a..000
--- a/metastore/scripts/upgrade/derby/upgrade.order.derby
+++ /dev/null
@@ -1,17 +0,0 @@
-0.5.0-to-0.6.0
-0.6.0-to-0.7.0
-0.7.0-to-0.8.0
-0.8.0-to-0.9.0
-0.9.0-to-0.10.0
-0.10.0-to-0.11.0
-0.11.0-to-0.12.0
-0.12.0-to-0.13.0
-0.13.0-to-0.14.0
-0.14.0-to-1.1.0
-1.1.0-to-1.2.0
-1.2.0-to-2.0.0
-2.0.0-to-2.1.0
-2.1.0-to-2.2.0
-2.2.0-to-2.3.0
-2.3.0-to-3.0.0
-3.0.0-to-3.1.0
diff --git a/metastore/scripts/upgrade/mssql/upgrade.order.mssql 
b/metastore/scripts/upgrade/mssql/upgrade.order.mssql
deleted file mode 100644
index 5572c26..000
--- a/metastore/scripts/upgrade/mssql/upgrade.order.mssql
+++ /dev/null
@@ -1,11 +0,0 @@
-0.11.0-to-0.12.0
-0.12.0-to-0.13.0
-0.13.0-to-0.14.0
-0.14.0-to-1.1.0
-1.1.0-to-1.2.0
-1.2.0-to-2.0.0
-2.0.0-to-2.1.0
-2.1.0-to-2.2.0
-2.2.0-to-2.3.0
-2.3.0-to-3.0.0
-3.0.0-to-3.1.0
diff --git a/metastore/scripts/upgrade/mysql/upgrade.order.mysql 
b/metastore/scripts/upgrade/mysql/upgrade.order.mysql
deleted file mode 100644
index f43da9a..000
--- a/metastore/scripts/upgrade/mysql/upgrade.order.mysql
+++ /dev/null
@@ -1,17 +0,0 @@
-0.5.0-to-0.6.0
-0.6.0-to-0.7.0
-0.7.0-to-0.8.0
-0.8.0-to-0.9.0
-0.9.0-to-0.10.0
-0.10.0-to-0.11.0
-0.11.0-to-0.12.0
-0.12.0-to-0.13.0
-0.13.0-to-0.14.0
-0.14.0-to-1.1.0
-1.1.0-to-1.2.0
-1.2.0-to-2.0.0
-2.0.0-to-2.1.0
-2.1.0-to-2.2.0
-2.2.0-to-2.3.0
-2.3.0-to-3.0.0
-3.0.0-to-3.1.0
diff --git a/metastore/scripts/upgrade/oracle/upgrade.order.oracle 
b/metastore/scripts/upgrade/oracle/upgrade.order.oracle
deleted file mode 100644
index 72b8303..000
--- a/metastore/scripts/upgrade/oracle/upgrade.order.oracle
+++ /dev/null
@@ -1,13 +0,0 @@
-0.9.0-to-0.10.0
-0.10.0-to-0.11.0
-0.11.0-to-0.12.0
-0.12.0-to-0.13.0
-0.13.0-to-0.14.0
-0.14.0-to-1.1.0
-1.1.0-to-1.2.0
-1.2.0-to-2.0.0
-2.0.0-to-2.1.0
-2.1.0-to-2.2.0
-2.2.0-to-2.3.0
-2.3.0-to-3.0.0
-3.0.0-to-3.1.0
diff --git a/metastore/scripts/upgrade/postgres/upgrade.order.postgres 
b/metastore/scripts/upgrade/postgres/upgrade.order.postgres
deleted file mode 100644
index f43da9a..000
--- a/metastore/scripts/upgrade/postgres/upgrade.order.postgres
+++ /dev/null
@@ -1,17 +0,0 @@
-0.5.0-to-0.6.0
-0.6.0-to-0.7.0
-0.7.0-to-0.8.0
-0.8.0-to-0.9.0
-0.9.0-to-0.10.0
-0.10.0-to-0.11.0
-0.11.0-to-0.12.0
-0.12.0-to-0.13.0
-0.13.0-to-0.14.0
-0.14.0-to-1.1.0
-1.1.0-to-1.2.0
-1.2.0-to-2.0.0
-2.0.0-to-2.1.0
-2.1.0-to-2.2.0
-2.2.0-to-2.3.0
-2.3.0-to-3.0.0
-3.0.0-to-3.1.0
diff --git a/packaging/src/main/assembly/bin.xml 
b/packaging/src/main/assembly/bin.xml
index 2dd9260..6b2d678 100644
--- a/packaging/src/main/assembly/bin.xml
+++ b/packaging/src/main/assembly/bin.xml
@@ -214,10 +214,6 @@
   
 **/*
   
-  
-
-%regex[(!hive)/upgrade.order.*]
-  
   scripts/metastore/upgrade
 
 


