[hive] branch master updated: HIVE-26806: Increase (temporarily) Jenkins executor timeout to allow tests to pass (Stamatis Zampetakis)

2022-12-02 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new ffdebaf67dd HIVE-26806: Increase (temporarily) Jenkins executor timeout to allow tests to pass (Stamatis Zampetakis)
ffdebaf67dd is described below

commit ffdebaf67ddc96463ea5b69d21fa376b70ef08b3
Author: Stamatis Zampetakis 
AuthorDate: Fri Dec 2 22:36:35 2022 +0100

HIVE-26806: Increase (temporarily) Jenkins executor timeout to allow tests to pass (Stamatis Zampetakis)

Hopefully after getting a successful run the Parallel Test Executor
plugin will rebalance the tests correctly in subsequent jobs and we
will be able to restore the timeout to the old value.
---
 Jenkinsfile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index f4eb2c89f52..2c551a21568 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -67,7 +67,7 @@ setPrLabel("PENDING");
 
 def executorNode(run) {
   hdbPodTemplate {
-    timeout(time: 6, unit: 'HOURS') {
+    timeout(time: 12, unit: 'HOURS') {
   node(POD_LABEL) {
 container('hdb') {
   run()



[calcite] branch main updated: [CALCITE-5332] Facilitate PruneEmptyRules configuration by adding DEFAULT instances

2022-11-28 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/main by this push:
 new 085d65ac99 [CALCITE-5332] Facilitate PruneEmptyRules configuration by adding DEFAULT instances
085d65ac99 is described below

commit 085d65ac99f0eb87fcd8afce56192de92fd51c0d
Author: Stamatis Zampetakis 
AuthorDate: Thu Oct 13 15:49:35 2022 +0200

[CALCITE-5332] Facilitate PruneEmptyRules configuration by adding DEFAULT instances

Creating new PruneEmptyRule instances with slightly different
configurations (e.g., modify a rule to match FooProject.class instead
of Project.class) is almost impossible for the following reasons:

* ImmutableXXConfig classes are package private;
* Constructors in PruneEmptyRules class are either deprecated or
package private;
* Existing configurations do not provide DEFAULT instances;
* Configuration cannot be obtained from existing rules because the
latter are declared as RelOptRule (and not RelRule).

Add DEFAULT configuration instances for each rule variant to provide
users an anchor point to modify the behavior of a rule and adhere to the
RelRule interface, which requires all configs to have a DEFAULT
instance.
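The pattern this commit introduces can be sketched in plain Java with no Calcite dependency (all names below are hypothetical stand-ins, not the actual Calcite API): an immutable config type exposes a DEFAULT instance plus copy-on-write "wither" methods, so users derive customized variants instead of rebuilding configurations from package-private pieces.

```java
// Plain-Java sketch of the DEFAULT-config pattern; PruneConfig is a
// hypothetical stand-in, NOT the actual Calcite API.
class DefaultConfigDemo {
    public static void main(String[] args) {
        // Start from the DEFAULT anchor and derive a customized variant,
        // e.g. matching a different class, as the commit enables.
        PruneConfig custom = PruneConfig.DEFAULT
            .withOperandClass(String.class)
            .withDescription("MyUnion");
        System.out.println(custom.description);              // MyUnion
        System.out.println(PruneConfig.DEFAULT.description); // Union
    }
}

// Immutable config: customization happens via copy-on-write "wither"
// methods rather than package-private constructors.
final class PruneConfig {
    static final PruneConfig DEFAULT = new PruneConfig(Object.class, "Union");

    final Class<?> operandClass; // class the rule matches on
    final String description;    // rule description

    private PruneConfig(Class<?> operandClass, String description) {
        this.operandClass = operandClass;
        this.description = description;
    }

    PruneConfig withOperandClass(Class<?> c) {
        return new PruneConfig(c, description);
    }

    PruneConfig withDescription(String d) {
        return new PruneConfig(operandClass, d);
    }
}
```

Because each `with` method returns a fresh copy, the shared DEFAULT instance is never mutated, which is what makes it safe to publish as a public anchor point.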

Close apache/calcite#2937
---
 .../apache/calcite/rel/rules/PruneEmptyRules.java  | 129 +++--
 1 file changed, 65 insertions(+), 64 deletions(-)

diff --git a/core/src/main/java/org/apache/calcite/rel/rules/PruneEmptyRules.java b/core/src/main/java/org/apache/calcite/rel/rules/PruneEmptyRules.java
index 34780e1cf3..acfafc1224 100644
--- a/core/src/main/java/org/apache/calcite/rel/rules/PruneEmptyRules.java
+++ b/core/src/main/java/org/apache/calcite/rel/rules/PruneEmptyRules.java
@@ -94,14 +94,7 @@ public abstract class PruneEmptyRules {
* Union(Empty, Empty) becomes Empty
* 
*/
-  public static final RelOptRule UNION_INSTANCE =
-  ImmutableUnionEmptyPruneRuleConfig.of()
-  .withOperandSupplier(b0 ->
-  b0.operand(Union.class).unorderedInputs(b1 ->
-  b1.operand(Values.class)
-  .predicate(Values::isEmpty).noInputs()))
-  .withDescription("Union")
-  .toRule();
+  public static final RelOptRule UNION_INSTANCE = UnionEmptyPruneRuleConfig.DEFAULT.toRule();
 
 
   /**
@@ -115,14 +108,7 @@ public abstract class PruneEmptyRules {
* Minus(Empty, Rel) becomes Empty
* 
*/
-  public static final RelOptRule MINUS_INSTANCE =
-  ImmutableMinusEmptyPruneRuleConfig.of()
-  .withOperandSupplier(b0 ->
-  b0.operand(Minus.class).unorderedInputs(b1 ->
-  b1.operand(Values.class).predicate(Values::isEmpty)
-  .noInputs()))
-  .withDescription("Minus")
-  .toRule();
+  public static final RelOptRule MINUS_INSTANCE = MinusEmptyPruneRuleConfig.DEFAULT.toRule();
 
   /**
* Rule that converts a
@@ -137,13 +123,7 @@ public abstract class PruneEmptyRules {
* 
*/
   public static final RelOptRule INTERSECT_INSTANCE =
-  ImmutableIntersectEmptyPruneRuleConfig.of()
-  .withOperandSupplier(b0 ->
-  b0.operand(Intersect.class).unorderedInputs(b1 ->
-  b1.operand(Values.class).predicate(Values::isEmpty)
-  .noInputs()))
-  .withDescription("Intersect")
-  .toRule();
+  IntersectEmptyPruneRuleConfig.DEFAULT.toRule();
 
   private static boolean isEmpty(RelNode node) {
 if (node instanceof Values) {
@@ -174,10 +154,7 @@ public abstract class PruneEmptyRules {
* the table is empty or not.
*/
   public static final RelOptRule EMPTY_TABLE_INSTANCE =
-  ImmutableZeroMaxRowsRuleConfig.of()
-  .withOperandSupplier(b0 -> b0.operand(TableScan.class).noInputs())
-  .withDescription("PruneZeroRowsTable")
-  .toRule();
+  ImmutableZeroMaxRowsRuleConfig.DEFAULT.toRule();
 
   /**
* Rule that converts a {@link org.apache.calcite.rel.core.Project}
@@ -190,10 +167,7 @@ public abstract class PruneEmptyRules {
* 
*/
   public static final RelOptRule PROJECT_INSTANCE =
-  ImmutableRemoveEmptySingleRuleConfig.of()
-  .withDescription("PruneEmptyProject")
-  .withOperandFor(Project.class, project -> true)
-  .toRule();
+  RemoveEmptySingleRule.RemoveEmptySingleRuleConfig.PROJECT.toRule();
 
   /**
* Rule that converts a {@link org.apache.calcite.rel.core.Filter}
@@ -206,10 +180,7 @@ public abstract class PruneEmptyRules {
* 
*/
   public static final RelOptRule FILTER_INSTANCE =
-  ImmutableRemoveEmptySingleRuleConfig.of()
-  .withDescription("PruneEmptyFilter"

[hive] branch master updated: HIVE-25723: Typos in DateUtils, pom.xml (fengpan0403, NeverLanded, reviewed by Stamatis Zampetakis)

2022-11-24 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 5d64f650f85 HIVE-25723: Typos in DateUtils, pom.xml (fengpan0403, NeverLanded, reviewed by Stamatis Zampetakis)
5d64f650f85 is described below

commit 5d64f650f852931ddd6a873008308edf894262c0
Author: fengpan0403 
AuthorDate: Thu Nov 18 16:01:04 2021 +0800

HIVE-25723: Typos in DateUtils, pom.xml (fengpan0403, NeverLanded, reviewed by Stamatis Zampetakis)

Co-authored-by: fengpan0403 
Co-authored-by: Cipher 

Closes #3704
Closes #2800
---
 common/src/java/org/apache/hive/common/util/DateUtils.java | 2 +-
 pom.xml| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/common/src/java/org/apache/hive/common/util/DateUtils.java b/common/src/java/org/apache/hive/common/util/DateUtils.java
index e70de289b4b..cc023011a6f 100644
--- a/common/src/java/org/apache/hive/common/util/DateUtils.java
+++ b/common/src/java/org/apache/hive/common/util/DateUtils.java
@@ -73,7 +73,7 @@ public class DateUtils {
* @param field the calendar field
* @return the calendar field name
* @exception IndexOutOfBoundsException if field is negative,
-   * equal to or greater then FIELD_COUNT.
+   * equal to or greater than FIELD_COUNT.
*/
   public static String getFieldName(int field) {
   return FIELD_NAME[field];
diff --git a/pom.xml b/pom.xml
index bfdbea6497c..32d8d553db1 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1670,7 +1670,7 @@
 
${test.log4j.scheme}${test.conf.dir}/hive-log4j2.properties
 
${test.console.log.level}
 true
-
+
 ${test.tmp.dir}
 
 ${test.tmp.dir}



[hive] branch master updated: HIVE-21599: Wrong results for partitioned Parquet table when files contain partition column (Soumyakanti Das reviewed by Stamatis Zampetakis, Aman Sinha, Alessandro Solim

2022-11-22 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new eb57ac9a0ae HIVE-21599: Wrong results for partitioned Parquet table when files contain partition column (Soumyakanti Das reviewed by Stamatis Zampetakis, Aman Sinha, Alessandro Solimando)
eb57ac9a0ae is described below

commit eb57ac9a0aef456f25b559a4ac225ac9ebf40508
Author: Soumyakanti Das 
AuthorDate: Tue Nov 8 16:14:09 2022 -0800

HIVE-21599: Wrong results for partitioned Parquet table when files contain partition column (Soumyakanti Das reviewed by Stamatis Zampetakis, Aman Sinha, Alessandro Solimando)

Closes #3742
---
 data/files/parquet_partition/pcol=100/00_0 | Bin 0 -> 761 bytes
 data/files/parquet_partition/pcol=200/00_0 | Bin 0 -> 761 bytes
 data/files/parquet_partition/pcol=300/00_0 | Bin 0 -> 761 bytes
 .../org/apache/hadoop/hive/ql/exec/Utilities.java  |  29 ++
 .../apache/hadoop/hive/ql/io/HiveInputFormat.java  |   1 +
 .../ql/io/parquet/ParquetRecordReaderBase.java |  19 ++-
 .../queries/clientpositive/parquet_partition_col.q |  37 +
 .../llap/parquet_partition_col.q.out   |  61 +
 8 files changed, 146 insertions(+), 1 deletion(-)

diff --git a/data/files/parquet_partition/pcol=100/00_0 b/data/files/parquet_partition/pcol=100/00_0
new file mode 100644
index 000..fe3dc6a5288
Binary files /dev/null and b/data/files/parquet_partition/pcol=100/00_0 differ
diff --git a/data/files/parquet_partition/pcol=200/00_0 b/data/files/parquet_partition/pcol=200/00_0
new file mode 100644
index 000..4f9e6cf017c
Binary files /dev/null and b/data/files/parquet_partition/pcol=200/00_0 differ
diff --git a/data/files/parquet_partition/pcol=300/00_0 b/data/files/parquet_partition/pcol=300/00_0
new file mode 100644
index 000..a16616e8d3a
Binary files /dev/null and b/data/files/parquet_partition/pcol=300/00_0 differ
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
index b2c3fbbda1f..c205f2c974f 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
@@ -161,6 +161,7 @@ import org.apache.hadoop.hive.ql.plan.PartitionDesc;
 import org.apache.hadoop.hive.ql.plan.PlanUtils;
 import org.apache.hadoop.hive.ql.plan.ReduceWork;
 import org.apache.hadoop.hive.ql.plan.TableDesc;
+import org.apache.hadoop.hive.ql.plan.TableScanDesc;
 import org.apache.hadoop.hive.ql.secrets.URISecretSource;
 import org.apache.hadoop.hive.ql.session.SessionState;
 import org.apache.hadoop.hive.ql.stats.StatsFactory;
@@ -4273,6 +4274,34 @@ public final class Utilities {
 }
   }
 
+  /**
+   * Sets partition column names to the configuration, if there is available info in the operator.
+   */
+  public static void setPartitionColumnNames(Configuration conf, TableScanOperator tableScanOp) {
+TableScanDesc scanDesc = tableScanOp.getConf();
+Table metadata = scanDesc.getTableMetadata();
+if (metadata == null) {
+  return;
+}
+    List<FieldSchema> partCols = metadata.getPartCols();
+if (partCols != null && !partCols.isEmpty()) {
+      conf.set(serdeConstants.LIST_PARTITION_COLUMNS, MetaStoreUtils.getColumnNamesFromFieldSchema(partCols));
+}
+  }
+
+  /**
+   * Returns a list with partition column names present in the configuration,
+   * or empty if there is no such information available.
+   */
+  public static List<String> getPartitionColumnNames(Configuration conf) {
+String colNames = conf.get(serdeConstants.LIST_PARTITION_COLUMNS);
+if (colNames != null) {
+  return splitColNames(new ArrayList<>(), colNames);
+} else {
+  return Collections.emptyList();
+}
+  }
+
   /**
* Create row key and value object inspectors for reduce vectorization.
* The row object inspector used by ReduceWork needs to be a **standard**
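The two helpers added above amount to a round trip through a single configuration entry. A self-contained sketch follows (plain Java; a Map stands in for Hadoop's Configuration, and the key value and comma delimiter are illustrative assumptions, since the real code goes through serdeConstants.LIST_PARTITION_COLUMNS, MetaStoreUtils, and Hive's own splitColNames helper):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the set/get round trip for partition column names.
// The Map stands in for Hadoop's Configuration; key and delimiter
// are illustrative, not Hive's actual constants.
class PartitionColsDemo {
    static final String LIST_PARTITION_COLUMNS = "partition_columns";

    // Mirrors setPartitionColumnNames: store all names as one config value.
    static void setPartitionColumnNames(Map<String, String> conf, List<String> partCols) {
        if (partCols != null && !partCols.isEmpty()) {
            conf.put(LIST_PARTITION_COLUMNS, String.join(",", partCols));
        }
    }

    // Mirrors getPartitionColumnNames: split the value back, or return an
    // empty list when the information is absent.
    static List<String> getPartitionColumnNames(Map<String, String> conf) {
        String colNames = conf.get(LIST_PARTITION_COLUMNS);
        if (colNames == null) {
            return Collections.emptyList();
        }
        return new ArrayList<>(Arrays.asList(colNames.split(",")));
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        setPartitionColumnNames(conf, Arrays.asList("pcol", "dt"));
        System.out.println(getPartitionColumnNames(conf)); // [pcol, dt]
        System.out.println(getPartitionColumnNames(new HashMap<>())); // []
    }
}
```

The empty-list fallback matters: readers such as ParquetRecordReaderBase can unconditionally consult the list without null checks when no partition info was recorded.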
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java b/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java
index 6f38d680a86..de93573e303 100755
--- a/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java
@@ -918,6 +918,7 @@ public class HiveInputFormat
 }
 
 Utilities.addTableSchemaToConf(jobConf, tableScan);
+Utilities.setPartitionColumnNames(jobConf, tableScan);
 
 // construct column name list and types for reference by filter push down
 Utilities.setColumnNameList(jobConf, tableScan);
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ParquetRecordReaderBase.java b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ParquetRecordReaderBase.java
index a665c258

[hive] branch master updated: HIVE-26631: Remove unused Thrift config parameters login.timeout and exponential.backoff.slot.length (xiuzhu9527 reviewed by Stamatis Zampetakis)

2022-11-16 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new ae0cabffeaf HIVE-26631: Remove unused Thrift config parameters login.timeout and exponential.backoff.slot.length (xiuzhu9527 reviewed by Stamatis Zampetakis)
ae0cabffeaf is described below

commit ae0cabffeaf284a6d2ec13a6993c87770818fbb9
Author: xiuzhu9527 <1406823...@qq.com>
AuthorDate: Fri Oct 14 13:31:51 2022 +0800

HIVE-26631: Remove unused Thrift config parameters login.timeout and exponential.backoff.slot.length (xiuzhu9527 reviewed by Stamatis Zampetakis)

Closes #3672
---
 common/src/java/org/apache/hadoop/hive/conf/HiveConf.java  | 7 ---
 .../org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java | 4 
 2 files changed, 11 deletions(-)

diff --git a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
index 12688c3f0fe..27fdddc47c9 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
@@ -4092,13 +4092,6 @@ public class HiveConf extends Configuration {
 "Minimum number of Thrift worker threads"),
 
HIVE_SERVER2_THRIFT_MAX_WORKER_THREADS("hive.server2.thrift.max.worker.threads",
 500,
 "Maximum number of Thrift worker threads"),
-    HIVE_SERVER2_THRIFT_LOGIN_BEBACKOFF_SLOT_LENGTH(
-        "hive.server2.thrift.exponential.backoff.slot.length", "100ms",
-        new TimeValidator(TimeUnit.MILLISECONDS),
-        "Binary exponential backoff slot time for Thrift clients during login to HiveServer2,\n" +
-        "for retries until hitting Thrift client timeout"),
-    HIVE_SERVER2_THRIFT_LOGIN_TIMEOUT("hive.server2.thrift.login.timeout", "20s",
-        new TimeValidator(TimeUnit.SECONDS), "Timeout for Thrift clients during login to HiveServer2"),
 
HIVE_SERVER2_THRIFT_WORKER_KEEPALIVE_TIME("hive.server2.thrift.worker.keepalive.time",
 "60s",
 new TimeValidator(TimeUnit.SECONDS),
 "Keepalive time (in seconds) for an idle worker thread. When the number of workers exceeds min workers, " +
diff --git a/service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java b/service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java
index ec339da985f..8fc728573db 100644
--- a/service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java
+++ b/service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java
@@ -100,10 +100,6 @@ public class ThriftBinaryCLIService extends ThriftCLIService {
 
   // Server args
   int maxMessageSize = hiveConf.getIntVar(HiveConf.ConfVars.HIVE_SERVER2_THRIFT_MAX_MESSAGE_SIZE);
-  int requestTimeout = (int) hiveConf.getTimeVar(HiveConf.ConfVars.HIVE_SERVER2_THRIFT_LOGIN_TIMEOUT,
-      TimeUnit.SECONDS);
-  int beBackoffSlotLength = (int) hiveConf
-      .getTimeVar(HiveConf.ConfVars.HIVE_SERVER2_THRIFT_LOGIN_BEBACKOFF_SLOT_LENGTH, TimeUnit.MILLISECONDS);
   TThreadPoolServer.Args sargs = new TThreadPoolServer.Args(serverSocket).processorFactory(processorFactory)
       .transportFactory(transportFactory).protocolFactory(new TBinaryProtocol.Factory())
       .inputProtocolFactory(new TBinaryProtocol.Factory(true, true, maxMessageSize, maxMessageSize))



[calcite] branch main updated: Site: Add instructions to consult/update the JIRA release dashboard

2022-11-09 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/main by this push:
 new 6a9634ae3d Site: Add instructions to consult/update the JIRA release dashboard
6a9634ae3d is described below

commit 6a9634ae3dd089d9c3d6daf1c36ab832b07b0230
Author: Stamatis Zampetakis 
AuthorDate: Tue Nov 8 22:36:13 2022 +0100

Site: Add instructions to consult/update the JIRA release dashboard

Close apache/calcite#2963
---
 site/_docs/howto.md | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/site/_docs/howto.md b/site/_docs/howto.md
index eaee41f45b..2b2a0a950e 100644
--- a/site/_docs/howto.md
+++ b/site/_docs/howto.md
@@ -713,6 +713,9 @@ Note: release artifacts (dist.apache.org and repository.apache.org) are managed
 
 Before you start:
 
+* Consult the [release dashboard](https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12333950) to get a
+ quick overview about the state of the release and take appropriate actions in order to resolve pending tickets or
+ move them to another release/backlog.
 * Send an email to [d...@calcite.apache.org](mailto:d...@calcite.apache.org) notifying that RC build process
   is starting and therefore `main` branch is in code freeze until further notice.
 * Set up signing keys as described above.
@@ -981,7 +984,9 @@ with a change comment
 (fill in release number and date appropriately).
 Uncheck "Send mail for this update". Under the [releases tab](https://issues.apache.org/jira/projects/CALCITE?selectedItem=com.atlassian.jira.jira-projects-plugin%3Arelease-page=released-unreleased)
 of the Calcite project mark the release X.Y.Z as released. If it does not already exist create also
-a new version (e.g., X.Y+1.Z) for the next release.
+a new version (e.g., X.Y+1.Z) for the next release. In order to make the [release dashboard](https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12333950)
+reflect state of the next release, change the fixVersion in the [JIRA filter powering the dashboard](https://issues.apache.org/jira/issues/?filter=12346388)
+and save the changes.
 
 After 24 hours, announce the release by sending an email to
[annou...@apache.org](https://mail-archives.apache.org/mod_mbox/www-announce/) using an `@apache.org`



[hive-site] branch main updated: HIVE-26690: Redirect hive-site notifications to the appropriate mailing lists

2022-11-02 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/hive-site.git


The following commit(s) were added to refs/heads/main by this push:
 new 4e83639  HIVE-26690: Redirect hive-site notifications to the appropriate mailing lists
4e83639 is described below

commit 4e836390e842792a2b72343235be0dfc1a49eb62
Author: Stamatis Zampetakis 
AuthorDate: Wed Nov 2 14:35:11 2022 +0100

HIVE-26690: Redirect hive-site notifications to the appropriate mailing lists
---
 .asf.yaml | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/.asf.yaml b/.asf.yaml
index e94e8f2..b540894 100644
--- a/.asf.yaml
+++ b/.asf.yaml
@@ -4,3 +4,9 @@ publish:
 jekyll:
   whoami: main
   target: asf-site
+
+notifications:
+  commits:  commits@hive.apache.org
+  issues:   git...@hive.apache.org
+  pullrequests: git...@hive.apache.org
+  jira_options: link label worklog



[calcite] branch main updated: [CALCITE-5314] Prune empty parts of a query by exploiting stats/metadata

2022-10-26 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/main by this push:
 new c945b7f49b [CALCITE-5314] Prune empty parts of a query by exploiting stats/metadata
c945b7f49b is described below

commit c945b7f49b99538748c871557f6ac80957be2b6e
Author: Hanumath Maduri 
AuthorDate: Sun Oct 9 08:56:00 2022 -0700

[CALCITE-5314] Prune empty parts of a query by exploiting stats/metadata

Close apache/calcite#2935
---
 .../java/org/apache/calcite/plan/RelOptRules.java  |  1 +
 .../apache/calcite/rel/rules/PruneEmptyRules.java  | 45 +++
 .../apache/calcite/sql/test/SqlAdvisorTest.java|  1 +
 .../org/apache/calcite/test/RelOptRulesTest.java   | 26 +
 .../org/apache/calcite/test/RelOptRulesTest.xml| 44 +++
 .../calcite/test/catalog/MockCatalogReader.java| 65 +++---
 .../test/catalog/MockCatalogReaderSimple.java  |  8 +++
 7 files changed, 183 insertions(+), 7 deletions(-)

diff --git a/core/src/main/java/org/apache/calcite/plan/RelOptRules.java b/core/src/main/java/org/apache/calcite/plan/RelOptRules.java
index 760cd0649e..a875ce85d5 100644
--- a/core/src/main/java/org/apache/calcite/plan/RelOptRules.java
+++ b/core/src/main/java/org/apache/calcite/plan/RelOptRules.java
@@ -103,6 +103,7 @@ public class RelOptRules {
   PruneEmptyRules.JOIN_LEFT_INSTANCE,
   PruneEmptyRules.JOIN_RIGHT_INSTANCE,
   PruneEmptyRules.SORT_FETCH_ZERO_INSTANCE,
+  PruneEmptyRules.EMPTY_TABLE_INSTANCE,
   CoreRules.UNION_MERGE,
   CoreRules.INTERSECT_MERGE,
   CoreRules.MINUS_MERGE,
diff --git a/core/src/main/java/org/apache/calcite/rel/rules/PruneEmptyRules.java b/core/src/main/java/org/apache/calcite/rel/rules/PruneEmptyRules.java
index a5977f5ae6..34780e1cf3 100644
--- a/core/src/main/java/org/apache/calcite/rel/rules/PruneEmptyRules.java
+++ b/core/src/main/java/org/apache/calcite/rel/rules/PruneEmptyRules.java
@@ -33,6 +33,7 @@ import org.apache.calcite.rel.core.JoinRelType;
 import org.apache.calcite.rel.core.Minus;
 import org.apache.calcite.rel.core.Project;
 import org.apache.calcite.rel.core.Sort;
+import org.apache.calcite.rel.core.TableScan;
 import org.apache.calcite.rel.core.Union;
 import org.apache.calcite.rel.core.Values;
 import org.apache.calcite.rel.logical.LogicalValues;
@@ -165,6 +166,19 @@ public abstract class PruneEmptyRules {
 return false;
   }
 
+  /**
+   * Rule that converts a {@link org.apache.calcite.rel.core.TableScan}
+   * to empty if the table has no rows in it.
+   *
+   * The rule exploits the {@link org.apache.calcite.rel.metadata.RelMdMaxRowCount} to derive if
+   * the table is empty or not.
+   */
+  public static final RelOptRule EMPTY_TABLE_INSTANCE =
+  ImmutableZeroMaxRowsRuleConfig.of()
+  .withOperandSupplier(b0 -> b0.operand(TableScan.class).noInputs())
+  .withDescription("PruneZeroRowsTable")
+  .toRule();
+
   /**
* Rule that converts a {@link org.apache.calcite.rel.core.Project}
* to empty if its child is empty.
@@ -540,4 +554,35 @@ public abstract class PruneEmptyRules {
   };
 }
   }
+
+  /** Configuration for rule that transforms an empty relational expression into an empty values.
+   *
+   * It relies on {@link org.apache.calcite.rel.metadata.RelMdMaxRowCount} to derive if the relation
+   * is empty or not. If the stats are not available then the rule is a noop. */
+  @Value.Immutable
+  public interface ZeroMaxRowsRuleConfig extends PruneEmptyRule.Config {
+
+@Override default PruneEmptyRule toRule() {
+  return new PruneEmptyRule(this) {
+@Override public boolean matches(RelOptRuleCall call) {
+  RelNode node = call.rel(0);
+  Double maxRowCount = call.getMetadataQuery().getMaxRowCount(node);
+  return maxRowCount != null && maxRowCount == 0.0;
+}
+
+@Override public void onMatch(RelOptRuleCall call) {
+  RelNode node = call.rel(0);
+  RelNode emptyValues = call.builder().push(node).empty().build();
+  RelTraitSet traits = node.getTraitSet();
+  // propagate all traits (except convention) from the original tableScan
+  // into the empty values
+  if (emptyValues.getConvention() != null) {
+traits = traits.replace(emptyValues.getConvention());
+  }
+  emptyValues = emptyValues.copy(traits, Collections.emptyList());
+  call.transformTo(emptyValues);
+}
+  };
+}
+  }
 }
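The core of the rule is the three-way distinction in matches() above: metadata may prove emptiness (max row count 0), prove non-emptiness, or be unavailable. A stand-alone sketch of that guard (hypothetical class name; the real check lives in the anonymous PruneEmptyRule above):

```java
// Sketch of the matches() guard in ZeroMaxRowsRuleConfig: prune only when
// the metadata *proves* the relation is empty; missing stats (null) must
// leave the plan untouched, making the rule a noop.
class MaxRowCountDemo {
    static boolean isProvablyEmpty(Double maxRowCount) {
        // Null check first: unboxing a null Double would throw NPE.
        return maxRowCount != null && maxRowCount == 0.0;
    }

    public static void main(String[] args) {
        System.out.println(isProvablyEmpty(0.0));  // true
        System.out.println(isProvablyEmpty(null)); // false
        System.out.println(isProvablyEmpty(5.0));  // false
    }
}
```

Treating "no stats" the same as "non-empty" is the conservative choice: pruning is only sound when emptiness is guaranteed.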
diff --git a/core/src/test/java/org/apache/calcite/sql/test/SqlAdvisorTest.java b/core/src/test/java/org/apache/calcite/sql/test/SqlAdvisorTest.java
index 54032f3db1..9f47e6b28c 100644
--- a/core/src/test/java/org/apache/calcite/sql/test/SqlAdvisorTest.java
+++ b/core/src/test/java/org/apache/calcite/sql/test/SqlAdvi

[calcite-avatica] branch main updated: [CALCITE-3078] Move public lastDay method from Calcite to Avatica

2022-10-23 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/calcite-avatica.git


The following commit(s) were added to refs/heads/main by this push:
 new 0ea5d4f40 [CALCITE-3078] Move public lastDay method from Calcite to Avatica
0ea5d4f40 is described below

commit 0ea5d4f400afc15141076805afdc4a81d0375fc7
Author: Stamatis Zampetakis 
AuthorDate: Sun Oct 23 18:51:17 2022 +0200

[CALCITE-3078] Move public lastDay method from Calcite to Avatica

Close apache/calcite-avatica#185
---
 .../apache/calcite/avatica/util/DateTimeUtils.java | 13 
 .../apache/calcite/avatica/util/LastDayTest.java   | 77 ++
 2 files changed, 90 insertions(+)

diff --git a/core/src/main/java/org/apache/calcite/avatica/util/DateTimeUtils.java b/core/src/main/java/org/apache/calcite/avatica/util/DateTimeUtils.java
index 5995d22dc..a4fdb7f0a 100644
--- a/core/src/main/java/org/apache/calcite/avatica/util/DateTimeUtils.java
+++ b/core/src/main/java/org/apache/calcite/avatica/util/DateTimeUtils.java
@@ -1045,6 +1045,19 @@ public class DateTimeUtils {
 return DateTimeUtils.ymdToUnixDate(y0, m0, d0);
   }
 
+  /**
+   * SQL {@code LAST_DAY} function.
+   *
+   * @param date days since epoch
+   * @return days of the last day of the month since epoch
+   */
+  public static int lastDay(int date) {
+int y0 = (int) DateTimeUtils.unixDateExtract(TimeUnitRange.YEAR, date);
+int m0 = (int) DateTimeUtils.unixDateExtract(TimeUnitRange.MONTH, date);
+int last = lastDay(y0, m0);
+return DateTimeUtils.ymdToUnixDate(y0, m0, last);
+  }
+
   private static int lastDay(int y, int m) {
 switch (m) {
 case 2:
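The month-length computation that both lastDay overloads rely on can be sketched stand-alone (hypothetical class name; the real helper is the private lastDay(int y, int m) in DateTimeUtils shown above):

```java
// Stand-alone sketch of the month-length logic behind the private
// lastDay(y, m) helper in DateTimeUtils.
class LastDayDemo {
    static int lastDay(int y, int m) {
        switch (m) {
        case 2:
            // Gregorian leap-year rule decides between 28 and 29 days.
            boolean leap = (y % 4 == 0 && y % 100 != 0) || y % 400 == 0;
            return leap ? 29 : 28;
        case 4: case 6: case 9: case 11:
            return 30;
        default:
            return 31;
        }
    }

    public static void main(String[] args) {
        System.out.println(lastDay(2019, 2));  // 28
        System.out.println(lastDay(2020, 2));  // 29 (leap year)
        System.out.println(lastDay(2019, 6));  // 30
        System.out.println(lastDay(2019, 12)); // 31
    }
}
```

The public lastDay(int date) then just extracts year and month from the epoch-day value, asks this helper for the day, and converts back with ymdToUnixDate.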
diff --git a/core/src/test/java/org/apache/calcite/avatica/util/LastDayTest.java b/core/src/test/java/org/apache/calcite/avatica/util/LastDayTest.java
new file mode 100644
index 0..0ce6a009b
--- /dev/null
+++ b/core/src/test/java/org/apache/calcite/avatica/util/LastDayTest.java
@@ -0,0 +1,77 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.calcite.avatica.util;
+
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import java.util.Arrays;
+import java.util.Collection;
+
+import static org.apache.calcite.avatica.util.DateTimeUtils.dateStringToUnixDate;
+import static org.apache.calcite.avatica.util.DateTimeUtils.lastDay;
+import static org.apache.calcite.avatica.util.DateTimeUtils.unixDateToString;
+
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Tests for {@code lastDay} methods in {@link DateTimeUtils}.
+ */
+@RunWith(Parameterized.class)
+public class LastDayTest {
+
+  @Parameterized.Parameters(name = "{0}")
+  public static Collection<Object[]> data() {
+return Arrays.asList(new Object[][]{
+{"2019-02-10", "2019-02-28"},
+{"2019-06-10", "2019-06-30"},
+{"2019-07-10", "2019-07-31"},
+{"2019-09-10", "2019-09-30"},
+{"2019-12-10", "2019-12-31"},
+{"9999-12-10", "9999-12-31"},
+{"1900-01-01", "1900-01-31"},
+{"1935-02-01", "1935-02-28"},
+{"1965-09-01", "1965-09-30"},
+{"1970-01-01", "1970-01-31"},
+{"2019-02-28", "2019-02-28"},
+{"2019-12-31", "2019-12-31"},
+{"2019-01-01", "2019-01-31"},
+{"2019-06-30", "2019-06-30"},
+{"2020-02-20", "2020-02-29"},
+{"2020-02-29", "2020-02-29"},
+{"9999-12-31", "9999-12-31"}
+});
+  }
+
+
+  private final String inputDate;
+  private final String expectedDay;
+
+  public LastDayTest(String inputDate, String expectedDay) {
+this.inputDate = inputDate;
+this.expectedDay = expectedDay;
+  }
+
+  @Test
+  public void testLastDayFromDateReturnsExpectedDay() {
+int lastDayFromDate = lastDay(dateStringToUnixDate(inputDate));
+assertEquals(expectedDay, unixDateToString(lastDayFromDate));
+  }
+
+}
+// End LastDayTest.java



[hive] branch master updated: HIVE-26612: INT64 Parquet timestamps cannot be read into BIGINT Hive type (Steve Carlin reviewed by Stamatis Zampetakis)

2022-10-21 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new f3e3c91b882 HIVE-26612: INT64 Parquet timestamps cannot be read into BIGINT Hive type (Steve Carlin reviewed by Stamatis Zampetakis)
f3e3c91b882 is described below

commit f3e3c91b882c442ffd872de998f8db40f7bff162
Author: Steve Carlin 
AuthorDate: Fri Oct 7 16:06:33 2022 -0700

HIVE-26612: INT64 Parquet timestamps cannot be read into BIGINT Hive type (Steve Carlin reviewed by Stamatis Zampetakis)

Although HIVE-23345 claims to have fixed the exact same problem,
that's not true. HIVE-23345 was sufficient to fix the conversion of
INT96 Parquet timestamps to BIGINT but not for INT64.

The code in EINT64_TIMESTAMP_CONVERTER (handles INT64 timestamp) for
converting to BIGINT (convert(Binary)) is never called in the
production code path.

The EINT64_TIMESTAMP_CONVERTER always reads data from a primitive INT64
type so we should always use PrimitiveConverter#addLong method.

There were some tests in TestETypesConverterTest artificially hitting
the convert(binary) code path but these are wrong since Parquet writers
(inside or outside HIVE) never write INT64 timestamps as binaries;
they read/write longs. The tests were renamed to better reflect their
purpose and those targeting the INT64 timestamp type were modified to
use long objects.

Closes #3651
---
 data/files/hive_26612.parquet  | Bin 0 -> 769 bytes
 .../hive/ql/io/parquet/convert/ETypeConverter.java |  23 ++
 .../ql/io/parquet/convert/TestETypeConverter.java  |  10 +++-
 .../parquet_int64_timestamp_to_bigint.q|  22 +
 .../llap/parquet_int64_timestamp_to_bigint.q.out   |  26 +
 5 files changed, 64 insertions(+), 17 deletions(-)

diff --git a/data/files/hive_26612.parquet b/data/files/hive_26612.parquet
new file mode 100644
index 000..9a42d906807
Binary files /dev/null and b/data/files/hive_26612.parquet differ
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java
index 28207714e3c..4c3ab70958e 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java
@@ -749,18 +749,21 @@ public enum ETypeConverter {
 TypeInfo hiveTypeInfo) {
   if (hiveTypeInfo != null) {
 String typeName = 
TypeInfoUtils.getBaseName(hiveTypeInfo.getTypeName());
+final long min = getMinValue(type, typeName, Long.MIN_VALUE);
+final long max = getMaxValue(typeName, Long.MAX_VALUE);
+
 switch (typeName) {
-  case serdeConstants.BIGINT_TYPE_NAME:
-return new BinaryConverter(type, parent, index) {
-  @Override
-  protected LongWritable convert(Binary binary) {
-Preconditions.checkArgument(binary.length() == 8, "Must be 8 
bytes");
-ByteBuffer buf = binary.toByteBuffer();
-buf.order(ByteOrder.LITTLE_ENDIAN);
-long longVal = buf.getLong();
-return new LongWritable(longVal);
+case serdeConstants.BIGINT_TYPE_NAME:
+  return new PrimitiveConverter() {
+@Override
+public void addLong(long value) {
+  if ((value >= min) && (value <= max)) {
+parent.set(index, new LongWritable(value));
+  } else {
+parent.set(index, null);
   }
-};
+}
+  };
 }
   }
   return new PrimitiveConverter() {
diff --git 
a/ql/src/test/org/apache/hadoop/hive/ql/io/parquet/convert/TestETypeConverter.java
 
b/ql/src/test/org/apache/hadoop/hive/ql/io/parquet/convert/TestETypeConverter.java
index fcfb5c7782c..cf6444c9c04 100644
--- 
a/ql/src/test/org/apache/hadoop/hive/ql/io/parquet/convert/TestETypeConverter.java
+++ 
b/ql/src/test/org/apache/hadoop/hive/ql/io/parquet/convert/TestETypeConverter.java
@@ -116,23 +116,19 @@ public class TestETypeConverter {
   }
 
   @Test
-  public void testGetSmallBigIntConverter() {
+  public void testGetInt64TimestampConverterBigIntHiveType() {
 Timestamp timestamp = Timestamp.valueOf("1998-10-03 09:58:31.231");
 long msTime = timestamp.toEpochMilli();
-ByteBuffer buf = ByteBuffer.allocate(12);
-buf.order(ByteOrder.LITTLE_ENDIAN);
-buf.putLong(msTime);
-buf.flip();
 // Need TimeStamp logicalType annotation here
 PrimitiveType primitiveType = createInt64TimestampType(false, 
TimeUnit.MILLIS);
-Writable writable = 
getWritableFromBinaryConverter(create
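The fix described above can be illustrated without Parquet or Hive on the classpath. The sketch below is a hypothetical plain-JDK miniature (the helper names `readLittleEndianLong` and `toBigint` are invented for illustration): the first mirrors the removed convert(Binary) path, which decoded an 8-byte little-endian payload, while the second mirrors the new addLong path that uses the raw INT64 value directly and range-checks it against the target Hive type.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.time.Instant;

public class Int64TimestampSketch {

    // Mirrors the removed convert(Binary) logic: decode an 8-byte
    // little-endian payload into a long.
    static long readLittleEndianLong(byte[] eightBytes) {
        ByteBuffer buf = ByteBuffer.wrap(eightBytes);
        buf.order(ByteOrder.LITTLE_ENDIAN);
        return buf.getLong();
    }

    // Mirrors the new addLong logic: use the raw INT64 value directly,
    // returning null when it falls outside the target type's [min, max].
    static Long toBigint(long value, long min, long max) {
        return (value >= min && value <= max) ? value : null;
    }

    public static void main(String[] args) {
        long msTime = Instant.parse("1998-10-03T09:58:31.231Z").toEpochMilli();

        // The old tests had to serialize the long to bytes first...
        ByteBuffer buf = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN);
        buf.putLong(msTime);
        long decoded = readLittleEndianLong(buf.array());

        // ...but Parquet hands INT64 timestamps to the converter as longs,
        // so the binary round-trip never happens in production.
        System.out.println(decoded == msTime);                                // true
        System.out.println(toBigint(msTime, Long.MIN_VALUE, Long.MAX_VALUE)); // 907408711231
        System.out.println(toBigint(msTime, 0L, 1000L));                      // null
    }
}
```

The range check is what lets the converter return NULL instead of a corrupt value when the timestamp does not fit the narrower Hive type.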

[hive] 02/03: HIVE-26642: Replace HiveFilterMergeRule with Calcite's built-in implementation (Stamatis Zampetakis reviewed by Krisztian Kasa)

2022-10-19 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 4ae2dae62398f319f1daa0cea77edd88b6c8f004
Author: Stamatis Zampetakis 
AuthorDate: Mon Oct 17 15:24:34 2022 +0200

HIVE-26642: Replace HiveFilterMergeRule with Calcite's built-in 
implementation (Stamatis Zampetakis reviewed by Krisztian Kasa)

Closes #3678
---
 .../hadoop/hive/ql/optimizer/calcite/Bug.java  |  6 ---
 .../calcite/rules/HiveFilterMergeRule.java | 59 --
 .../hadoop/hive/ql/parse/CalcitePlanner.java   |  8 ++-
 3 files changed, 6 insertions(+), 67 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/Bug.java 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/Bug.java
index dc984861e3f..32f8cff74b9 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/Bug.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/Bug.java
@@ -34,12 +34,6 @@ public final class Bug {
* Whether <a href="https://issues.apache.org/jira/browse/CALCITE-1851">CALCITE-1851</a> is fixed.
*/
   public static final boolean CALCITE_1851_FIXED = false;
-  
-  /**
-   * Whether <a href="https://issues.apache.org/jira/browse/CALCITE-3982">issue
-   * CALCITE-3982</a> is fixed.
-   */
-  public static final boolean CALCITE_3982_FIXED = false;
 
   /**
* Whether https://issues.apache.org/jira/browse/CALCITE-4166;>issue
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveFilterMergeRule.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveFilterMergeRule.java
deleted file mode 100644
index 4f820af7563..000
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveFilterMergeRule.java
+++ /dev/null
@@ -1,59 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hive.ql.optimizer.calcite.rules;
-
-import org.apache.calcite.plan.RelOptRule;
-import org.apache.calcite.plan.RelOptRuleCall;
-import org.apache.calcite.tools.RelBuilder;
-import org.apache.hadoop.hive.ql.optimizer.calcite.Bug;
-import org.apache.hadoop.hive.ql.optimizer.calcite.HiveRelFactories;
-import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveFilter;
-
-/**
- * Mostly a copy of {@link org.apache.calcite.rel.rules.FilterMergeRule}.
- * However, it relies in relBuilder to create the new condition and thus
- * simplifies/flattens the predicate before creating the new filter.
- */
-public class HiveFilterMergeRule extends RelOptRule {
-
-  public static final HiveFilterMergeRule INSTANCE =
-  new HiveFilterMergeRule();
-
-  /** Private constructor. */
-  private HiveFilterMergeRule() {
-super(operand(HiveFilter.class,
-operand(HiveFilter.class, any())),
-HiveRelFactories.HIVE_BUILDER, null);
-if (Bug.CALCITE_3982_FIXED) {
-  throw new AssertionError("Remove logic in HiveFilterMergeRule when 
[CALCITE-3982] "
-  + "has been fixed and use directly Calcite's FilterMergeRule 
instead.");
-}
-  }
-
-  //~ Methods 
-
-  public void onMatch(RelOptRuleCall call) {
-final HiveFilter topFilter = call.rel(0);
-final HiveFilter bottomFilter = call.rel(1);
-
-final RelBuilder relBuilder = call.builder();
-relBuilder.push(bottomFilter.getInput())
-.filter(bottomFilter.getCondition(), topFilter.getCondition());
-
-call.transformTo(relBuilder.build());
-  }
-}
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java 
b/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
index 0683586e86c..5a6d256cb20 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
@@ -90,6 +90,8 @@ import 
org.apache.calcite.rel.metadata.ChainedRelMetadataProvider;
 import org.apache.calcite.rel.metadata.JaninoRelMetadataProvider;
 import org.apache.calcite.rel.metadata.RelMetadataProvider;
 import org.apache.calcite.rel.metadata.RelMetadataQuery;
+impor
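Conceptually, the FilterMergeRule adopted above collapses Filter(top) over Filter(bottom) into a single filter whose condition is the conjunction of both, built through the RelBuilder so the predicate gets simplified/flattened. A hypothetical plain-JDK miniature of that idea, with java.util.function.Predicate standing in for RexNode conditions:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class FilterMergeSketch {

    // Filter(top) over Filter(bottom) collapses into one filter whose
    // condition is (bottom AND top), applied to the bottom filter's input.
    static List<Integer> applyMerged(List<Integer> rows) {
        Predicate<Integer> bottom = x -> x > 0;   // inner filter's condition
        Predicate<Integer> top = x -> x < 10;     // outer filter's condition
        Predicate<Integer> merged = bottom.and(top);
        return rows.stream().filter(merged).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(applyMerged(List.of(-5, 3, 7, 42)));  // [3, 7]
    }
}
```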

[hive] 03/03: HIVE-26643: HiveUnionPullUpConstantsRule produces an invalid plan when pulling up constants for nullable fields (Alessandro Solimando reviewed by Stamatis Zampetakis)

2022-10-19 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 6b05d64ce8c7161415d97a7896ea50025322e30a
Author: Alessandro Solimando 
AuthorDate: Mon Oct 17 17:12:18 2022 +0200

HIVE-26643: HiveUnionPullUpConstantsRule produces an invalid plan when 
pulling up constants for nullable fields (Alessandro Solimando reviewed by 
Stamatis Zampetakis)

Closes #3680
---
 .../hadoop/hive/ql/optimizer/calcite/Bug.java  |   5 +
 .../rules/HiveUnionPullUpConstantsRule.java|  27 ++--
 .../rules/TestHiveUnionPullUpConstantsRule.java| 178 +
 3 files changed, 199 insertions(+), 11 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/Bug.java 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/Bug.java
index 32f8cff74b9..91877060ad0 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/Bug.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/Bug.java
@@ -78,4 +78,9 @@ public final class Bug {
* Whether <a href="https://issues.apache.org/jira/browse/CALCITE-5294">CALCITE-5294</a> is fixed.
*/
   public static final boolean CALCITE_5294_FIXED = false;
+
+  /**
+   * Whether <a href="https://issues.apache.org/jira/browse/CALCITE-5337">CALCITE-5337</a> is fixed.
+   */
+  public static final boolean CALCITE_5337_FIXED = false;
 }
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveUnionPullUpConstantsRule.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveUnionPullUpConstantsRule.java
index 10d8c1362e7..dcb18072d06 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveUnionPullUpConstantsRule.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveUnionPullUpConstantsRule.java
@@ -38,6 +38,7 @@ import org.apache.calcite.tools.RelBuilderFactory;
 import org.apache.calcite.util.ImmutableBitSet;
 import org.apache.calcite.util.Pair;
 import org.apache.calcite.util.mapping.Mappings;
+import org.apache.hadoop.hive.ql.optimizer.calcite.Bug;
 import org.apache.hadoop.hive.ql.optimizer.calcite.HiveRelFactories;
 import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveUnion;
 import org.slf4j.Logger;
@@ -66,6 +67,10 @@ public class HiveUnionPullUpConstantsRule extends RelOptRule 
{
 
   @Override
   public void onMatch(RelOptRuleCall call) {
+if (Bug.CALCITE_5337_FIXED) {
+  throw new IllegalStateException("Class redundant when the fix for 
CALCITE-5337 is merged into Calcite");
+}
+
 final Union union = call.rel(0);
 
 final int count = union.getRowType().getFieldCount();
@@ -82,13 +87,11 @@ public class HiveUnionPullUpConstantsRule extends 
RelOptRule {
   return;
 }
 
-    Map<RexNode, RexNode> conditionsExtracted =
-        RexUtil.predicateConstants(RexNode.class, rexBuilder, predicates.pulledUpPredicates);
     Map<RexNode, RexNode> constants = new HashMap<>();
 for (int i = 0; i < count ; i++) {
   RexNode expr = rexBuilder.makeInputRef(union, i);
-  if (conditionsExtracted.containsKey(expr)) {
-constants.put(expr, conditionsExtracted.get(expr));
+  if (predicates.constantMap.containsKey(expr)) {
+constants.put(expr, predicates.constantMap.get(expr));
   }
 }
 
@@ -107,7 +110,11 @@ public class HiveUnionPullUpConstantsRule extends 
RelOptRule {
   RexNode expr = rexBuilder.makeInputRef(union, i);
   RelDataTypeField field = fields.get(i);
   if (constants.containsKey(expr)) {
-topChildExprs.add(constants.get(expr));
+if (constants.get(expr).getType().equals(field.getType())) {
+  topChildExprs.add(constants.get(expr));
+} else {
+  topChildExprs.add(rexBuilder.makeCast(field.getType(), 
constants.get(expr), true));
+}
 topChildExprsFields.add(field.getName());
   } else {
 topChildExprs.add(expr);
@@ -128,16 +135,14 @@ public class HiveUnionPullUpConstantsRule extends 
RelOptRule {
 for (int i = 0; i < union.getInputs().size() ; i++) {
   RelNode input = union.getInput(i);
  List<Pair<RexNode, String>> newChildExprs = new ArrayList<>();
-  for (int j = 0; j < refsIndex.cardinality(); j++ ) {
+  for (int j = 0; j < refsIndex.cardinality(); j++) {
 int pos = refsIndex.nth(j);
-newChildExprs.add(
-Pair.of(rexBuilder.makeInputRef(input, pos),
-input.getRowType().getFieldList().get(pos).getName()));
+newChildExprs.add(Pair.of(rexBuilder.makeInputRef(input, pos),
+input.getRowType().getFieldList().get(pos).getName()));
   }
   if (newChildExprs.isEmpty()) {
 // At least a single item in project is required.
-newChildExprs.add(Pair.of(
-topChildExprs.get(0), topChildExprsFields.get(0)));
+newChildExprs.add(Pair.of(topCh
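The core of the fix is visible in the hunk above: a pulled-up constant is reused as-is only when its type matches the Union's output field exactly; otherwise it is wrapped in a cast (e.g. a NOT NULL literal cast to the nullable field type), so the rewritten plan still validates. A hypothetical plain-JDK miniature of that decision (SimpleType is an invented stand-in for RelDataType):

```java
import java.util.Objects;

public class PullUpConstantCastSketch {

    // Invented stand-in for RelDataType: a type name plus nullability.
    static final class SimpleType {
        final String name;
        final boolean nullable;

        SimpleType(String name, boolean nullable) {
            this.name = name;
            this.nullable = nullable;
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof SimpleType)) {
                return false;
            }
            SimpleType other = (SimpleType) o;
            return name.equals(other.name) && nullable == other.nullable;
        }

        @Override
        public int hashCode() {
            return Objects.hash(name, nullable);
        }
    }

    // Mirrors the patched branch: reuse the constant only on an exact type
    // match, otherwise emit a cast to the field's (possibly nullable) type.
    static String topChildExpr(String constant, SimpleType constType, SimpleType fieldType) {
        if (constType.equals(fieldType)) {
            return constant;
        }
        return "CAST(" + constant + " AS " + fieldType.name
                + (fieldType.nullable ? "" : " NOT NULL") + ")";
    }

    public static void main(String[] args) {
        SimpleType notNullInt = new SimpleType("INTEGER", false);
        SimpleType nullableInt = new SimpleType("INTEGER", true);

        // A literal is NOT NULL, but the union field is nullable: a cast is
        // needed, otherwise the produced plan has mismatched row types.
        System.out.println(topChildExpr("1", notNullInt, nullableInt)); // CAST(1 AS INTEGER)
        System.out.println(topChildExpr("1", notNullInt, notNullInt)); // 1
    }
}
```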

[hive] branch master updated (718df0a7e4f -> 6b05d64ce8c)

2022-10-19 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


from 718df0a7e4f HIVE-26639: ConstantVectorExpression shouldn't rely on 
default charset (#3675) (Laszlo Bodor reviewed by Ayush Saxena)
 new 5a2b42982ad HIVE-26638: Replace in-house CBO reduce expressions rules 
with Calcite's built-in classes (Stamatis Zampetakis reviewed by Krisztian Kasa)
 new 4ae2dae6239 HIVE-26642: Replace HiveFilterMergeRule with Calcite's 
built-in implementation (Stamatis Zampetakis reviewed by Krisztian Kasa)
 new 6b05d64ce8c HIVE-26643: HiveUnionPullUpConstantsRule produces an 
invalid plan when pulling up constants for nullable fields (Alessandro 
Solimando reviewed by Stamatis Zampetakis)

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../hadoop/hive/ql/optimizer/calcite/Bug.java  |  11 +-
 .../calcite/rules/HiveFilterMergeRule.java |  59 --
 .../calcite/rules/HiveReduceExpressionsRule.java   | 220 +++--
 .../rules/HiveUnionPullUpConstantsRule.java|  27 +--
 .../hadoop/hive/ql/parse/CalcitePlanner.java   |   8 +-
 .../rules/TestHiveUnionPullUpConstantsRule.java| 178 +
 .../clientpositive/llap/acid_nullscan.q.out|   4 +-
 7 files changed, 239 insertions(+), 268 deletions(-)
 delete mode 100644 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveFilterMergeRule.java
 create mode 100644 
ql/src/test/org/apache/hadoop/hive/ql/optimizer/calcite/rules/TestHiveUnionPullUpConstantsRule.java



[hive] 01/03: HIVE-26638: Replace in-house CBO reduce expressions rules with Calcite's built-in classes (Stamatis Zampetakis reviewed by Krisztian Kasa)

2022-10-19 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 5a2b42982adeca506daf5bec435dfc51b4522638
Author: Stamatis Zampetakis 
AuthorDate: Wed Oct 12 14:57:06 2022 +0200

HIVE-26638: Replace in-house CBO reduce expressions rules with Calcite's 
built-in classes (Stamatis Zampetakis reviewed by Krisztian Kasa)

Closes #3666
---
 .../calcite/rules/HiveReduceExpressionsRule.java   | 220 +++--
 .../clientpositive/llap/acid_nullscan.q.out|   4 +-
 2 files changed, 34 insertions(+), 190 deletions(-)

diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveReduceExpressionsRule.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveReduceExpressionsRule.java
index 5545ae46616..0521dc3e32a 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveReduceExpressionsRule.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveReduceExpressionsRule.java
@@ -16,31 +16,13 @@
  */
 package org.apache.hadoop.hive.ql.optimizer.calcite.rules;
 
-import java.util.List;
-
-import org.apache.calcite.plan.RelOptPredicateList;
-import org.apache.calcite.plan.RelOptRuleCall;
-import org.apache.calcite.rel.RelNode;
-import org.apache.calcite.rel.core.Filter;
-import org.apache.calcite.rel.metadata.RelMetadataQuery;
+import org.apache.calcite.plan.RelOptRule;
 import org.apache.calcite.rel.rules.ReduceExpressionsRule;
-import org.apache.calcite.rex.RexCall;
-import org.apache.calcite.rex.RexInputRef;
-import org.apache.calcite.rex.RexNode;
-import org.apache.calcite.rex.RexUtil;
-import org.apache.calcite.sql.SqlKind;
-import org.apache.calcite.sql.type.SqlTypeName;
-import org.apache.calcite.tools.RelBuilder;
-import org.apache.calcite.tools.RelBuilderFactory;
 import org.apache.hadoop.hive.ql.optimizer.calcite.HiveRelFactories;
 import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveFilter;
 import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveJoin;
 import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveProject;
 import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveSemiJoin;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import com.google.common.collect.Lists;
 
 /**
  * Collection of planner rules that apply various simplifying transformations 
on
@@ -53,196 +35,58 @@ import com.google.common.collect.Lists;
  * is the same as the type of the resulting cast expression
  * 
  */
-public abstract class HiveReduceExpressionsRule extends ReduceExpressionsRule {
-
-  protected static final Logger LOG = 
LoggerFactory.getLogger(HiveReduceExpressionsRule.class);
+public final class HiveReduceExpressionsRule {
 
-  //~ Static fields/initializers -
+  private HiveReduceExpressionsRule() {
+throw new IllegalStateException("Instantiation not allowed");
+  }
 
   /**
* Singleton rule that reduces constants inside a
* {@link 
org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveFilter}.
*/
-  public static final ReduceExpressionsRule FILTER_INSTANCE =
-  new FilterReduceExpressionsRule(HiveFilter.class, 
HiveRelFactories.HIVE_BUILDER);
+  public static final RelOptRule FILTER_INSTANCE =
+  ReduceExpressionsRule.FilterReduceExpressionsRule.Config.DEFAULT
+  .withOperandFor(HiveFilter.class)
+  .withMatchNullability(false)
+  .withRelBuilderFactory(HiveRelFactories.HIVE_BUILDER)
+  .as(ReduceExpressionsRule.FilterReduceExpressionsRule.Config.class)
+  .toRule();
 
   /**
* Singleton rule that reduces constants inside a
* {@link 
org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveProject}.
*/
-  public static final ReduceExpressionsRule PROJECT_INSTANCE =
-  new ProjectReduceExpressionsRule(HiveProject.class, 
HiveRelFactories.HIVE_BUILDER);
+  public static final RelOptRule PROJECT_INSTANCE =
+  ReduceExpressionsRule.ProjectReduceExpressionsRule.Config.DEFAULT
+  .withOperandFor(HiveProject.class)
+  .withRelBuilderFactory(HiveRelFactories.HIVE_BUILDER)
+  .as(ReduceExpressionsRule.ProjectReduceExpressionsRule.Config.class)
+  .toRule();
 
   /**
* Singleton rule that reduces constants inside a
* {@link org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveJoin}.
*/
-  public static final ReduceExpressionsRule JOIN_INSTANCE =
-  new JoinReduceExpressionsRule(HiveJoin.class, false, 
HiveRelFactories.HIVE_BUILDER);
+  public static final RelOptRule JOIN_INSTANCE =
+  ReduceExpressionsRule.JoinReduceExpressionsRule.Config.DEFAULT
+  .withOperandFor(HiveJoin.class)
+  .withMatchNullability(false)
+  .withRelBuilderFactory(HiveRelFactories.HI
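The hunks above switch from subclassing to Calcite's immutable Config pattern: start from Config.DEFAULT, derive a new config with each with* call, then materialize the rule with toRule(). A hypothetical plain-JDK miniature of that builder style (all names invented for illustration; real Calcite configs are Immutables-generated interfaces):

```java
public class RuleConfigSketch {

    // Immutable config: every with* call returns a new instance, so the
    // shared DEFAULT constant can never be mutated by a caller.
    static final class Config {
        static final Config DEFAULT = new Config("Filter", true);

        final String operand;
        final boolean matchNullability;

        private Config(String operand, boolean matchNullability) {
            this.operand = operand;
            this.matchNullability = matchNullability;
        }

        Config withOperandFor(String operand) {
            return new Config(operand, matchNullability);
        }

        Config withMatchNullability(boolean matchNullability) {
            return new Config(operand, matchNullability);
        }

        // Materialize the rule (string-valued in this sketch).
        String toRule() {
            return "ReduceExpressionsRule(" + operand
                    + ", matchNullability=" + matchNullability + ")";
        }
    }

    public static void main(String[] args) {
        String filterInstance = Config.DEFAULT
                .withOperandFor("HiveFilter")
                .withMatchNullability(false)
                .toRule();
        System.out.println(filterInstance);
        // ReduceExpressionsRule(HiveFilter, matchNullability=false)
        System.out.println(Config.DEFAULT.toRule());
        // ReduceExpressionsRule(Filter, matchNullability=true)
    }
}
```

The payoff of the pattern is that downstream projects like Hive can customize operand classes and builder factories without copying the rule's implementation.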

[hive] 01/02: HIVE-26626: Cut dependencies between HiveXxPullUpConstantsRule and HiveReduceExpressionsRule (Stamatis Zampetakis reviewed by Krisztian Kasa)

2022-10-17 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 92f60df1829879a4aac3727d00b10b30244a45e4
Author: Stamatis Zampetakis 
AuthorDate: Wed Oct 12 14:50:40 2022 +0200

HIVE-26626: Cut dependencies between HiveXxPullUpConstantsRule and 
HiveReduceExpressionsRule (Stamatis Zampetakis reviewed by Krisztian Kasa)

Closes #3665
---
 .../hive/ql/optimizer/calcite/rules/HiveSortPullUpConstantsRule.java  | 4 ++--
 .../hive/ql/optimizer/calcite/rules/HiveUnionPullUpConstantsRule.java | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveSortPullUpConstantsRule.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveSortPullUpConstantsRule.java
index 5765ddf309e..5cf2eb6a6c6 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveSortPullUpConstantsRule.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveSortPullUpConstantsRule.java
@@ -125,8 +125,8 @@ public final class HiveSortPullUpConstantsRule {
 return;
   }
 
-      Map<RexNode, RexNode> conditionsExtracted = HiveReduceExpressionsRule.predicateConstants(
-          RexNode.class, rexBuilder, predicates);
+      Map<RexNode, RexNode> conditionsExtracted =
+          RexUtil.predicateConstants(RexNode.class, rexBuilder, predicates.pulledUpPredicates);
       Map<RexNode, RexNode> constants = new HashMap<>();
   for (int i = 0; i < count; i++) {
 RexNode expr = rexBuilder.makeInputRef(sortNode.getInput(), i);
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveUnionPullUpConstantsRule.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveUnionPullUpConstantsRule.java
index 10d718b9cc0..10d8c1362e7 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveUnionPullUpConstantsRule.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveUnionPullUpConstantsRule.java
@@ -82,8 +82,8 @@ public class HiveUnionPullUpConstantsRule extends RelOptRule {
   return;
 }
 
-    Map<RexNode, RexNode> conditionsExtracted = HiveReduceExpressionsRule.predicateConstants(
-        RexNode.class, rexBuilder, predicates);
+    Map<RexNode, RexNode> conditionsExtracted =
+        RexUtil.predicateConstants(RexNode.class, rexBuilder, predicates.pulledUpPredicates);
     Map<RexNode, RexNode> constants = new HashMap<>();
 for (int i = 0; i < count ; i++) {
   RexNode expr = rexBuilder.makeInputRef(union, i);



[hive] 02/02: HIVE-26627: Remove HiveRelBuilder.aggregateCall override and refactor callers to use existing public methods (Stamatis Zampetakis reviewed by Krisztian Kasa)

2022-10-17 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit b48c1bf11c4f75ba2c894e4732a96813ddde1414
Author: Stamatis Zampetakis 
AuthorDate: Thu Oct 6 12:35:59 2022 +0200

HIVE-26627: Remove HiveRelBuilder.aggregateCall override and refactor 
callers to use existing public methods (Stamatis Zampetakis reviewed by 
Krisztian Kasa)

Closes #3668
---
 .../hadoop/hive/ql/optimizer/calcite/HiveRelBuilder.java   |  8 
 .../calcite/rules/HiveRewriteToDataSketchesRules.java  | 14 ++
 2 files changed, 2 insertions(+), 20 deletions(-)

diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveRelBuilder.java 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveRelBuilder.java
index 04b3cae88fd..8722da515ae 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveRelBuilder.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveRelBuilder.java
@@ -53,8 +53,6 @@ import 
org.apache.hadoop.hive.ql.optimizer.calcite.functions.HiveSqlSumAggFuncti
 import 
org.apache.hadoop.hive.ql.optimizer.calcite.functions.HiveSqlSumEmptyIsZeroAggFunction;
 import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveFloorDate;
 
-import com.google.common.collect.ImmutableList;
-
 import java.util.ArrayList;
 import java.util.List;
 import java.util.Set;
@@ -241,10 +239,4 @@ public class HiveRelBuilder extends RelBuilder {
 }
   }
 
-  /** Make the method visible */
-  @Override
-  public AggCall aggregateCall(SqlAggFunction aggFunction, boolean distinct, boolean approximate, boolean ignoreNulls,
-      RexNode filter, ImmutableList<RexNode> orderKeys, String alias, ImmutableList<RexNode> operands) {
-    return super.aggregateCall(aggFunction, distinct, approximate, ignoreNulls, filter, orderKeys, alias, operands);
-  }
 }
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveRewriteToDataSketchesRules.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveRewriteToDataSketchesRules.java
index 1fceb65ccae..a726f10d810 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveRewriteToDataSketchesRules.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveRewriteToDataSketchesRules.java
@@ -53,7 +53,6 @@ import org.apache.calcite.sql.type.SqlTypeName;
 import org.apache.calcite.tools.RelBuilder;
 import org.apache.calcite.tools.RelBuilder.AggCall;
 import org.apache.hadoop.hive.ql.exec.DataSketchesFunctions;
-import org.apache.hadoop.hive.ql.optimizer.calcite.HiveRelBuilder;
 import org.apache.hadoop.hive.ql.optimizer.calcite.HiveRelFactories;
 import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveAggregate;
 import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveProject;
@@ -540,17 +539,8 @@ public final class HiveRewriteToDataSketchesRules {
 RexNode key = orderKey.getKey();
 key = rexBuilder.makeCast(getFloatType(), key);
 
-// @formatter:off
-AggCall aggCall = ((HiveRelBuilder) relBuilder).aggregateCall(
-(SqlAggFunction) 
getSqlOperator(DataSketchesFunctions.DATA_TO_SKETCH),
-/* distinct */ false,
-/* approximate */ false,
-/* ignoreNulls */ true,
-null,
-ImmutableList.of(),
-null,
-ImmutableList.of(key));
-// @formatter:on
+SqlAggFunction dataToSketchFunction = (SqlAggFunction) 
getSqlOperator(DataSketchesFunctions.DATA_TO_SKETCH);
+AggCall aggCall = relBuilder.aggregateCall(dataToSketchFunction, 
key).ignoreNulls(true);
 
 relBuilder.aggregate(relBuilder.groupKey(partitionKeys), aggCall);
 



[hive] branch master updated (cae2b1e06aa -> b48c1bf11c4)

2022-10-17 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


from cae2b1e06aa HIVE-26617: Remove some useless properties (#3658)
 new 92f60df1829 HIVE-26626: Cut dependencies between 
HiveXxPullUpConstantsRule and HiveReduceExpressionsRule (Stamatis Zampetakis 
reviewed by Krisztian Kasa)
 new b48c1bf11c4 HIVE-26627: Remove HiveRelBuilder.aggregateCall override 
and refactor callers to use existing public methods (Stamatis Zampetakis 
reviewed by Krisztian Kasa)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../hadoop/hive/ql/optimizer/calcite/HiveRelBuilder.java   |  8 
 .../calcite/rules/HiveRewriteToDataSketchesRules.java  | 14 ++
 .../calcite/rules/HiveSortPullUpConstantsRule.java |  4 ++--
 .../calcite/rules/HiveUnionPullUpConstantsRule.java|  4 ++--
 4 files changed, 6 insertions(+), 24 deletions(-)



[hive] branch master updated: HIVE-26619: Sonar analysis is not run for the master branch (Alessandro Solimando reviewed by Stamatis Zampetakis)

2022-10-12 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 8c3567ea8e4 HIVE-26619: Sonar analysis is not run for the master 
branch (Alessandro Solimando reviewed by Stamatis Zampetakis)
8c3567ea8e4 is described below

commit 8c3567ea8e423b202cde370f4d3fb401bcc23e46
Author: Alessandro Solimando 
AuthorDate: Mon Oct 10 18:23:14 2022 +0200

HIVE-26619: Sonar analysis is not run for the master branch (Alessandro 
Solimando reviewed by Stamatis Zampetakis)

Closes #3655
---
 Jenkinsfile | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index fab48ede662..d9c30510143 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -321,12 +321,12 @@ tar -xzf 
packaging/target/apache-hive-*-nightly-*-src.tar.gz
   }
   branches['sonar'] = {
   executorNode {
-  if(env.CHANGE_BRANCH == 'master') {
+  if(env.BRANCH_NAME == 'master') {
   stage('Prepare') {
   loadWS();
   }
   stage('Sonar') {
-  sonarAnalysis("-Dsonar.branch.name=${CHANGE_BRANCH}")
+  sonarAnalysis("-Dsonar.branch.name=${BRANCH_NAME}")
   }
   } else if(env.CHANGE_ID) {
   stage('Prepare') {
@@ -340,7 +340,7 @@ tar -xzf packaging/target/apache-hive-*-nightly-*-src.tar.gz
-Dsonar.pullrequest.provider=GitHub""")
   }
   } else {
-  echo "Skipping sonar analysis, we only run it on PRs and on the 
master branch"
+  echo "Skipping sonar analysis, we only run it on PRs and on the 
master branch, found ${env.BRANCH_NAME}"
   }
   }
   }



[hive] branch master updated: HIVE-26584: Cleanup dangling testdata for compressed_skip_header_footer_aggr/empty_skip_header_footer_aggr (John Sherman reviewed by Ayush Saxena, Stamatis Zampetakis)

2022-10-07 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 566f48d3d3f HIVE-26584: Cleanup dangling testdata for 
compressed_skip_header_footer_aggr/empty_skip_header_footer_aggr (John Sherman 
reviewed by Ayush Saxena, Stamatis Zampetakis)
566f48d3d3f is described below

commit 566f48d3d3fc740ef958bdf963e511e0853da402
Author: John Sherman 
AuthorDate: Sat Oct 1 09:56:09 2022 -0700

HIVE-26584: Cleanup dangling testdata for 
compressed_skip_header_footer_aggr/empty_skip_header_footer_aggr (John Sherman 
reviewed by Ayush Saxena, Stamatis Zampetakis)

Closes #3636
---
 .../clientpositive/compressed_skip_header_footer_aggr.q  | 12 +---
 .../queries/clientpositive/empty_skip_header_footer_aggr.q   | 12 +---
 .../llap/compressed_skip_header_footer_aggr.q.out|  1 +
 .../clientpositive/llap/empty_skip_header_footer_aggr.q.out  | 11 +--
 .../tez/compressed_skip_header_footer_aggr.q.out |  1 +
 .../clientpositive/tez/empty_skip_header_footer_aggr.q.out   | 11 +--
 6 files changed, 18 insertions(+), 30 deletions(-)

diff --git 
a/ql/src/test/queries/clientpositive/compressed_skip_header_footer_aggr.q 
b/ql/src/test/queries/clientpositive/compressed_skip_header_footer_aggr.q
index 58853bef859..9cca2f47f62 100644
--- a/ql/src/test/queries/clientpositive/compressed_skip_header_footer_aggr.q
+++ b/ql/src/test/queries/clientpositive/compressed_skip_header_footer_aggr.q
@@ -4,8 +4,7 @@ SET hive.explain.user=false;
 
 dfs ${system:test.dfs.mkdir} ${system:test.tmp.dir}/testcase1;
 dfs -copyFromLocal ../../data/files/compressed_4line_file1.csv  
${system:test.tmp.dir}/testcase1/;
---
---
+
 CREATE EXTERNAL TABLE `testcase1`(id int, name string) ROW FORMAT SERDE 
'org.apache.hadoop.hive.serde2.OpenCSVSerde'
   LOCATION '${system:test.tmp.dir}/testcase1'
   TBLPROPERTIES ("skip.header.line.count"="1", "skip.footer.line.count"="1");
@@ -97,4 +96,11 @@ select count(*) from testcase_gz;
 
 set hive.fetch.task.conversion=none;
 select * from testcase_gz;
-select count(*) from testcase_gz;
\ No newline at end of file
+select count(*) from testcase_gz;
+
+-- clean up testdata
+dfs -rmr ${system:test.tmp.dir}/testcase_gz;
+dfs -rmr ${system:test.tmp.dir}/testcase1/;
+dfs -rmr ${system:test.tmp.dir}/testcase2/;
+dfs -rmr ${system:test.tmp.dir}/testcase3/;
+dfs -rmr ${system:test.tmp.dir}/testcase4/;
diff --git a/ql/src/test/queries/clientpositive/empty_skip_header_footer_aggr.q 
b/ql/src/test/queries/clientpositive/empty_skip_header_footer_aggr.q
index 7d09ac48c1a..0406ef0a425 100644
--- a/ql/src/test/queries/clientpositive/empty_skip_header_footer_aggr.q
+++ b/ql/src/test/queries/clientpositive/empty_skip_header_footer_aggr.q
@@ -2,13 +2,10 @@ SET hive.query.results.cache.enabled=false;
 SET hive.mapred.mode=nonstrict;
 SET hive.explain.user=false;
 
-dfs ${system:test.dfs.mkdir} ${system:test.tmp.dir}/testcase1;
-dfs -rmr ${system:test.tmp.dir}/testcase1;
 dfs ${system:test.dfs.mkdir} ${system:test.tmp.dir}/testcase1;
 dfs -copyFromLocal ../../data/files/emptyhead_4line_file1.csv  
${system:test.tmp.dir}/testcase1/;
 --
 --
-DROP TABLE IF EXISTS `testcase1`;
 CREATE EXTERNAL TABLE `testcase1`(id int, name string) ROW FORMAT SERDE 
'org.apache.hadoop.hive.serde2.OpenCSVSerde'
   LOCATION '${system:test.tmp.dir}/testcase1'
   TBLPROPERTIES ("skip.header.line.count"="1");
@@ -25,13 +22,10 @@ set hive.fetch.task.conversion=none;
 select * from testcase1;
 select count(*) from testcase1;
 
-dfs ${system:test.dfs.mkdir} ${system:test.tmp.dir}/testcase2;
-dfs -rmr ${system:test.tmp.dir}/testcase2;
 dfs ${system:test.dfs.mkdir} ${system:test.tmp.dir}/testcase2;
 dfs -copyFromLocal ../../data/files/emptyhead_4line_file1.csv.bz2  
${system:test.tmp.dir}/testcase2/;
 --
 --
-DROP TABLE IF EXISTS `testcase2`;
 CREATE EXTERNAL TABLE `testcase2`(id int, name string) ROW FORMAT SERDE 
'org.apache.hadoop.hive.serde2.OpenCSVSerde'
  LOCATION '${system:test.tmp.dir}/testcase2'
  TBLPROPERTIES ("skip.header.line.count"="1");
@@ -42,4 +36,8 @@ select count(*) from testcase2;
 
 set hive.fetch.task.conversion=none;
 select * from testcase2;
-select count(*) from testcase2;
\ No newline at end of file
+select count(*) from testcase2;
+
+-- clean up external table files
+dfs -rmr ${system:test.tmp.dir}/testcase1/;
+dfs -rmr ${system:test.tmp.dir}/testcase2/;
diff --git 
a/ql/src/test/results/clientpositive/llap/compressed_skip_header_footer_aggr.q.out
 
b/ql/src/test/results/clientpositive/llap/compressed_skip_header_footer_aggr.q.out
index cf27700f2bc..2d9be8f3b25 100644
--- 
a/ql/src/test/results/clientpositive/llap/compressed_skip_header_footer_aggr.q.out
+++ 
b/ql/src/test/results/clientpositive/llap/co

[calcite] branch main updated: [CALCITE-1045][CALCITE-5127] Support correlation variables in project

2022-10-04 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/main by this push:
 new c2407f59c3 [CALCITE-1045][CALCITE-5127] Support correlation variables 
in project
c2407f59c3 is described below

commit c2407f59c32d1690d16b641d556bb27f8f1783ac
Author: Benchao Li 
AuthorDate: Sun May 22 19:42:24 2022 +0800

[CALCITE-1045][CALCITE-5127] Support correlation variables in project

To some extent, correlation in project was already supported even before
this change. However, the fact that the correlation variables were not
explicitly present (and returned by the operator) creates problems
because we cannot safely deduce if a column/field is used, and thus we may
wrongly remove those fields when using the RelFieldTrimmer, when
merging projections, etc.; see queries and discussion under the
respective JIRAs.

The addition of correlation variables in project also aligns the code
with Filter and Join, which already set correlation variables explicitly.

Co-authored-by: korlov42 

Close apache/calcite#2813
Close apache/calcite#2623
---
 .../adapter/cassandra/CassandraProject.java|   3 +-
 .../calcite/adapter/cassandra/CassandraRules.java  |   3 +-
 .../adapter/enumerable/EnumerableProject.java  |   3 +-
 .../adapter/enumerable/EnumerableProjectRule.java  |   6 ++
 .../adapter/enumerable/EnumerableRelFactories.java |   7 +-
 .../org/apache/calcite/adapter/jdbc/JdbcRules.java |  12 ++-
 .../org/apache/calcite/interpreter/Bindables.java  |   8 +-
 .../java/org/apache/calcite/plan/RelOptUtil.java   |  10 +-
 .../apache/calcite/prepare/LixToRelTranslator.java |   4 +-
 .../calcite/prepare/QueryableRelBuilder.java   |   4 +-
 .../main/java/org/apache/calcite/rel/RelNode.java  |   3 -
 .../main/java/org/apache/calcite/rel/RelRoot.java  |   3 +-
 .../java/org/apache/calcite/rel/core/Project.java  |  39 ++-
 .../org/apache/calcite/rel/core/RelFactories.java  |  27 -
 .../apache/calcite/rel/externalize/RelJson.java|   5 +-
 .../apache/calcite/rel/logical/LogicalProject.java |  63 +--
 .../calcite/rel/rel2sql/RelToSqlConverter.java |   3 +-
 .../rel/rules/FilterProjectTransposeRule.java  |   5 +-
 .../rel/rules/ProjectWindowTransposeRule.java  |   5 +-
 .../org/apache/calcite/rel/stream/StreamRules.java |   3 +-
 .../apache/calcite/sql2rel/SqlToRelConverter.java  |  18 +++-
 .../java/org/apache/calcite/tools/RelBuilder.java  |  68 ++--
 .../org/apache/calcite/plan/RelOptUtilTest.java|   7 +-
 .../org/apache/calcite/plan/RelWriterTest.java |  18 +++-
 .../calcite/plan/volcano/TraitPropagationTest.java |   6 +-
 .../org/apache/calcite/test/RelMetadataTest.java   |   6 +-
 .../org/apache/calcite/test/RelOptRulesTest.java   |   3 +-
 .../apache/calcite/test/SqlToRelConverterTest.java |  24 +
 .../org/apache/calcite/test/RelOptRulesTest.xml|   4 +-
 .../apache/calcite/test/SqlToRelConverterTest.xml  |  70 -
 core/src/test/resources/sql/agg.iq |   9 +-
 core/src/test/resources/sql/sub-query.iq   | 116 +++--
 .../apache/calcite/adapter/druid/DruidRules.java   |   5 +
 .../elasticsearch/ElasticsearchProject.java|   3 +-
 .../adapter/elasticsearch/ElasticsearchRules.java  |   6 ++
 .../calcite/adapter/geode/rel/GeodeProject.java|   3 +-
 .../calcite/adapter/geode/rel/GeodeRules.java  |   7 +-
 .../calcite/adapter/innodb/InnodbProject.java  |   3 +-
 .../apache/calcite/adapter/innodb/InnodbRules.java |   3 +-
 .../calcite/adapter/mongodb/MongoProject.java  |   3 +-
 .../apache/calcite/adapter/mongodb/MongoRules.java |   6 ++
 .../org/apache/calcite/adapter/pig/PigProject.java |   3 +-
 .../org/apache/calcite/adapter/pig/PigRules.java   |   6 ++
 .../calcite/adapter/splunk/SplunkPushDownRule.java |   2 +-
 .../apache/calcite/test/RelMetadataFixture.java|   3 +
 .../calcite/test/catalog/MockCatalogReader.java|   4 +-
 46 files changed, 510 insertions(+), 112 deletions(-)
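The trimming hazard described in the commit message can be reduced to plain sets (names below are invented; this is not Calcite's RelFieldTrimmer API): a trimmer that only counts ordinary field references will drop fields that are read solely through correlation variables, which is why Project must now advertise the variables it uses.

```java
import java.util.Set;
import java.util.TreeSet;

// Invented sketch of the trimming hazard: fields referenced only via a
// correlation variable must still count as "used", or the trimmer
// removes them. Not Calcite's RelFieldTrimmer API.
public class TrimSketch {
  static Set<Integer> usedFields(Set<Integer> plainRefs, Set<Integer> viaCorrelation) {
    Set<Integer> used = new TreeSet<>(plainRefs);
    // Before this change a Project did not expose its correlation
    // variables, so this step had nothing to add and correlated
    // fields could be wrongly trimmed away.
    used.addAll(viaCorrelation);
    return used;
  }

  public static void main(String[] args) {
    Set<Integer> plain = new TreeSet<>(java.util.Arrays.asList(0, 2));
    Set<Integer> viaCor = new TreeSet<>(java.util.Collections.singleton(3));
    System.out.println(usedFields(plain, viaCor)); // [0, 2, 3]
  }
}
```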

diff --git 
a/cassandra/src/main/java/org/apache/calcite/adapter/cassandra/CassandraProject.java
 
b/cassandra/src/main/java/org/apache/calcite/adapter/cassandra/CassandraProject.java
index b5a77c74f8..947d6e5cd0 100644
--- 
a/cassandra/src/main/java/org/apache/calcite/adapter/cassandra/CassandraProject.java
+++ 
b/cassandra/src/main/java/org/apache/calcite/adapter/cassandra/CassandraProject.java
@@ -28,6 +28,7 @@ import org.apache.calcite.rex.RexNode;
 import org.apache.calcite.util.Pair;
 
 import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableSet;
 
 import org.checkerframework.checker.nullness.qual.Nullable;
 
@@ -42,7 +43,7 @@ import java.util.Map;
 public class CassandraProject extends Project implements CassandraRel {
   public CassandraProject

[hive] branch master updated: HIVE-26320: Incorrect results for IN UDF on Parquet column of CHAR/VARCHAR type (John Sherman reviewed by Aman Sinha, Krisztian Kasa, Stamatis Zampetakis, Alessandro Soli

2022-10-03 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 70562437d36 HIVE-26320: Incorrect results for IN UDF on Parquet column 
of CHAR/VARCHAR type (John Sherman reviewed by Aman Sinha, Krisztian Kasa, 
Stamatis Zampetakis, Alessandro Solimando, Dayakar)
70562437d36 is described below

commit 70562437d369c2f4ab3e879bae519f81d386da3b
Author: John Sherman 
AuthorDate: Tue Sep 27 09:42:51 2022 -0700

HIVE-26320: Incorrect results for IN UDF on Parquet column of CHAR/VARCHAR 
type (John Sherman reviewed by Aman Sinha, Krisztian Kasa, Stamatis Zampetakis, 
Alessandro Solimando, Dayakar)

Closes #3628
---
 .../hive/ql/io/parquet/convert/ETypeConverter.java |  32 +++-
 .../ql/io/parquet/convert/TestETypeConverter.java  |  65 +--
 ql/src/test/queries/clientpositive/pointlookup.q   |  33 
 ql/src/test/queries/clientpositive/udf_in.q|   6 +
 .../results/clientpositive/llap/pointlookup.q.out  | 193 +
 .../test/results/clientpositive/llap/udf_in.q.out  |  37 
 .../hadoop/hive/serde2/io/HiveCharWritable.java|   9 +
 .../hadoop/hive/serde2/io/HiveVarcharWritable.java |   9 +
 8 files changed, 372 insertions(+), 12 deletions(-)
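The gist of the fix, sketched without Hive's writable classes (`toChar`/`toVarchar` below are invented helpers, and the real patch returns HiveCharWritable/HiveVarcharWritable rather than plain strings): returning an untyped Text value for a CHAR(n)/VARCHAR(n) column skips the padding and truncation the type demands, which is what made the IN comparisons misfire.

```java
// Invented helpers sketching CHAR(n)/VARCHAR(n) string semantics; the
// actual patch wires these rules into the Parquet BinaryConverter.
public class TypedStrings {
  // CHAR(n): fixed length, so truncate past n and blank-pad up to n.
  static String toChar(String raw, int n) {
    StringBuilder sb = new StringBuilder(raw.length() > n ? raw.substring(0, n) : raw);
    while (sb.length() < n) {
      sb.append(' ');
    }
    return sb.toString();
  }

  // VARCHAR(n): truncate past n, never pad.
  static String toVarchar(String raw, int n) {
    return raw.length() > n ? raw.substring(0, n) : raw;
  }

  public static void main(String[] args) {
    System.out.println("[" + toChar("ab", 4) + "]");      // [ab  ]
    System.out.println("[" + toVarchar("abcdef", 4) + "]"); // [abcd]
  }
}
```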

diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java
index 40069cf8b0c..28207714e3c 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java
@@ -36,12 +36,17 @@ import 
org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport;
 import org.apache.hadoop.hive.serde.serdeConstants;
 import org.apache.hadoop.hive.serde2.io.DateWritableV2;
 import org.apache.hadoop.hive.serde2.io.DoubleWritable;
+import org.apache.hadoop.hive.serde2.io.HiveCharWritable;
 import org.apache.hadoop.hive.serde2.io.HiveDecimalWritable;
+import org.apache.hadoop.hive.serde2.io.HiveVarcharWritable;
 import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hadoop.hive.serde2.typeinfo.CharTypeInfo;
 import org.apache.hadoop.hive.serde2.typeinfo.DecimalTypeInfo;
 import org.apache.hadoop.hive.serde2.typeinfo.HiveDecimalUtils;
+import org.apache.hadoop.hive.serde2.typeinfo.PrimitiveTypeInfo;
 import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;
 import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils;
+import org.apache.hadoop.hive.serde2.typeinfo.VarcharTypeInfo;
 import org.apache.hadoop.io.BooleanWritable;
 import org.apache.hadoop.io.BytesWritable;
 import org.apache.hadoop.io.FloatWritable;
@@ -481,7 +486,32 @@ public enum ETypeConverter {
   },
   ESTRING_CONVERTER(String.class) {
 @Override
-PrimitiveConverter getConverter(final PrimitiveType type, final int index, 
final ConverterParent parent, TypeInfo hiveTypeInfo) {
+PrimitiveConverter getConverter(final PrimitiveType type, final int index, 
final ConverterParent parent,
+TypeInfo hiveTypeInfo) {
+  // If we have type information, we should return properly typed strings. 
However, there are a variety
+  // of code paths that do not provide the typeInfo in those cases we 
default to Text. This idiom is also
+  // followed by for example the BigDecimal converter in which if there is 
no type information,
+  // it defaults to the widest representation
+  if (hiveTypeInfo instanceof PrimitiveTypeInfo) {
+PrimitiveTypeInfo t = (PrimitiveTypeInfo) hiveTypeInfo;
+switch (t.getPrimitiveCategory()) {
+  case CHAR:
+return new BinaryConverter(type, parent, index) {
+  @Override
+  protected HiveCharWritable convert(Binary binary) {
+return new HiveCharWritable(binary.getBytes(), ((CharTypeInfo) 
hiveTypeInfo).getLength());
+  }
+};
+  case VARCHAR:
+return new BinaryConverter(type, parent, 
index) {
+  @Override
+  protected HiveVarcharWritable convert(Binary binary) {
+return new HiveVarcharWritable(binary.getBytes(), 
((VarcharTypeInfo) hiveTypeInfo).getLength());
+  }
+};
+}
+  }
+  // STRING type
   return new BinaryConverter(type, parent, index) {
 @Override
 protected Text convert(Binary binary) {
diff --git 
a/ql/src/test/org/apache/hadoop/hive/ql/io/parquet/convert/TestETypeConverter.java
 
b/ql/src/test/org/apache/hadoop/hive/ql/io/parquet/convert/TestETypeConverter.java
index 8161430a501..fcfb5c7782c 100644
--- 
a/ql/src/test/org/apache/hadoop/hive/ql/io/parquet/convert/TestETypeConverter.java
+++ 
b/ql/src/test/org/apache/hadoop/hive/ql/io/parquet/convert

[hive] branch master updated: HIVE-26549: WebHCat servers fails to start due to authentication filter configuration (Zhiguo Wu reviewed by Stamatis Zampetakis)

2022-09-23 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 7f29593b086 HIVE-26549: WebHCat servers fails to start due to 
authentication filter configuration (Zhiguo Wu reviewed by Stamatis Zampetakis)
7f29593b086 is described below

commit 7f29593b0869317fcc0c1d3cd2add95799c1c2f3
Author: wzg547228197 
AuthorDate: Wed Sep 21 00:41:10 2022 +0800

HIVE-26549: WebHCat servers fails to start due to authentication filter 
configuration (Zhiguo Wu reviewed by Stamatis Zampetakis)

Closes #3609
---
 .../org/apache/hive/hcatalog/templeton/Main.java   | 25 ++
 1 file changed, 16 insertions(+), 9 deletions(-)
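The shape of the change can be sketched as follows. The stand-in builds the same prefixed init-parameter names the patch configures, but the class and helper are invented; only the resulting key names mirror the real `dfs.web.authentication.*` configuration, not Hadoop's AuthenticationFilter API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Invented stand-in for the prefixed init-parameter layout the patch
// moves to; only the generated key names mirror the real configuration.
public class AuthParamSketch {
  static Map<String, String> kerberosParams(String confPrefix, String principal,
      String keytab, String secret) {
    String p = confPrefix + ".";
    Map<String, String> params = new LinkedHashMap<>();
    params.put("config.prefix", confPrefix);   // tells the filter which prefix to strip
    params.put(p + "cookie.path", "/");
    params.put(p + "type", "kerberos");
    params.put(p + "kerberos.principal", principal);
    params.put(p + "kerberos.keytab", keytab);
    params.put(p + "signature.secret", secret);
    return params;
  }

  public static void main(String[] args) {
    Map<String, String> m = kerberosParams("dfs.web.authentication",
        "HTTP/host@EXAMPLE.COM", "/etc/security/webhcat.keytab", "s3cret");
    System.out.println(m.get("dfs.web.authentication.kerberos.principal"));
  }
}
```

Building every key from one prefix, as the diff below does, is what keeps the secure and non-secure branches consistent.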

diff --git 
a/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/Main.java
 
b/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/Main.java
index d2776ffbfd1..66fa5eb4ae8 100644
--- 
a/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/Main.java
+++ 
b/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/Main.java
@@ -32,6 +32,8 @@ import java.util.HashMap;
 import java.util.Objects;
 import java.util.Set;
 
+import org.apache.hadoop.security.authentication.server.AuthenticationFilter;
+import 
org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.apache.commons.lang3.StringUtils;
@@ -306,18 +308,23 @@ public class Main {
   public FilterHolder makeAuthFilter() throws IOException {
 FilterHolder authFilter = new FilterHolder(AuthFilter.class);
 UserNameHandler.allowAnonymous(authFilter);
+  
+String confPrefix = "dfs.web.authentication";
+String prefix = confPrefix + ".";
+authFilter.setInitParameter(AuthenticationFilter.CONFIG_PREFIX, 
confPrefix);
+authFilter.setInitParameter(prefix + AuthenticationFilter.COOKIE_PATH, 
"/");
+
 if (UserGroupInformation.isSecurityEnabled()) {
-  
//http://hadoop.apache.org/docs/r1.1.1/api/org/apache/hadoop/security/authentication/server/AuthenticationFilter.html
-  authFilter.setInitParameter("dfs.web.authentication.signature.secret",
-conf.kerberosSecret());
-  
//https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2/src/packages/templates/conf/hdfs-site.xml
+  authFilter.setInitParameter(prefix + AuthenticationFilter.AUTH_TYPE, 
KerberosAuthenticationHandler.TYPE);
+  
   String serverPrincipal = 
SecurityUtil.getServerPrincipal(conf.kerberosPrincipal(), "0.0.0.0");
-  authFilter.setInitParameter("dfs.web.authentication.kerberos.principal",
-serverPrincipal);
-  
//http://https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2/src/packages/templates/conf/hdfs-site.xml
-  authFilter.setInitParameter("dfs.web.authentication.kerberos.keytab",
-conf.kerberosKeytab());
+  authFilter.setInitParameter(prefix + 
KerberosAuthenticationHandler.PRINCIPAL, serverPrincipal);
+  authFilter.setInitParameter(prefix + 
KerberosAuthenticationHandler.KEYTAB, conf.kerberosKeytab());
+  authFilter.setInitParameter(prefix + 
AuthenticationFilter.SIGNATURE_SECRET, conf.kerberosSecret());
+} else {
+  authFilter.setInitParameter(prefix + AuthenticationFilter.AUTH_TYPE, 
PseudoAuthenticationHandler.TYPE);
 }
+
 return authFilter;
   }
 



[calcite] branch main updated: [CALCITE-4972] Subfields of array columns containing structs are not qualified in getFieldOrigins

2022-09-23 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/main by this push:
 new 7b8b2b9604 [CALCITE-4972] Subfields of array columns containing 
structs are not qualified in getFieldOrigins
7b8b2b9604 is described below

commit 7b8b2b96041d0cf7bf69cae336659087739fa495
Author: Mark Grey 
AuthorDate: Fri Jan 7 16:22:10 2022 -0500

[CALCITE-4972] Subfields of array columns containing structs are not 
qualified in getFieldOrigins

Close apache/calcite#2683
---
 .../calcite/sql/validate/SqlValidatorImpl.java | 16 +
 .../calcite/sql/validate/UnnestNamespace.java  | 27 ++
 .../org/apache/calcite/test/SqlValidatorTest.java  | 13 +++
 3 files changed, 56 insertions(+)
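The effect on getFieldOrigins can be sketched with plain lists (the helper is invented, not the SqlValidatorImpl API): for a subfield reached through UNNEST of an array-of-structs column, the array column's name is now spliced into the origin path between the table's qualified name and the subfield.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Invented sketch of the qualified-origin fix: the unnested array
// column's name joins the table's qualified name and the subfield.
public class OriginSketch {
  static List<String> fieldOrigin(List<String> qualifiedTable,
      String unnestedArrayColumn, String subField) {
    List<String> origin = new ArrayList<>(qualifiedTable);
    if (unnestedArrayColumn != null) {
      // Previously this segment was missing, leaving the subfield unqualified.
      origin.add(unnestedArrayColumn);
    }
    origin.add(subField);
    return origin;
  }

  public static void main(String[] args) {
    System.out.println(
        fieldOrigin(Arrays.asList("CATALOG", "SALES", "ORDERS"), "items", "price"));
    // [CATALOG, SALES, ORDERS, items, price]
  }
}
```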

diff --git 
a/core/src/main/java/org/apache/calcite/sql/validate/SqlValidatorImpl.java 
b/core/src/main/java/org/apache/calcite/sql/validate/SqlValidatorImpl.java
index c53803e8fd..cd272f649b 100644
--- a/core/src/main/java/org/apache/calcite/sql/validate/SqlValidatorImpl.java
+++ b/core/src/main/java/org/apache/calcite/sql/validate/SqlValidatorImpl.java
@@ -6145,6 +6145,13 @@ public class SqlValidatorImpl implements 
SqlValidatorWithHints {
 scope.fullyQualify((SqlIdentifier) selectItem);
 SqlValidatorNamespace namespace = requireNonNull(qualified.namespace,
 () -> "namespace for " + qualified);
+if (namespace.isWrapperFor(AliasNamespace.class)) {
+  AliasNamespace aliasNs = namespace.unwrap(AliasNamespace.class);
+  SqlNode aliased = requireNonNull(aliasNs.getNode(), () ->
+  "sqlNode for aliasNs " + aliasNs);
+  namespace = getNamespaceOrThrow(stripAs(aliased));
+}
+
 final SqlValidatorTable table = namespace.getTable();
 if (table == null) {
   return null;
@@ -6152,7 +6159,16 @@ public class SqlValidatorImpl implements 
SqlValidatorWithHints {
 final List origin =
 new ArrayList<>(table.getQualifiedName());
 for (String name : qualified.suffix()) {
+  if (namespace.isWrapperFor(UnnestNamespace.class)) {
+// If identifier is drawn from a repeated subrecord via unnest, 
add name of array field
+UnnestNamespace unnestNamespace = 
namespace.unwrap(UnnestNamespace.class);
+final SqlQualified columnUnnestedFrom = 
unnestNamespace.getColumnUnnestedFrom(name);
+if (columnUnnestedFrom != null) {
+  origin.addAll(columnUnnestedFrom.suffix());
+}
+  }
   namespace = namespace.lookupChild(name);
+
   if (namespace == null) {
 return null;
   }
diff --git 
a/core/src/main/java/org/apache/calcite/sql/validate/UnnestNamespace.java 
b/core/src/main/java/org/apache/calcite/sql/validate/UnnestNamespace.java
index 531ba53a90..f3e732d993 100644
--- a/core/src/main/java/org/apache/calcite/sql/validate/UnnestNamespace.java
+++ b/core/src/main/java/org/apache/calcite/sql/validate/UnnestNamespace.java
@@ -64,6 +64,33 @@ class UnnestNamespace extends AbstractNamespace {
 return null;
   }
 
+  /**
+   * Given a field name from SelectScope, find the column in this
+   * UnnestNamespace it originates from.
+   *
+   * @param queryFieldName Name of column
+   * @return A SqlQualified if subfield comes from this unnest, null if not 
found
+   */
+  @Nullable SqlQualified getColumnUnnestedFrom(String queryFieldName) {
+for (SqlNode operand : unnest.getOperandList()) {
+  // Ignore operands that are inline ARRAY[] literals
+  if (operand instanceof SqlIdentifier) {
+final SqlIdentifier id = (SqlIdentifier) operand;
+final SqlQualified qualified = this.scope.fullyQualify(id);
+RelDataType dataType = 
this.scope.resolveColumn(qualified.suffix().get(0), id);
+if (dataType != null) {
+  RelDataType repeatedEntryType = dataType.getComponentType();
+  if (repeatedEntryType != null
+  && repeatedEntryType.isStruct()
+  && repeatedEntryType.getFieldNames().contains(queryFieldName)) {
+return qualified;
+  }
+}
+  }
+}
+return null;
+  }
+
   @Override protected RelDataType validateImpl(RelDataType targetRowType) {
 // Validate the call and its arguments, and infer the return type.
 validator.validateCall(unnest, scope);
diff --git a/core/src/test/java/org/apache/calcite/test/SqlValidatorTest.java 
b/core/src/test/java/org/apache/calcite/test/SqlValidatorTest.java
index 00dd5e5d7b..be16adca82 100644
--- a/core/src/test/java/org/apache/calcite/test/SqlValidatorTest.java
+++ b/core/src/test/java/org/apache/calcite/test/SqlValidatorTest.java
@@ -9210,6 +9210,19 @@ public class SqlValidatorTest extends 
SqlValidatorTestCase {

[calcite] branch main updated: [CALCITE-5293] Support general set operators in PruneEmptyRules

2022-09-22 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/main by this push:
 new f8dd80fcd2 [CALCITE-5293] Support general set operators in 
PruneEmptyRules
f8dd80fcd2 is described below

commit f8dd80fcd2d4d92767936fe7b3dae349f2f0ec40
Author: kasakrisz 
AuthorDate: Tue Sep 20 17:40:14 2022 +0200

[CALCITE-5293] Support general set operators in PruneEmptyRules

Close apache/calcite#2915
---
 .../apache/calcite/rel/rules/PruneEmptyRules.java  | 28 +++---
 site/_docs/history.md  |  4 
 2 files changed, 18 insertions(+), 14 deletions(-)
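The rule behavior, now matched against the core Union/Minus/Intersect classes rather than only their Logical* subclasses, can be sketched on a toy algebra where strings stand in for RelNodes and "EMPTY" for an empty Values node:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy model of PruneEmptyRules: "EMPTY" stands in for an empty Values.
public class PruneSketch {
  // Union: drop empty inputs; an all-empty union is itself empty.
  static List<String> pruneUnion(List<String> inputs) {
    List<String> kept = new ArrayList<>();
    for (String in : inputs) {
      if (!"EMPTY".equals(in)) {
        kept.add(in);
      }
    }
    return kept.isEmpty() ? Arrays.asList("EMPTY") : kept;
  }

  // Intersect: any empty input empties the whole result.
  static boolean intersectIsEmpty(List<String> inputs) {
    return inputs.contains("EMPTY");
  }

  public static void main(String[] args) {
    System.out.println(pruneUnion(Arrays.asList("A", "EMPTY", "B"))); // [A, B]
    System.out.println(intersectIsEmpty(Arrays.asList("A", "EMPTY"))); // true
  }
}
```

Matching on the core classes means any subclass (logical or physical) now triggers the same pruning, which is the point of the diff below.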

diff --git 
a/core/src/main/java/org/apache/calcite/rel/rules/PruneEmptyRules.java 
b/core/src/main/java/org/apache/calcite/rel/rules/PruneEmptyRules.java
index 57b34dd368..cf8884a091 100644
--- a/core/src/main/java/org/apache/calcite/rel/rules/PruneEmptyRules.java
+++ b/core/src/main/java/org/apache/calcite/rel/rules/PruneEmptyRules.java
@@ -27,14 +27,14 @@ import org.apache.calcite.rel.RelNode;
 import org.apache.calcite.rel.SingleRel;
 import org.apache.calcite.rel.core.Aggregate;
 import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.core.Intersect;
 import org.apache.calcite.rel.core.Join;
 import org.apache.calcite.rel.core.JoinRelType;
+import org.apache.calcite.rel.core.Minus;
 import org.apache.calcite.rel.core.Project;
 import org.apache.calcite.rel.core.Sort;
+import org.apache.calcite.rel.core.Union;
 import org.apache.calcite.rel.core.Values;
-import org.apache.calcite.rel.logical.LogicalIntersect;
-import org.apache.calcite.rel.logical.LogicalMinus;
-import org.apache.calcite.rel.logical.LogicalUnion;
 import org.apache.calcite.rel.logical.LogicalValues;
 import org.apache.calcite.rex.RexDynamicParam;
 import org.apache.calcite.rex.RexLiteral;
@@ -81,7 +81,7 @@ public abstract class PruneEmptyRules {
 
   /**
* Rule that removes empty children of a
-   * {@link org.apache.calcite.rel.logical.LogicalUnion}.
+   * {@link org.apache.calcite.rel.core.Union}.
*
* Examples:
*
@@ -94,7 +94,7 @@ public abstract class PruneEmptyRules {
   public static final RelOptRule UNION_INSTANCE =
   ImmutableUnionEmptyPruneRuleConfig.of()
   .withOperandSupplier(b0 ->
-  b0.operand(LogicalUnion.class).unorderedInputs(b1 ->
+  b0.operand(Union.class).unorderedInputs(b1 ->
   b1.operand(Values.class)
   .predicate(Values::isEmpty).noInputs()))
   .withDescription("Union")
@@ -103,7 +103,7 @@ public abstract class PruneEmptyRules {
 
   /**
* Rule that removes empty children of a
-   * {@link org.apache.calcite.rel.logical.LogicalMinus}.
+   * {@link org.apache.calcite.rel.core.Minus}.
*
* Examples:
*
@@ -115,7 +115,7 @@ public abstract class PruneEmptyRules {
   public static final RelOptRule MINUS_INSTANCE =
   ImmutableMinusEmptyPruneRuleConfig.of()
   .withOperandSupplier(b0 ->
-  b0.operand(LogicalMinus.class).unorderedInputs(b1 ->
+  b0.operand(Minus.class).unorderedInputs(b1 ->
   b1.operand(Values.class).predicate(Values::isEmpty)
   .noInputs()))
   .withDescription("Minus")
@@ -123,7 +123,7 @@ public abstract class PruneEmptyRules {
 
   /**
* Rule that converts a
-   * {@link org.apache.calcite.rel.logical.LogicalIntersect} to
+   * {@link org.apache.calcite.rel.core.Intersect} to
* empty if any of its children are empty.
*
* Examples:
@@ -136,7 +136,7 @@ public abstract class PruneEmptyRules {
   public static final RelOptRule INTERSECT_INSTANCE =
   ImmutableIntersectEmptyPruneRuleConfig.of()
   .withOperandSupplier(b0 ->
-  b0.operand(LogicalIntersect.class).unorderedInputs(b1 ->
+  b0.operand(Intersect.class).unorderedInputs(b1 ->
   b1.operand(Values.class).predicate(Values::isEmpty)
   .noInputs()))
   .withDescription("Intersect")
@@ -164,7 +164,7 @@ public abstract class PruneEmptyRules {
   }
 
   /**
-   * Rule that converts a {@link org.apache.calcite.rel.logical.LogicalProject}
+   * Rule that converts a {@link org.apache.calcite.rel.core.Project}
* to empty if its child is empty.
*
* Examples:
@@ -180,7 +180,7 @@ public abstract class PruneEmptyRules {
   .toRule();
 
   /**
-   * Rule that converts a {@link org.apache.calcite.rel.logical.LogicalFilter}
+   * Rule that converts a {@link org.apache.calcite.rel.core.Filter}
* to empty if its child is empty.
*
* Examples:
@@ -368,7 +368,7 @@ public abstract class PruneEmptyRules {
 @Override default PruneEmptyRule toRule() {
   return new PruneEmptyRule(this) {
 @Override pu

[hive] branch master updated: HIVE-26404: HMS memory leak when compaction cleaner fails to remove obsolete files (Stamatis Zampetakis reviewed by Denys Kuzmenko)

2022-09-21 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 3de78df9042 HIVE-26404: HMS memory leak when compaction cleaner fails 
to remove obsolete files (Stamatis Zampetakis reviewed by Denys Kuzmenko)
3de78df9042 is described below

commit 3de78df9042e4d364aea019b9a16691bcf51ea9b
Author: Stamatis Zampetakis 
AuthorDate: Fri Aug 5 19:59:41 2022 +0300

HIVE-26404: HMS memory leak when compaction cleaner fails to remove 
obsolete files (Stamatis Zampetakis reviewed by Denys Kuzmenko)

Closes #3514
---
 .../ql/txn/compactor/TestCleanerWithSecureDFS.java | 184 +
 .../txn/compactor/TestCleanerWithReplication.java  |  21 +--
 .../hadoop/hive/ql/txn/compactor/Cleaner.java  |  19 ++-
 .../hive/ql/txn/compactor/CompactorTest.java   |  28 ++--
 4 files changed, 213 insertions(+), 39 deletions(-)
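The leak pattern being fixed can be reduced to a few lines (CleanerSketch and its bookkeeping map are invented stand-ins, not the real HMS compaction state): the in-memory entry must be released even when removing obsolete files throws, e.g. on a permission error against secure HDFS as the new test exercises.

```java
import java.util.HashMap;
import java.util.Map;

// Invented reduction of the leak: bookkeeping for a compaction must be
// released even when the file-removal step fails.
public class CleanerSketch {
  final Map<Long, String> inFlight = new HashMap<>();

  void clean(long compactionId, Runnable removeObsoleteFiles) {
    inFlight.put(compactionId, "cleaning");
    try {
      removeObsoleteFiles.run(); // may throw, e.g. an HDFS permission error
    } finally {
      inFlight.remove(compactionId); // without this, failed cleanups accumulate
    }
  }

  public static void main(String[] args) {
    CleanerSketch cleaner = new CleanerSketch();
    try {
      cleaner.clean(1L, () -> { throw new RuntimeException("AccessControlException"); });
    } catch (RuntimeException expected) {
      // the failure is surfaced, but no entry is leaked
    }
    System.out.println(cleaner.inFlight.isEmpty()); // true
  }
}
```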

diff --git 
a/itests/hive-minikdc/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCleanerWithSecureDFS.java
 
b/itests/hive-minikdc/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCleanerWithSecureDFS.java
new file mode 100644
index 000..ecf8472ea99
--- /dev/null
+++ 
b/itests/hive-minikdc/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCleanerWithSecureDFS.java
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.ql.txn.compactor;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.api.CompactionRequest;
+import org.apache.hadoop.hive.metastore.api.CompactionType;
+import org.apache.hadoop.hive.metastore.api.Table;
+import org.apache.hadoop.http.HttpConfig;
+import org.apache.hadoop.minikdc.MiniKdc;
+import org.apache.hadoop.security.SecurityUtil;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.ssl.KeyStoreTestUtil;
+
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.mockito.internal.util.reflection.FieldSetter;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.UUID;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import static 
org.apache.hadoop.fs.CommonConfigurationKeys.IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_ACCESS_TOKEN_ENABLE_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_HTTPS_ADDRESS_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_KERBEROS_PRINCIPAL_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_KEYTAB_FILE_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATA_TRANSFER_PROTECTION_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_HTTP_POLICY_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_HTTPS_ADDRESS_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_KEYTAB_FILE_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_WEB_AUTHENTICATION_KERBEROS_PRINCIPAL_KEY;
+
+public class TestCleanerWithSecureDFS extends CompactorTest {
+  private static final Path KEYSTORE_DIR =
+  Paths.get(System.getProperty("test.tmp.dir"), "kdc_root_dir" + 
UUID.randomUUID());
+  private static final String SUPER_USER_NAME = "hdfs";
+  private static final Path SUPER_USER_KEYTAB = 
KEYSTORE_DIR.resolve(SUPER_USER_NAME + ".keytab");
+
+  private static MiniDFSCluster dfsCluster = null;
+  private static MiniKdc kdc = null;
+  private static HiveConf secureConf = null;
+
+  private static MiniKdc initKDC() {
+try {
+  MiniKdc kdc = new MiniKdc(MiniKdc.createConf(), KEYSTORE_DIR.toFile());
+  kdc.start();
+  kdc.createPrincipal(SUPER_USER_KEYTAB.toFile(), SUPER_USER_NAME + 
"/localhost&

[hive] branch master updated: HIVE-26541: NPE when starting WebHCat Service (Zhiguo Wu reviewed by Stamatis Zampetakis)

2022-09-20 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 05afad7184b HIVE-26541: NPE when starting WebHCat Service (Zhiguo Wu 
reviewed by Stamatis Zampetakis)
05afad7184b is described below

commit 05afad7184b8f342b7442d517a56a5a35b75b53f
Author: Zhiguo Wu 
AuthorDate: Tue Aug 23 14:36:06 2022 +0800

HIVE-26541: NPE when starting WebHCat Service (Zhiguo Wu reviewed by 
Stamatis Zampetakis)

Closes #3543
---
 .../svr/src/main/java/org/apache/hive/hcatalog/templeton/Main.java| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git 
a/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/Main.java
 
b/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/Main.java
index 95147883640..d2776ffbfd1 100644
--- 
a/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/Main.java
+++ 
b/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/Main.java
@@ -249,7 +249,7 @@ public class Main {
 low.setLowResourcesIdleTimeout(1);
 server.addBean(low);
 
-server.addConnector(createChannelConnector());
+server.setConnectors(new Connector[]{ createChannelConnector(server) });
 
 // Start the server
 server.start();
@@ -276,7 +276,7 @@ public class Main {
Create a channel connector for "http/https" requests.
*/
 
-  private Connector createChannelConnector() {
+  private Connector createChannelConnector(Server server) {
 ServerConnector connector;
 final HttpConfiguration httpConf = new HttpConfiguration();
 httpConf.setRequestHeaderSize(1024 * 64);



[calcite] branch site updated: [CALCITE-5287] SQL reference page is missing from website

2022-09-16 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch site
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/site by this push:
 new e085b3f8dc [CALCITE-5287] SQL reference page is missing from website
e085b3f8dc is described below

commit e085b3f8dcdf0d080c66f2b261641585146a80e2
Author: Stamatis Zampetakis 
AuthorDate: Fri Sep 16 18:20:56 2022 +0200

[CALCITE-5287] SQL reference page is missing from website

The newline at the beginning of reference.md page seems to be causing
a problem with Jekyll that ignores the .md file and fails to generate
the respective .html file.
---
 site/_docs/reference.md | 1 -
 1 file changed, 1 deletion(-)

diff --git a/site/_docs/reference.md b/site/_docs/reference.md
index 995a4ff3f7..e49c5fa6dd 100644
--- a/site/_docs/reference.md
+++ b/site/_docs/reference.md
@@ -1,4 +1,3 @@
-
 ---
 layout: docs
 title: SQL language



[calcite] branch main updated: [CALCITE-5287] SQL reference page is missing from website

2022-09-16 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/main by this push:
 new e46dfc619c [CALCITE-5287] SQL reference page is missing from website
e46dfc619c is described below

commit e46dfc619c960bcfbfa46b311a34b4ec7f0685a2
Author: Stamatis Zampetakis 
AuthorDate: Fri Sep 16 18:20:56 2022 +0200

[CALCITE-5287] SQL reference page is missing from website

The newline at the beginning of reference.md page seems to be causing
a problem with Jekyll that ignores the .md file and fails to generate
the respective .html file.
---
 site/_docs/reference.md | 1 -
 1 file changed, 1 deletion(-)

diff --git a/site/_docs/reference.md b/site/_docs/reference.md
index 995a4ff3f7..e49c5fa6dd 100644
--- a/site/_docs/reference.md
+++ b/site/_docs/reference.md
@@ -1,4 +1,3 @@
-
 ---
 layout: docs
 title: SQL language



[hive] branch master updated: HIVE-26461: Add CI build check for macOS (Stamatis Zampetakis reviewed by Ayush Saxena)

2022-09-16 Thread zabetak

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 8e39937bdb5 HIVE-26461: Add CI build check for macOS (Stamatis 
Zampetakis reviewed by Ayush Saxena)
8e39937bdb5 is described below

commit 8e39937bdb577bc135579d7d34b46ba2d788ca53
Author: Stamatis Zampetakis 
AuthorDate: Wed Aug 10 12:53:07 2022 +0300

HIVE-26461: Add CI build check for macOS (Stamatis Zampetakis reviewed by 
Ayush Saxena)

Closes #3512
---
 .github/workflows/build.yml | 39 +++
 1 file changed, 39 insertions(+)

diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
new file mode 100644
index 000..eca43dd5feb
--- /dev/null
+++ b/.github/workflows/build.yml
@@ -0,0 +1,39 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+name: Build CI with different platforms/configs
+
+on:
+  push:
+branches:
+  - 'master'
+  pull_request:
+branches:
+  - 'master'
+
+jobs:
+  macos-jdk8:
+name: 'macOS (JDK 8)'
+runs-on: macos-latest
+steps:
+  - uses: actions/checkout@v2
+  - name: 'Set up JDK 8'
+uses: actions/setup-java@v1
+with:
+  java-version: 8
+  - name: 'Build project'
+run: |
+  mvn clean install -DskipTests -Pitests



[hive] branch master updated: HIVE-26277: NPEs and rounding issues in ColumnStatsAggregator classes (Alessandro Solimando reviewed by Stamatis Zampetakis)

2022-09-16 Thread zabetak

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new b6cbb2e6a2f HIVE-26277: NPEs and rounding issues in 
ColumnStatsAggregator classes (Alessandro Solimando reviewed by Stamatis 
Zampetakis)
b6cbb2e6a2f is described below

commit b6cbb2e6a2f3d3c5de565492c3f658cbf94d96fb
Author: Alessandro Solimando 
AuthorDate: Fri May 13 17:29:30 2022 +0200

HIVE-26277: NPEs and rounding issues in ColumnStatsAggregator classes 
(Alessandro Solimando reviewed by Stamatis Zampetakis)

1. Add and invoke checkStatisticsList to prevent NPEs in aggregators;
they all rely on a non-empty list of statistics.
2. Cast integers to double in divisions to make computations more
accurate and avoid rounding issues.
3. Align loggers names to match the class they are in and avoid
misleading log messages.
4. Add documentation for ndvtuner based on current understanding of how
it should work.

Closes #3339

Move (and complete) ndvTuner documentation from tests to production classes
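
The first two points of the commit message can be sketched as follows (illustrative Python; `check_statistics_list` and `aggregate_ndv` are simplified stand-ins, not the actual Hive metastore API): the aggregators must reject an empty statistics list up front, and ratios of integer counts must be computed in floating point to avoid truncation.

```python
# Hedged sketch of the two bug classes fixed here; names are illustrative.

def check_statistics_list(stats):
    # Mirrors the intent of checkStatisticsList: fail fast with a clear error
    # instead of a later NullPointerException deep inside the aggregator.
    if not stats:
        raise ValueError("The statistics list must not be empty")

def aggregate_ndv(ndvs):
    """Average NDV across partitions; `ndvs` is a list of per-partition counts."""
    check_statistics_list(ndvs)
    # In Java, dividing two ints truncates (7 / 2 == 3); casting to double
    # first, like the fix, keeps the fraction (7 / 2.0 == 3.5).
    return sum(ndvs) / float(len(ndvs))

print(aggregate_ndv([3, 4]))  # 3.5
```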
---
 .../aggr/BinaryColumnStatsAggregator.java  |   2 +
 .../aggr/BooleanColumnStatsAggregator.java |   2 +
 .../columnstats/aggr/ColumnStatsAggregator.java|  19 ++
 .../aggr/DateColumnStatsAggregator.java|  14 +-
 .../aggr/DecimalColumnStatsAggregator.java |   5 +-
 .../aggr/DoubleColumnStatsAggregator.java  |   2 +
 .../aggr/LongColumnStatsAggregator.java|  10 +-
 .../aggr/StringColumnStatsAggregator.java  |   4 +-
 .../aggr/TimestampColumnStatsAggregator.java   |  14 +-
 .../hadoop/hive/metastore/StatisticsTestUtils.java | 112 +
 .../metastore/columnstats/ColStatsBuilder.java | 187 ++
 .../aggr/BinaryColumnStatsAggregatorTest.java  | 101 
 .../aggr/BooleanColumnStatsAggregatorTest.java | 101 
 .../aggr/DateColumnStatsAggregatorTest.java| 270 
 .../aggr/DecimalColumnStatsAggregatorTest.java | 256 +++
 .../aggr/DoubleColumnStatsAggregatorTest.java  | 242 ++
 .../aggr/LongColumnStatsAggregatorTest.java| 242 ++
 .../aggr/StringColumnStatsAggregatorTest.java  | 188 ++
 .../aggr/TimestampColumnStatsAggregatorTest.java   | 273 +
 19 files changed, 2028 insertions(+), 16 deletions(-)

diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/columnstats/aggr/BinaryColumnStatsAggregator.java
 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/columnstats/aggr/BinaryColumnStatsAggregator.java
index c885cf2d44f..552c91835f7 100644
--- 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/columnstats/aggr/BinaryColumnStatsAggregator.java
+++ 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/columnstats/aggr/BinaryColumnStatsAggregator.java
@@ -32,6 +32,8 @@ public class BinaryColumnStatsAggregator extends 
ColumnStatsAggregator {
   @Override
   public ColumnStatisticsObj aggregate(List 
colStatsWithSourceInfo,
   List partNames, boolean areAllPartsFound) throws MetaException {
+checkStatisticsList(colStatsWithSourceInfo);
+
 ColumnStatisticsObj statsObj = null;
 String colType = null;
 String colName = null;
diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/columnstats/aggr/BooleanColumnStatsAggregator.java
 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/columnstats/aggr/BooleanColumnStatsAggregator.java
index 6fafab53e0f..9babeea8510 100644
--- 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/columnstats/aggr/BooleanColumnStatsAggregator.java
+++ 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/columnstats/aggr/BooleanColumnStatsAggregator.java
@@ -32,6 +32,8 @@ public class BooleanColumnStatsAggregator extends 
ColumnStatsAggregator {
   @Override
   public ColumnStatisticsObj aggregate(List 
colStatsWithSourceInfo,
   List partNames, boolean areAllPartsFound) throws MetaException {
+checkStatisticsList(colStatsWithSourceInfo);
+
 ColumnStatisticsObj statsObj = null;
 String colType = null;
 String colName = null;
diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/columnstats/aggr/ColumnStatsAggregator.java
 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/columnstats/aggr/ColumnStatsAggregator.java
index c4325763beb..144e71c69ec 100644
--- 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/columnstats

[hive] branch master updated (8ca211f24a0 -> 4cafe1f8b31)

2022-08-15 Thread zabetak

zabetak pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


from 8ca211f24a0 HIVE-26460: Upgrade Iceberg dependency to 0.14.0 (#3511) 
(Adam Szita, reviewed by Laszlo Pinter)
 new 241dfb1c99e HIVE-26458: Add explicit dependency to commons-dbcp2 in 
hive-exec module (Stamatis Zampetakis, reviewed by Ayush Saxena)
 new 4cafe1f8b31 HIVE-26196: Integrate Sonar analysis for master branch and 
PRs (Alessandro Solimando, reviewed by Stamatis Zampetakis, Zoltan Haindrich)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 Jenkinsfile | 40 
 pom.xml |  2 ++
 ql/pom.xml  |  4 
 3 files changed, 46 insertions(+)



[hive] 01/02: HIVE-26458: Add explicit dependency to commons-dbcp2 in hive-exec module (Stamatis Zampetakis, reviewed by Ayush Saxena)

2022-08-15 Thread zabetak

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 241dfb1c99e5a1a06608ab65da9cba45834755e6
Author: Stamatis Zampetakis 
AuthorDate: Fri Aug 5 20:05:55 2022 +0300

HIVE-26458: Add explicit dependency to commons-dbcp2 in hive-exec module 
(Stamatis Zampetakis, reviewed by Ayush Saxena)

Closes #3510
---
 ql/pom.xml | 4 
 1 file changed, 4 insertions(+)

diff --git a/ql/pom.xml b/ql/pom.xml
index 6e45fccc7b7..463e8e02f93 100644
--- a/ql/pom.xml
+++ b/ql/pom.xml
@@ -83,6 +83,10 @@
   commons-configuration
   ${commons-configuration.version}
 
+
+  org.apache.commons
+  commons-dbcp2
+
 
   org.apache.commons
   commons-math3



[hive] 02/02: HIVE-26196: Integrate Sonar analysis for master branch and PRs (Alessandro Solimando, reviewed by Stamatis Zampetakis, Zoltan Haindrich)

2022-08-15 Thread zabetak

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 4cafe1f8b319498a4dcc9705e25afd9a5d73e9bd
Author: Alessandro Solimando 
AuthorDate: Thu Apr 28 12:20:06 2022 +0200

HIVE-26196: Integrate Sonar analysis for master branch and PRs (Alessandro 
Solimando, reviewed by Stamatis Zampetakis, Zoltan Haindrich)

Closes #3254
---
 Jenkinsfile | 40 
 pom.xml |  2 ++
 2 files changed, 42 insertions(+)

diff --git a/Jenkinsfile b/Jenkinsfile
index 09b6cd377d8..fab48ede662 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -103,6 +103,21 @@ df -h
   }
 }
 
+def sonarAnalysis(args) {
+  withCredentials([string(credentialsId: 'sonar', variable: 'SONAR_TOKEN')]) {
+  def mvnCmd = """mvn 
org.sonarsource.scanner.maven:sonar-maven-plugin:3.9.1.2184:sonar \
+  -Dsonar.organization=apache \
+  -Dsonar.projectKey=apache_hive \
+  -Dsonar.host.url=https://sonarcloud.io \
+  """+args+" -DskipTests -Dit.skipTests -Dmaven.javadoc.skip"
+
+  sh """#!/bin/bash -e
+  sw java 11 && . /etc/profile.d/java.sh
+  export MAVEN_OPTS=-Xmx5G
+  """+mvnCmd
+  }
+}
+
 def hdbPodTemplate(closure) {
   podTemplate(
   containers: [
@@ -304,6 +319,31 @@ tar -xzf 
packaging/target/apache-hive-*-nightly-*-src.tar.gz
 }
   }
   }
+  branches['sonar'] = {
+  executorNode {
+  if(env.CHANGE_BRANCH == 'master') {
+  stage('Prepare') {
+  loadWS();
+  }
+  stage('Sonar') {
+  sonarAnalysis("-Dsonar.branch.name=${CHANGE_BRANCH}")
+  }
+  } else if(env.CHANGE_ID) {
+  stage('Prepare') {
+  loadWS();
+  }
+  stage('Sonar') {
+  
sonarAnalysis("""-Dsonar.pullrequest.github.repository=apache/hive \
+   -Dsonar.pullrequest.key=${CHANGE_ID} \
+   -Dsonar.pullrequest.branch=${CHANGE_BRANCH} 
\
+   -Dsonar.pullrequest.base=${CHANGE_TARGET} \
+   -Dsonar.pullrequest.provider=GitHub""")
+  }
+  } else {
+  echo "Skipping sonar analysis, we only run it on PRs and on the 
master branch"
+  }
+  }
+  }
   for (int i = 0; i < splits.size(); i++) {
 def num = i
 def split = splits[num]
diff --git a/pom.xml b/pom.xml
index 7d6c56994e7..325856ada73 100644
--- a/pom.xml
+++ b/pom.xml
@@ -72,6 +72,8 @@
 .
 standalone
 
${basedir}/${hive.path.to.root}/checkstyle
+
+${project.groupId}:${project.artifactId}
 
 
 



[hive] branch master updated: HIVE-26438: Remove unnecessary optimization in canHandleQbForCbo (Abhay Chennagiri reviewed by John Sherman, Stamatis Zampetakis)

2022-08-10 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new ec284379cba HIVE-26438: Remove unnecessary optimization in 
canHandleQbForCbo (Abhay Chennagiri reviewed by John Sherman, Stamatis 
Zampetakis)
ec284379cba is described below

commit ec284379cba24ad38bee6eac686ccc8fa5b3856b
Author: Abhay Chennagiri 
AuthorDate: Fri Jul 29 17:25:08 2022 -0700

HIVE-26438: Remove unnecessary optimization in canHandleQbForCbo (Abhay 
Chennagiri reviewed by John Sherman, Stamatis Zampetakis)

Closes #3487
---
 .../hadoop/hive/ql/parse/CalcitePlanner.java   | 63 --
 1 file changed, 23 insertions(+), 40 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java 
b/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
index 79dc618a541..7e114283a41 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
@@ -946,7 +946,7 @@ public class CalcitePlanner extends SemanticAnalyzer {
 
 // Now check QB in more detail. canHandleQbForCbo returns null if query can
 // be handled.
-msg = CalcitePlanner.canHandleQbForCbo(queryProperties, conf, true, 
needToLogMessage);
+msg = CalcitePlanner.canHandleQbForCbo(queryProperties, conf, true);
 if (msg == null) {
   return Pair.of(true, msg);
 }
@@ -964,8 +964,6 @@ public class CalcitePlanner extends SemanticAnalyzer {
* @param conf
* @param topLevelQB
*  Does QB corresponds to top most query block?
-   * @param verbose
-   *  Whether return value should be verbose in case of failure.
* @return null if the query can be handled; non-null reason string if it
* cannot be.
*
@@ -974,44 +972,29 @@ public class CalcitePlanner extends SemanticAnalyzer {
* Query
* 2. Nested Subquery will return false for qbToChk.getIsQuery()
*/
-  private static String canHandleQbForCbo(QueryProperties queryProperties, 
HiveConf conf,
-  boolean topLevelQB, boolean verbose) {
-
-if (!queryProperties.hasClusterBy() && !queryProperties.hasDistributeBy()
-&& !(queryProperties.hasSortBy() && queryProperties.hasLimit())
-&& !queryProperties.hasPTF() && !queryProperties.usesScript()
-&& queryProperties.isCBOSupportedLateralViews()) {
-  // Ok to run CBO.
-  return null;
-}
-
+  private static String canHandleQbForCbo(QueryProperties queryProperties,
+  HiveConf conf, boolean topLevelQB) {
+List reasons = new ArrayList<>();
 // Not ok to run CBO, build error message.
-String msg = "";
-if (verbose) {
-  if (queryProperties.hasClusterBy()) {
-msg += "has cluster by; ";
-  }
-  if (queryProperties.hasDistributeBy()) {
-msg += "has distribute by; ";
-  }
-  if (queryProperties.hasSortBy() && queryProperties.hasLimit()) {
-msg += "has sort by with limit; ";
-  }
-  if (queryProperties.hasPTF()) {
-msg += "has PTF; ";
-  }
-  if (queryProperties.usesScript()) {
-msg += "uses scripts; ";
-  }
-  if (queryProperties.hasLateralViews()) {
-msg += "has lateral views; ";
-  }
-  if (msg.isEmpty()) {
-msg += "has some unspecified limitations; ";
-  }
-  msg = msg.substring(0, msg.length() - 2);
+if (queryProperties.hasClusterBy()) {
+  reasons.add("has cluster by");
+}
+if (queryProperties.hasDistributeBy()) {
+  reasons.add("has distribute by");
+}
+if (queryProperties.hasSortBy() && queryProperties.hasLimit()) {
+  reasons.add("has sort by with limit");
+}
+if (queryProperties.hasPTF()) {
+  reasons.add("has PTF");
+}
+if (queryProperties.usesScript()) {
+  reasons.add("uses scripts");
+}
+if (!queryProperties.isCBOSupportedLateralViews()) {
+  reasons.add("has lateral views");
 }
-return msg;
+return reasons.isEmpty() ? null : String.join("; ", reasons);
   }
 
   /* This method inserts the right profiles into profiles CBO depending
@@ -5025,7 +5008,7 @@ public class CalcitePlanner extends SemanticAnalyzer {
 
   // 0. Check if we can handle the SubQuery;
   // canHandleQbForCbo returns null if the query can be handled.
-  String reason = canHandleQbForCbo(queryProperties, conf, false, 
LOG.isDebugEnabled());
+  String reason = canHandleQbForCbo(queryProperties, conf, false);
   if (reason != null) {
 String msg = "CBO can not handle Sub Query";
 if (LOG.isDebugEnabled()) {
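
The refactoring pattern in the diff above — collect the failure reasons in a list and join them, instead of string concatenation followed by a fragile `substring(0, length - 2)` trim — can be sketched as follows (illustrative Python; the property names are hypothetical stand-ins for the `QueryProperties` getters):

```python
# Sketch of the reasons-list pattern from the refactored canHandleQbForCbo:
# returns None when CBO can run, otherwise a "; "-joined reason string.

def can_handle_for_cbo(props):
    # `props` is a hypothetical dict of boolean query properties.
    reasons = []
    if props.get("cluster_by"):
        reasons.append("has cluster by")
    if props.get("distribute_by"):
        reasons.append("has distribute by")
    if props.get("sort_by") and props.get("limit"):
        reasons.append("has sort by with limit")
    if props.get("ptf"):
        reasons.append("has PTF")
    if props.get("script"):
        reasons.append("uses scripts")
    if not props.get("cbo_supported_lateral_views", True):
        reasons.append("has lateral views")
    # No manual trimming needed: join inserts separators only between items.
    return None if not reasons else "; ".join(reasons)

print(can_handle_for_cbo({"cluster_by": True, "ptf": True}))
# cluster by; PTF reasons joined as: "has cluster by; has PTF"
```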



[hive] 01/02: HIVE-26350: IndexOutOfBoundsException when generating splits for external JDBC table with partition column (Stamatis Zampetakis reviewed by Krisztian Kasa, Aman Sinha)

2022-08-02 Thread zabetak

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit a0c500ba668a74be4acafaf76a4b94eb5353268c
Author: Stamatis Zampetakis 
AuthorDate: Wed Jun 22 18:42:56 2022 +0200

HIVE-26350: IndexOutOfBoundsException when generating splits for external 
JDBC table with partition column (Stamatis Zampetakis reviewed by Krisztian 
Kasa, Aman Sinha)

1. Introduce new API DatabaseAccessor#getColumnTypes to:
* allow fetching column types from the database;
* align with the code using DatabaseAccessor#getColumnNames.
2. Use the new API to find the type of the partition column in
JdbcInputFormat since information is not propagated correctly to
LIST_COLUMN_TYPES and leads to IOBE.
3. Some refactoring in GenericJdbcDatabaseAccessor to avoid duplicate
code with the introduction of the new API.
4. Add test reproducing the IOBE problem, and tests for the new API.
5. Adapt existing tests based on the changes.

Closes #3470
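
The positional lookup at the heart of the fix can be sketched as follows (illustrative Python; the function and parameter names are hypothetical, not the Hive API). The IndexOutOfBoundsException arose because the type list came from a configuration property that could be shorter than the column-name list; the fix makes both lists come from the same `DatabaseAccessor`, so looking up a type by the name's index is safe:

```python
# Sketch of the one-to-one name/type correspondence the new API guarantees.

def partition_column_type(column_names, column_types, partition_column):
    """Return the type of `partition_column`, assuming parallel lists."""
    if partition_column not in column_names:
        raise ValueError(
            f"Cannot find partitionColumn:{partition_column} in {column_names}")
    # Safe only because both lists come from the same accessor and have
    # exactly one type entry per column name, in the same order.
    return column_types[column_names.index(partition_column)]

print(partition_column_type(["id", "dt"], ["string", "timestamp"], "dt"))
# timestamp
```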
---
 .../apache/hive/storage/jdbc/JdbcInputFormat.java  |  4 +-
 .../hive/storage/jdbc/dao/DatabaseAccessor.java| 19 ++
 .../jdbc/dao/GenericJdbcDatabaseAccessor.java  | 76 +++---
 .../hive/storage/jdbc/TestJdbcInputFormat.java | 25 +--
 .../jdbc/dao/TestGenericJdbcDatabaseAccessor.java  | 58 +
 jdbc-handler/src/test/resources/test_script.sql| 35 +-
 .../jdbc_partition_table_pruned_pcolumn.q  | 23 +++
 .../llap/jdbc_partition_table_pruned_pcolumn.q.out | 52 +++
 8 files changed, 273 insertions(+), 19 deletions(-)

diff --git 
a/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/JdbcInputFormat.java 
b/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/JdbcInputFormat.java
index 14c5a777965..ecb7b2ec0cc 100644
--- 
a/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/JdbcInputFormat.java
+++ 
b/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/JdbcInputFormat.java
@@ -20,10 +20,8 @@ import org.apache.commons.lang3.tuple.Pair;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hive.conf.Constants;
 import org.apache.hadoop.hive.ql.io.HiveInputFormat;
-import org.apache.hadoop.hive.serde.serdeConstants;
 import org.apache.hadoop.hive.serde2.typeinfo.PrimitiveTypeInfo;
 import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;
-import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils;
 import org.apache.hadoop.io.LongWritable;
 import org.apache.hadoop.io.MapWritable;
 import org.apache.hadoop.mapred.FileInputFormat;
@@ -98,7 +96,7 @@ public class JdbcInputFormat extends 
HiveInputFormat
 if (!columnNames.contains(partitionColumn)) {
   throw new IOException("Cannot find partitionColumn:" + 
partitionColumn + " in " + columnNames);
 }
-List hiveColumnTypesList = 
TypeInfoUtils.getTypeInfosFromTypeString(job.get(serdeConstants.LIST_COLUMN_TYPES));
+List hiveColumnTypesList = dbAccessor.getColumnTypes(job);
 TypeInfo typeInfo = 
hiveColumnTypesList.get(columnNames.indexOf(partitionColumn));
 if (!(typeInfo instanceof PrimitiveTypeInfo)) {
   throw new IOException(partitionColumn + " is a complex type, only 
primitive type can be a partition column");
diff --git 
a/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/dao/DatabaseAccessor.java
 
b/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/dao/DatabaseAccessor.java
index 654205d1850..11fcfed1939 100644
--- 
a/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/dao/DatabaseAccessor.java
+++ 
b/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/dao/DatabaseAccessor.java
@@ -18,6 +18,7 @@ import org.apache.commons.lang3.tuple.Pair;
 import org.apache.hadoop.conf.Configuration;
 
 import org.apache.hadoop.hive.serde2.typeinfo.PrimitiveTypeInfo;
+import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;
 import org.apache.hadoop.mapreduce.RecordWriter;
 import org.apache.hadoop.mapreduce.TaskAttemptContext;
 import org.apache.hive.storage.jdbc.exception.HiveJdbcDatabaseAccessException;
@@ -29,6 +30,24 @@ public interface DatabaseAccessor {
 
   List getColumnNames(Configuration conf) throws 
HiveJdbcDatabaseAccessException;
 
+  /**
+   * Returns a list of types for the columns in the specified configuration.
+   *
+   * The type must represent as close as possible the respective type of the 
column stored in the
+   * database. Since it does not exist an exact mapping between database types 
and Hive types the
+   * result is approximate. When it is not possible to derive a type for a 
given column the 
+   * {@link 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory#unknownTypeInfo} is used.
+   *
+   * There is a one-to-one correspondence between the types returned in this 
method and the column
+   * names obtained with {@link #getColumnNames(Configurati

[hive] 02/02: HIVE-26440: Duplicate hive-standalone-metastore-server dependency in QFile module (Stamatis Zampetakis reviewed by Ayush Saxena)

2022-08-02 Thread zabetak

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit e0f2d287c562423dc2632910aae4f1cd8bcd4b4d
Author: Stamatis Zampetakis 
AuthorDate: Mon Aug 1 15:47:10 2022 +0300

HIVE-26440: Duplicate hive-standalone-metastore-server dependency in QFile 
module (Stamatis Zampetakis reviewed by Ayush Saxena)

Closes #3490
---
 itests/qtest/pom.xml | 6 --
 1 file changed, 6 deletions(-)

diff --git a/itests/qtest/pom.xml b/itests/qtest/pom.xml
index bc58476789d..fc975fe9b28 100644
--- a/itests/qtest/pom.xml
+++ b/itests/qtest/pom.xml
@@ -64,12 +64,6 @@
   tests
   test
 
-
-  org.apache.hive
-  hive-standalone-metastore-server
-  tests
-  test
-
 
   org.apache.hive
   hive-it-custom-serde



[hive] branch master updated (7e2af57fccc -> e0f2d287c56)

2022-08-02 Thread zabetak

zabetak pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


from 7e2af57fccc HIVE-26425: Skip SSL cert verification for downloading 
JWKS in HS2 (#3473)
 new a0c500ba668 HIVE-26350: IndexOutOfBoundsException when generating 
splits for external JDBC table with partition column (Stamatis Zampetakis 
reviewed by Krisztian Kasa, Aman Sinha)
 new e0f2d287c56 HIVE-26440: Duplicate hive-standalone-metastore-server 
dependency in QFile module (Stamatis Zampetakis reviewed by Ayush Saxena)



Summary of changes:
 itests/qtest/pom.xml   |  6 --
 .../apache/hive/storage/jdbc/JdbcInputFormat.java  |  4 +-
 .../hive/storage/jdbc/dao/DatabaseAccessor.java| 19 ++
 .../jdbc/dao/GenericJdbcDatabaseAccessor.java  | 76 +++---
 .../hive/storage/jdbc/TestJdbcInputFormat.java | 25 +--
 .../jdbc/dao/TestGenericJdbcDatabaseAccessor.java  | 58 +
 jdbc-handler/src/test/resources/test_script.sql| 35 +-
 .../jdbc_partition_table_pruned_pcolumn.q  | 23 +++
 .../llap/jdbc_partition_table_pruned_pcolumn.q.out | 52 +++
 9 files changed, 273 insertions(+), 25 deletions(-)
 create mode 100644 
ql/src/test/queries/clientpositive/jdbc_partition_table_pruned_pcolumn.q
 create mode 100644 
ql/src/test/results/clientpositive/llap/jdbc_partition_table_pruned_pcolumn.q.out



[calcite] branch main updated: [CALCITE-5221] Upgrade Avatica version to 1.22.0

2022-07-28 Thread zabetak

zabetak pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/main by this push:
 new 657a3d352f [CALCITE-5221] Upgrade Avatica version to 1.22.0
657a3d352f is described below

commit 657a3d352ff81ef54f2bc0be6884363b49741305
Author: Stamatis Zampetakis 
AuthorDate: Thu Jul 28 15:21:48 2022 +0300

[CALCITE-5221] Upgrade Avatica version to 1.22.0
---
 gradle.properties | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/gradle.properties b/gradle.properties
index e30b05d767..9d746286bc 100644
--- a/gradle.properties
+++ b/gradle.properties
@@ -29,7 +29,7 @@ systemProp.org.gradle.internal.publish.checksums.insecure=true
 # Release version can be generated by using -Prelease or -Prc= arguments
 calcite.version=1.31.0
 # This is a version to be used from Maven repository. It can be overridden by 
localAvatica below
-calcite.avatica.version=1.21.0
+calcite.avatica.version=1.22.0
 
 # The options below configures the use of local clone (e.g. testing 
development versions)
 # You can pass un-comment it, or pass option -PlocalReleasePlugins, or 
-PlocalReleasePlugins=



[hive] branch master updated: HIVE-26426: StringIndexOutOfBoundsException in CalcitePlanner#canCBOHandleAst (Abhay Chennagiri reviewed by John Sherman, Stamatis Zampetakis)

2022-07-28 Thread zabetak

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 186fb0d85af HIVE-26426: StringIndexOutOfBoundsException in 
CalcitePlanner#canCBOHandleAst (Abhay Chennagiri reviewed by John Sherman, 
Stamatis Zampetakis)
186fb0d85af is described below

commit 186fb0d85af63b61bc10ba5372e35895754b1a6a
Author: Abhay Chennagiri 
AuthorDate: Fri Jul 22 19:14:30 2022 -0700

HIVE-26426: StringIndexOutOfBoundsException in 
CalcitePlanner#canCBOHandleAst (Abhay Chennagiri reviewed by John Sherman, 
Stamatis Zampetakis)

Closes #3474
---
 ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java 
b/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
index 765e2e46463..79dc618a541 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
@@ -950,7 +950,7 @@ public class CalcitePlanner extends SemanticAnalyzer {
 if (msg == null) {
   return Pair.of(true, msg);
 }
-msg = msg.substring(0, msg.length() - 2);
+
 if (needToLogMessage) {
   STATIC_LOG.info("Not invoking CBO because the statement " + msg);
 }
@@ -1006,10 +1006,10 @@ public class CalcitePlanner extends SemanticAnalyzer {
   if (queryProperties.hasLateralViews()) {
 msg += "has lateral views; ";
   }
-
   if (msg.isEmpty()) {
 msg += "has some unspecified limitations; ";
   }
+  msg = msg.substring(0, msg.length() - 2);
 }
 return msg;
   }



[hive] branch master updated: HIVE-26373: ClassCastException when reading timestamps from HBase table with Avro data (Soumyakanti Das reviewed by Stamatis Zampetakis)

2022-07-08 Thread zabetak

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 97d7630bca1 HIVE-26373: ClassCastException when reading timestamps 
from HBase table with Avro data (Soumyakanti Das reviewed by Stamatis 
Zampetakis)
97d7630bca1 is described below

commit 97d7630bca10e96229519ab397f5cf122e5622e3
Author: Soumyakanti Das 
AuthorDate: Tue Jul 5 15:32:53 2022 -0700

HIVE-26373: ClassCastException when reading timestamps from HBase table 
with Avro data (Soumyakanti Das reviewed by Stamatis Zampetakis)

Closes #3418
---
 data/files/nested_ts.avsc  | 27 
 .../queries/positive/hbase_avro_nested_timestamp.q | 22 ++
 .../positive/hbase_avro_nested_timestamp.q.out | 45 +++
 .../apache/hadoop/hive/hbase/HBaseTestSetup.java   | 51 ++
 .../hive/serde2/avro/AvroLazyObjectInspector.java  |  3 +-
 5 files changed, 147 insertions(+), 1 deletion(-)

diff --git a/data/files/nested_ts.avsc b/data/files/nested_ts.avsc
new file mode 100644
index 000..eac0ad29475
--- /dev/null
+++ b/data/files/nested_ts.avsc
@@ -0,0 +1,27 @@
+{
+  "type": "record",
+  "name": "TableRecord",
+  "namespace": "org.apache.hive",
+  "fields": [
+{
+  "name": "id",
+  "type": "string"
+},
+{
+  "name": "dischargedate",
+  "type": {
+"name": "DateRecord",
+"type": "record",
+"fields": [
+  {
+"name": "value",
+"type": {
+  "type": "long",
+  "logicalType": "timestamp-millis"
+}
+  }
+]
+  }
+}
+  ]
+}
diff --git 
a/hbase-handler/src/test/queries/positive/hbase_avro_nested_timestamp.q 
b/hbase-handler/src/test/queries/positive/hbase_avro_nested_timestamp.q
new file mode 100644
index 000..5f3a22cc51a
--- /dev/null
+++ b/hbase-handler/src/test/queries/positive/hbase_avro_nested_timestamp.q
@@ -0,0 +1,22 @@
+dfs -cp ${system:hive.root}data/files/nested_ts.avsc 
${system:test.tmp.dir}/nested_ts.avsc;
+
+CREATE EXTERNAL TABLE hbase_avro_table(
+`key` string COMMENT '',
+`data_frv4` struct<`id`:string, `dischargedate`:struct<`value`:timestamp>>)
+ROW FORMAT SERDE
+  'org.apache.hadoop.hive.hbase.HBaseSerDe'
+STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
+WITH SERDEPROPERTIES (
+'serialization.format'='1',
+'hbase.columns.mapping' = ':key,data:frV4',
+'data.frV4.serialization.type'='avro',
+'data.frV4.avro.schema.url'='${system:test.tmp.dir}/nested_ts.avsc'
+)
+TBLPROPERTIES (
+'hbase.table.name' = 'HiveAvroTable',
+'hbase.struct.autogenerate'='true');
+
+set hive.vectorized.execution.enabled=false;
+set hive.fetch.task.conversion=none;
+
+select data_frV4.dischargedate.value from hbase_avro_table;
diff --git 
a/hbase-handler/src/test/results/positive/hbase_avro_nested_timestamp.q.out 
b/hbase-handler/src/test/results/positive/hbase_avro_nested_timestamp.q.out
new file mode 100644
index 000..6f08b83e3cf
--- /dev/null
+++ b/hbase-handler/src/test/results/positive/hbase_avro_nested_timestamp.q.out
@@ -0,0 +1,45 @@
+PREHOOK: query: CREATE EXTERNAL TABLE hbase_avro_table(
+`key` string COMMENT '',
+`data_frv4` struct<`id`:string, `dischargedate`:struct<`value`:timestamp>>)
+ROW FORMAT SERDE
+  'org.apache.hadoop.hive.hbase.HBaseSerDe'
+STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
+WITH SERDEPROPERTIES (
+'serialization.format'='1',
+'hbase.columns.mapping' = ':key,data:frV4',
+'data.frV4.serialization.type'='avro',
+ A masked pattern was here 
+)
+TBLPROPERTIES (
+'hbase.table.name' = 'HiveAvroTable',
+'hbase.struct.autogenerate'='true')
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@hbase_avro_table
+POSTHOOK: query: CREATE EXTERNAL TABLE hbase_avro_table(
+`key` string COMMENT '',
+`data_frv4` struct<`id`:string, `dischargedate`:struct<`value`:timestamp>>)
+ROW FORMAT SERDE
+  'org.apache.hadoop.hive.hbase.HBaseSerDe'
+STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
+WITH SERDEPROPERTIES (
+'serialization.format'='1',
+'hbase.columns.mapping' = ':key,data:frV4',
+'data.frV4.serialization.type'='avro',
+ A masked pattern was here 
+)
+TBLPROPERTIES (
+'hbase.table.name' = 'HiveAvroTable',
+'hbase.struct.autogenerate'='true')
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@hbase_avro_table
+PREHOOK: query: select data_frV4.dischargedate.value from hbase_avro_table
+PREHOOK: type: QUERY
+PRE

[hive] branch master updated: HIVE-26349: TestOperatorCmp/TestReOptimization fail silently due to incompatible configuration (Stamatis Zampetakis, reviewed by Peter Vary, Ayush Saxena)

2022-07-06 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 2f619988f69 HIVE-26349: TestOperatorCmp/TestReOptimization fail 
silently due to incompatible configuration (Stamatis Zampetakis, reviewed by 
Peter Vary, Ayush Saxena)
2f619988f69 is described below

commit 2f619988f69a569bfcdc2bef5d35a9ecabb2ef13
Author: Stamatis Zampetakis 
AuthorDate: Wed Jun 22 14:19:30 2022 +0200

HIVE-26349: TestOperatorCmp/TestReOptimization fail silently due to 
incompatible configuration (Stamatis Zampetakis, reviewed by Peter Vary, Ayush 
Saxena)

Closes #3398
---
 ql/src/test/org/apache/hadoop/hive/ql/plan/mapping/TestOperatorCmp.java  | 1 +
 .../test/org/apache/hadoop/hive/ql/plan/mapping/TestReOptimization.java  | 1 +
 2 files changed, 2 insertions(+)

diff --git 
a/ql/src/test/org/apache/hadoop/hive/ql/plan/mapping/TestOperatorCmp.java 
b/ql/src/test/org/apache/hadoop/hive/ql/plan/mapping/TestOperatorCmp.java
index 60241a15ff7..e5fcc3a0d76 100644
--- a/ql/src/test/org/apache/hadoop/hive/ql/plan/mapping/TestOperatorCmp.java
+++ b/ql/src/test/org/apache/hadoop/hive/ql/plan/mapping/TestOperatorCmp.java
@@ -194,6 +194,7 @@ public class TestOperatorCmp {
 conf.setBoolVar(ConfVars.HIVE_QUERY_REEXECUTION_ENABLED, true);
 conf.setBoolVar(ConfVars.HIVE_VECTORIZATION_ENABLED, false);
 
conf.setBoolVar(ConfVars.HIVE_QUERY_REEXECUTION_ALWAYS_COLLECT_OPERATOR_STATS, 
true);
+conf.setVar(ConfVars.HIVE_CBO_FALLBACK_STRATEGY, "NEVER");
 conf.setVar(ConfVars.HIVE_QUERY_REEXECUTION_STRATEGIES, "reoptimize");
 conf.set("zzz", "1");
 conf.set("reexec.overlay.zzz", "2000");
diff --git 
a/ql/src/test/org/apache/hadoop/hive/ql/plan/mapping/TestReOptimization.java 
b/ql/src/test/org/apache/hadoop/hive/ql/plan/mapping/TestReOptimization.java
index e283ddda81a..b67385737ef 100644
--- a/ql/src/test/org/apache/hadoop/hive/ql/plan/mapping/TestReOptimization.java
+++ b/ql/src/test/org/apache/hadoop/hive/ql/plan/mapping/TestReOptimization.java
@@ -294,6 +294,7 @@ public class TestReOptimization {
 
 conf.setBoolVar(ConfVars.HIVE_QUERY_REEXECUTION_ENABLED, true);
 conf.setBoolVar(ConfVars.HIVE_VECTORIZATION_ENABLED, false);
+conf.setVar(ConfVars.HIVE_CBO_FALLBACK_STRATEGY, "NEVER");
 conf.setVar(ConfVars.HIVE_QUERY_REEXECUTION_STRATEGIES, strategies);
 conf.setBoolVar(ConfVars.HIVE_EXPLAIN_USER, true);
 conf.set("zzz", "1");



[hive] branch master updated: HIVE-26021: Change integration tests under DBInstallBase to Checkin tests (Stamatis Zampetakis, reviewed by Peter Vary)

2022-06-23 Thread zabetak

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 67ef629486b HIVE-26021: Change integration tests under DBInstallBase 
to Checkin tests (Stamatis Zampetakis, reviewed by Peter Vary)
67ef629486b is described below

commit 67ef629486ba38b1d3e0f400bee0073fa3c4e989
Author: Stamatis Zampetakis 
AuthorDate: Mon Jun 20 13:38:20 2022 +0200

HIVE-26021: Change integration tests under DBInstallBase to Checkin tests 
(Stamatis Zampetakis, reviewed by Peter Vary)

Drop failsafe plugin since it is no longer needed.

Rename tests to better reflect their purpose (not integration tests
anymore) but more importantly to allow Jenkins splitTests step (
https://plugins.jenkins.io/parallel-test-executor/) to pick them up
automatically.

Extend test coverage to include mysql and mssql install/upgrade tests.

Closes #3399
---
 Jenkinsfile| 11 +-
 standalone-metastore/DEV-README| 39 --
 standalone-metastore/metastore-server/pom.xml  | 31 -
 .../hive/metastore/dbinstall/DbInstallBase.java|  3 ++
 .../dbinstall/{ITestDerby.java => TestDerby.java}  |  2 +-
 .../dbinstall/{ITestMssql.java => TestMssql.java}  |  2 +-
 .../dbinstall/{ITestMysql.java => TestMysql.java}  |  2 +-
 .../{ITestOracle.java => TestOracle.java}  |  2 +-
 .../{ITestPostgres.java => TestPostgres.java}  |  2 +-
 9 files changed, 22 insertions(+), 72 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 79052c64e42..09b6cd377d8 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -93,7 +93,7 @@ OPTS+=" 
-Dorg.slf4j.simpleLogger.log.org.apache.maven.plugin.surefire.SurefirePl
 OPTS+=" -Dmaven.repo.local=$PWD/.git/m2"
 git config extra.mavenOpts "$OPTS"
 OPTS=" $M_OPTS -Dmaven.test.failure.ignore "
-if [ -s inclusions.txt ]; then OPTS+=" 
-Dsurefire.includesFile=$PWD/inclusions.txt"; sed -i '/\\/ITest/d' 
$PWD/inclusions.txt;fi
+if [ -s inclusions.txt ]; then OPTS+=" 
-Dsurefire.includesFile=$PWD/inclusions.txt";fi
 if [ -s exclusions.txt ]; then OPTS+=" 
-Dsurefire.excludesFile=$PWD/exclusions.txt";fi
 mvn $OPTS '''+args+'''
 du -h --max-depth=1
@@ -279,15 +279,6 @@ reinit_metastore $dbType
 time docker rm -f dev_$dbType || true
 '''
   }
-  stage('verify') {
-try {
-  sh """#!/bin/bash -e
-mvn verify -DskipITests=false -Dit.test=ITest${dbType.capitalize()} 
-Dtest=nosuch -pl standalone-metastore/metastore-server 
-Dmaven.test.failure.ignore -B
-"""
-} finally {
-  junit '**/TEST-*.xml'
-}
-  }
 }
   }
 }
diff --git a/standalone-metastore/DEV-README b/standalone-metastore/DEV-README
index f0f5cacfbcf..731a09bfc06 100644
--- a/standalone-metastore/DEV-README
+++ b/standalone-metastore/DEV-README
@@ -10,7 +10,7 @@ checkin tests before loading a patch.
 To run just the checkin tests:
 'mvn test 
-Dtest.groups=org.apache.hadoop.hive.metastore.annotation.MetastoreCheckinTest'
 
-To run all of the tests (exclusive of the databases tests, see below):
+To run all the tests:
 'mvn test -Dtest.groups=""'.  At the moment this takes around 25 minutes.
 
 When adding a test, if you want it to run as part of the unit tests annotate it
@@ -23,47 +23,34 @@ quick test can be done for the unit tests and more in depth 
testing as part
 of the checkin tests.
 
 

-There are integration tests for testing installation and upgrade of the
+There are checkin tests for testing installation and upgrade of the
 metastore on Derby, MySQL (actually MariaDB is used), Oracle, Postgres, and 
SQLServer.
 
-Each ITest runs two tests, one that installs the latest version of the
+For each DB type we run two tests, one that installs the latest version of the
 database and one that installs the latest version minus one and then upgrades
 the database.
 
-To run the tests you will need to explicitly turn on integration testing by
-setting skipITests variable to false. The tests rely on Docker so the latter
-needs to be installed and configured properly (e.g., memory more than 3.5GB).
-
-Run all tests:
-
-mvn verify -DskipITests=false -Dtest=nosuch
+The tests (except Derby) rely on Docker so the latter needs to be installed 
and configured
+properly (e.g., memory more than 3.5GB).
 
 Run a single test:
 
-mvn verify -DskipITests=false -Dit.test=ITestMysql -Dtest=nosuch
+mvn test -Dtest.groups=MetastoreCheckinTest -Dtest=TestDerby
 
 Supported databases for testing:
--Dit.test=ITestDerby
--Dit.test=ITestMysql
--Dit.test=ITestOracle
--Dit.test
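The two-tests-per-database pattern described in the DEV-README above (a fresh install of the latest schema, and an install of latest-minus-one followed by an upgrade) can be sketched as follows. All names here — the version strings and the install/upgrade methods — are hypothetical stand-ins for illustration, not the actual DbInstallBase API.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the two tests run per database type: a fresh install of the
// latest schema, and an install of the previous schema followed by an upgrade.
public class InstallUpgradeSketch {
    static final String LATEST = "4.0.0";    // assumed version strings, for
    static final String PREVIOUS = "3.1.0";  // illustration only

    final List<String> steps = new ArrayList<>();

    void install(String version) { steps.add("install " + version); }

    void upgrade(String from, String to) { steps.add("upgrade " + from + " -> " + to); }

    // Test 1: install the latest schema from scratch.
    List<String> freshInstall() {
        install(LATEST);
        return steps;
    }

    // Test 2: install the previous schema, then upgrade it to the latest.
    List<String> installThenUpgrade() {
        install(PREVIOUS);
        upgrade(PREVIOUS, LATEST);
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(new InstallUpgradeSketch().freshInstall());
        System.out.println(new InstallUpgradeSketch().installThenUpgrade());
    }
}
```

In the real tests each step runs against a Docker-hosted database rather than an in-memory list, which is why the README calls out the Docker memory requirement.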

[hive] branch master updated: HIVE-26343: Disable TestWebHCatE2e test cause it fails

2022-06-21 Thread zabetak

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 2a0f91e2503 HIVE-26343: Disable TestWebHCatE2e test cause it fails
2a0f91e2503 is described below

commit 2a0f91e2503ac76bad0370b29fbcc95ad162b560
Author: Stamatis Zampetakis 
AuthorDate: Mon Jun 20 16:47:45 2022 +0200

HIVE-26343: Disable TestWebHCatE2e test cause it fails

Closes #3390
---
 .../src/test/java/org/apache/hive/hcatalog/templeton/TestWebHCatE2e.java | 1 +
 1 file changed, 1 insertion(+)

diff --git 
a/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/TestWebHCatE2e.java
 
b/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/TestWebHCatE2e.java
index dc1bb7d9867..ddfa18cc4b1 100644
--- 
a/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/TestWebHCatE2e.java
+++ 
b/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/TestWebHCatE2e.java
@@ -61,6 +61,7 @@ import org.junit.Assert;
  *
  * It may be possible to extend this to more than just DDL later.
  */
+@Ignore("HIVE-26343")
 public class TestWebHCatE2e {
   private static final Logger LOG =
   LoggerFactory.getLogger(TestWebHCatE2e.class);



[hive] branch master updated: HIVE-26310: Remove unused junit runners from test-utils module (Stamatis Zampetakis, reviewed by Ayush Saxena)

2022-06-20 Thread zabetak

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 948f9fb56a0 HIVE-26310: Remove unused junit runners from test-utils 
module (Stamatis Zampetakis, reviewed by Ayush Saxena)
948f9fb56a0 is described below

commit 948f9fb56a00e981cd653146de44ae82307b4f2f
Author: Stamatis Zampetakis 
AuthorDate: Fri Jun 10 14:29:09 2022 +0200

HIVE-26310: Remove unused junit runners from test-utils module (Stamatis 
Zampetakis, reviewed by Ayush Saxena)

Closes #3358
---
 testutils/pom.xml  |  4 --
 .../junit/runners/ConcurrentTestRunner.java| 62 
 .../junit/runners/model/ConcurrentScheduler.java   | 68 --
 3 files changed, 134 deletions(-)

diff --git a/testutils/pom.xml b/testutils/pom.xml
index 431c4e32601..68a1205ac0e 100644
--- a/testutils/pom.xml
+++ b/testutils/pom.xml
@@ -29,10 +29,6 @@
   
 
 
-
-  com.google.code.tempus-fugit
-  tempus-fugit
-
 
   junit
   junit
diff --git 
a/testutils/src/java/org/apache/hive/testutils/junit/runners/ConcurrentTestRunner.java
 
b/testutils/src/java/org/apache/hive/testutils/junit/runners/ConcurrentTestRunner.java
deleted file mode 100644
index ed474819acc..000
--- 
a/testutils/src/java/org/apache/hive/testutils/junit/runners/ConcurrentTestRunner.java
+++ /dev/null
@@ -1,62 +0,0 @@
-/*
- * Copyright (c) 2009-2012, toby weston & tempus-fugit committers
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *  http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*
- *
- */
-package org.apache.hive.testutils.junit.runners;
-
-import static java.util.concurrent.Executors.newFixedThreadPool;
-
-import java.util.concurrent.ThreadFactory;
-import java.util.concurrent.atomic.AtomicLong;
-
-import org.apache.hive.testutils.junit.runners.model.ConcurrentScheduler;
-import org.junit.runners.BlockJUnit4ClassRunner;
-import org.junit.runners.model.InitializationError;
-
-/**
- * Originally taken from 
com.google.code.tempusfugit.concurrency.ConcurrentTestRunner
- */
-public class ConcurrentTestRunner extends BlockJUnit4ClassRunner {
-
-  private int numThreads = 1;
-
-  public ConcurrentTestRunner(Class type) throws InitializationError {
-super(type);
-
-String numThreadsProp = System.getProperty("test.concurrency.num.threads");
-if (numThreadsProp != null) {
-  numThreads = Integer.parseInt(numThreadsProp);
-}
-
-setScheduler(new ConcurrentScheduler(newFixedThreadPool(numThreads, new 
ConcurrentTestRunnerThreadFactory(;
-
-System.err.println(">>> ConcurrenTestRunner initialize with " + numThreads 
+ " threads");
-System.err.flush();
-  }
-
-  private static class ConcurrentTestRunnerThreadFactory implements 
ThreadFactory {
-private final AtomicLong count = new AtomicLong();
-
-public Thread newThread(Runnable runnable) {
-  String threadName = ConcurrentTestRunner.class.getSimpleName() + 
"-Thread-" + count.getAndIncrement();
-  System.err.println(">>> ConcurrentTestRunner.newThread " + threadName);
-  System.err.flush();
-  return new Thread(runnable, threadName);
-}
-  }
-}
diff --git 
a/testutils/src/java/org/apache/hive/testutils/junit/runners/model/ConcurrentScheduler.java
 
b/testutils/src/java/org/apache/hive/testutils/junit/runners/model/ConcurrentScheduler.java
deleted file mode 100644
index fa07356848d..000
--- 
a/testutils/src/java/org/apache/hive/testutils/junit/runners/model/ConcurrentScheduler.java
+++ /dev/null
@@ -1,68 +0,0 @@
-/*
- * Copyright (c) 2009-2012, toby weston & tempus-fugit committers
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *  http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hive.testutils.juni

[hive] branch master updated: HIVE-26331: Use maven-surefire-plugin version consistently in standalone-metastore modules (Stamatis Zampetakis, reviewed by Zoltan Haindrich, Ayush Saxena)

2022-06-16 Thread zabetak

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 594a1455122 HIVE-26331: Use maven-surefire-plugin version consistently 
in standalone-metastore modules (Stamatis Zampetakis, reviewed by Zoltan 
Haindrich, Ayush Saxena)
594a1455122 is described below

commit 594a14551227530e60123a1f5d6860883876a4a3
Author: Stamatis Zampetakis 
AuthorDate: Wed Jun 15 13:59:24 2022 +0200

HIVE-26331: Use maven-surefire-plugin version consistently in 
standalone-metastore modules (Stamatis Zampetakis, reviewed by Zoltan 
Haindrich, Ayush Saxena)

Closes #3374
---
 standalone-metastore/metastore-server/pom.xml |  5 -
 standalone-metastore/metastore-tools/pom.xml  | 10 --
 standalone-metastore/pom.xml  |  1 +
 3 files changed, 1 insertion(+), 15 deletions(-)

diff --git a/standalone-metastore/metastore-server/pom.xml 
b/standalone-metastore/metastore-server/pom.xml
index 8ef5116694b..049caf5bf2e 100644
--- a/standalone-metastore/metastore-server/pom.xml
+++ b/standalone-metastore/metastore-server/pom.xml
@@ -438,11 +438,6 @@
 
 
   
-
-  org.apache.maven.plugins
-  maven-surefire-plugin
-  ${maven.surefire.plugin.version}
-
 
   org.apache.maven.plugins
   maven-antrun-plugin
diff --git a/standalone-metastore/metastore-tools/pom.xml 
b/standalone-metastore/metastore-tools/pom.xml
index b61686cb8f5..4d4a4978337 100644
--- a/standalone-metastore/metastore-tools/pom.xml
+++ b/standalone-metastore/metastore-tools/pom.xml
@@ -27,7 +27,6 @@
 tools-common
   
   
-3.0.0-M4
 2.8
 2.3.1
 3.1.0
@@ -139,15 +138,6 @@
 
   
   
-
-  
-
-  org.apache.maven.plugins
-  maven-surefire-plugin
-  ${maven.surefire.plugin.version}
-
-  
-
 
   
   
diff --git a/standalone-metastore/pom.xml b/standalone-metastore/pom.xml
index 394763327a4..b00282a0a94 100644
--- a/standalone-metastore/pom.xml
+++ b/standalone-metastore/pom.xml
@@ -443,6 +443,7 @@
 
   org.apache.maven.plugins
   maven-surefire-plugin
+  ${maven.surefire.plugin.version}
   
 false
   



[hive] branch master updated: HIVE-26309: Remove Log4jConfig junit extension in favor of LoggerContextSource (Stamatis Zampetakis, reviewed by Alessandro Solimando, Laszlo Bodor)

2022-06-15 Thread zabetak

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new da96760e83d HIVE-26309: Remove Log4jConfig junit extension in favor of 
LoggerContextSource (Stamatis Zampetakis, reviewed by Alessandro Solimando, 
Laszlo Bodor)
da96760e83d is described below

commit da96760e83d8a87e1dc5f9d30f7a2ea29307db9d
Author: Stamatis Zampetakis 
AuthorDate: Thu Jun 9 17:19:32 2022 +0200

HIVE-26309: Remove Log4jConfig junit extension in favor of 
LoggerContextSource (Stamatis Zampetakis, reviewed by Alessandro Solimando, 
Laszlo Bodor)

Closes #3356
---
 llap-server/pom.xml|  7 ++
 .../llap/daemon/impl/TestLlapDaemonLogging.java|  7 +-
 testutils/pom.xml  |  5 --
 .../testutils/junit/extensions/Log4jConfig.java| 37 --
 .../junit/extensions/Log4jConfigExtension.java | 83 --
 .../junit/extensions/TestLog4jConfigExtension.java | 65 -
 .../src/test/resources/test0-log4j2.properties | 32 -
 .../src/test/resources/test1-log4j2.properties | 32 -
 8 files changed, 10 insertions(+), 258 deletions(-)

diff --git a/llap-server/pom.xml b/llap-server/pom.xml
index 7d1f93cb46b..42251033679 100644
--- a/llap-server/pom.xml
+++ b/llap-server/pom.xml
@@ -334,6 +334,13 @@
 
   
 
+
+  org.apache.logging.log4j
+  log4j-core
+  ${log4j2.version}
+  tests
+  test
+
 
   junit
   junit
diff --git 
a/llap-server/src/test/org/apache/hadoop/hive/llap/daemon/impl/TestLlapDaemonLogging.java
 
b/llap-server/src/test/org/apache/hadoop/hive/llap/daemon/impl/TestLlapDaemonLogging.java
index 467961a2c43..9b03f0fb162 100644
--- 
a/llap-server/src/test/org/apache/hadoop/hive/llap/daemon/impl/TestLlapDaemonLogging.java
+++ 
b/llap-server/src/test/org/apache/hadoop/hive/llap/daemon/impl/TestLlapDaemonLogging.java
@@ -25,8 +25,9 @@ import org.apache.hadoop.security.Credentials;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hive.testutils.junit.extensions.DoNothingTCPServer;
 import org.apache.hive.testutils.junit.extensions.DoNothingTCPServerExtension;
-import org.apache.hive.testutils.junit.extensions.Log4jConfig;
+import org.apache.logging.log4j.junit.LoggerContextSource;
 import org.apache.tez.common.security.TokenCache;
+
 import org.junit.jupiter.api.Test;
 import org.junit.jupiter.api.extension.ExtendWith;
 
@@ -44,10 +45,10 @@ import static org.junit.jupiter.api.Assertions.assertTrue;
 /**
  * Tests for the log4j configuration of the LLAP daemons.
  */
+@LoggerContextSource("llap-daemon-routing-log4j2.properties")
 public class TestLlapDaemonLogging {
 
   @Test
-  @Log4jConfig("llap-daemon-routing-log4j2.properties")
   @ExtendWith(LlapDaemonExtension.class)
   @ExtendWith(DoNothingTCPServerExtension.class)
   void testQueryRoutingNoLeakFileDescriptors(LlapDaemon daemon, 
DoNothingTCPServer amMockServer)
@@ -74,7 +75,6 @@ public class TestLlapDaemonLogging {
   }
 
   @Test
-  @Log4jConfig("llap-daemon-routing-log4j2.properties")
   @ExtendWith(LlapDaemonExtension.class)
   @ExtendWith(DoNothingTCPServerExtension.class)
   void testQueryRoutingLogFileNameOnIncompleteQuery(LlapDaemon daemon, 
DoNothingTCPServer amMockServer)
@@ -97,7 +97,6 @@ public class TestLlapDaemonLogging {
   }
 
   @Test
-  @Log4jConfig("llap-daemon-routing-log4j2.properties")
   @ExtendWith(LlapDaemonExtension.class)
   @ExtendWith(DoNothingTCPServerExtension.class)
   void testQueryRoutingLogFileNameOnCompleteQuery(LlapDaemon daemon, 
DoNothingTCPServer amMockServer)
diff --git a/testutils/pom.xml b/testutils/pom.xml
index d2d4fe27061..431c4e32601 100644
--- a/testutils/pom.xml
+++ b/testutils/pom.xml
@@ -37,11 +37,6 @@
   junit
   junit
 
-
-  org.apache.logging.log4j
-  log4j-core
-  ${log4j2.version}
-
 
   org.junit.jupiter
   junit-jupiter-api
diff --git 
a/testutils/src/java/org/apache/hive/testutils/junit/extensions/Log4jConfig.java
 
b/testutils/src/java/org/apache/hive/testutils/junit/extensions/Log4jConfig.java
deleted file mode 100644
index dd96941ff42..000
--- 
a/testutils/src/java/org/apache/hive/testutils/junit/extensions/Log4jConfig.java
+++ /dev/null
@@ -1,37 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LIC

[hive] branch master updated: HIVE-26238: Decouple sort filter predicates optimization from digest normalization in CBO (Stamatis Zampetakis, reviewed by Zoltan Haindrich)

2022-06-10 Thread zabetak

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new f29cb2245c9 HIVE-26238: Decouple sort filter predicates optimization 
from digest normalization in CBO (Stamatis Zampetakis, reviewed by Zoltan 
Haindrich)
f29cb2245c9 is described below

commit f29cb2245c97102975ea0dd73783049eaa0947a0
Author: Stamatis Zampetakis 
AuthorDate: Tue May 17 15:20:06 2022 +0200

HIVE-26238: Decouple sort filter predicates optimization from digest 
normalization in CBO (Stamatis Zampetakis, reviewed by Zoltan Haindrich)

1. Decouple sort filter optimization from digest normalization by
refactoring HiveFilterSortPredicates into a (DFS) visitor. We cannot
use the planner or rules because they make use of the digest. Performing
this optimization with a visitor slightly simplifies the code: there is
no need for a registry, since we are not going to visit the same
node twice.

2. Move the optimization after all post-join transformations to avoid
having other optimizations cancel the benefit of the sort filter
predicates.

Closes #3299
---
 .../calcite/rules/HiveFilterSortPredicates.java| 47 +++---
 .../hadoop/hive/ql/parse/CalcitePlanner.java   |  8 ++--
 .../clientpositive/llap/external_jdbc_table2.q.out |  2 +-
 .../perf/tpcds30tb/tez/cbo_ext_query1.q.out|  4 +-
 .../perf/tpcds30tb/tez/cbo_query1.q.out|  2 +-
 .../perf/tpcds30tb/tez/cbo_query11.q.out   |  8 ++--
 .../perf/tpcds30tb/tez/cbo_query31.q.out   |  2 +-
 .../perf/tpcds30tb/tez/cbo_query33.q.out   |  4 +-
 .../perf/tpcds30tb/tez/cbo_query34.q.out   |  2 +-
 .../perf/tpcds30tb/tez/cbo_query38.q.out   |  4 +-
 .../perf/tpcds30tb/tez/cbo_query4.q.out| 12 +++---
 .../perf/tpcds30tb/tez/cbo_query54.q.out   |  2 +-
 .../perf/tpcds30tb/tez/cbo_query56.q.out   |  4 +-
 .../perf/tpcds30tb/tez/cbo_query6.q.out|  2 +-
 .../perf/tpcds30tb/tez/cbo_query60.q.out   |  4 +-
 .../perf/tpcds30tb/tez/cbo_query65.q.out   |  2 +-
 .../perf/tpcds30tb/tez/cbo_query73.q.out   |  2 +-
 .../perf/tpcds30tb/tez/cbo_query78.q.out   |  2 +-
 .../perf/tpcds30tb/tez/cbo_query81.q.out   |  2 +-
 .../perf/tpcds30tb/tez/query11.q.out   |  4 +-
 .../clientpositive/perf/tpcds30tb/tez/query4.q.out |  6 +--
 21 files changed, 52 insertions(+), 73 deletions(-)

diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveFilterSortPredicates.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveFilterSortPredicates.java
index 780481f2fd5..6ecf94b5f63 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveFilterSortPredicates.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveFilterSortPredicates.java
@@ -20,8 +20,7 @@ import java.util.Comparator;
 import java.util.List;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.stream.Collectors;
-import org.apache.calcite.plan.RelOptRule;
-import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.RelHomogeneousShuttle;
 import org.apache.calcite.rel.RelNode;
 import org.apache.calcite.rel.core.Filter;
 import org.apache.calcite.rel.metadata.RelMetadataQuery;
@@ -42,49 +41,34 @@ import org.slf4j.LoggerFactory;
 
 
 /**
- * Rule that sorts conditions in a filter predicate to accelerate query 
processing
+ * Sorts conditions in a filter predicate to accelerate query processing
  * based on selectivity and compute cost. Currently it is not applied 
recursively,
  * i.e., it is only applied to top predicates in the condition.
  */
-public class HiveFilterSortPredicates extends RelOptRule {
+public class HiveFilterSortPredicates extends RelHomogeneousShuttle {
 
   private static final Logger LOG = 
LoggerFactory.getLogger(HiveFilterSortPredicates.class);
 
   private final AtomicInteger noColsMissingStats;
 
   public HiveFilterSortPredicates(AtomicInteger noColsMissingStats) {
-super(
-operand(Filter.class,
-operand(RelNode.class, any(;
 this.noColsMissingStats = noColsMissingStats;
   }
 
   @Override
-  public boolean matches(RelOptRuleCall call) {
-final Filter filter = call.rel(0);
-
-HiveRulesRegistry registry = 
call.getPlanner().getContext().unwrap(HiveRulesRegistry.class);
-
-// If this operator has been visited already by the rule,
-// we do not need to apply the optimization
-if (registry != null && registry.getVisited(this).contains(filter)) {
-  return false;
+  public RelNode visit(RelNode other) {
+RelNode visitedNode = super.visit(other);
+if (visitedNode instanceof Filter) {
+  return rewriteFilter((Filter) vis
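The visitor refactoring in the commit above can be illustrated with a self-contained toy. This is not Calcite or Hive code: `Node`, `visit` and the `"SortedFilter"` rewrite are made-up stand-ins for `RelNode`, `RelHomogeneousShuttle.visit` and `rewriteFilter`. The point is the one the commit message makes: a depth-first rewrite touches each node exactly once, so no registry of already-visited nodes is needed.

```java
import java.util.ArrayList;
import java.util.List;

// Toy depth-first "shuttle": rewrite children first, then the current node.
public class ShuttleSketch {
    static class Node {
        final String name;
        final List<Node> inputs = new ArrayList<>();
        Node(String name) { this.name = name; }
    }

    // Each node is visited exactly once, so there is no need to track
    // visited nodes in a registry the way a repeatedly-fired rule must.
    static Node visit(Node node) {
        for (int i = 0; i < node.inputs.size(); i++) {
            node.inputs.set(i, visit(node.inputs.get(i)));
        }
        if (node.name.equals("Filter")) {
            Node rewritten = new Node("SortedFilter"); // stand-in for rewriteFilter(...)
            rewritten.inputs.addAll(node.inputs);
            return rewritten;
        }
        return node;
    }

    public static void main(String[] args) {
        Node scan = new Node("Scan");
        Node filter = new Node("Filter");
        filter.inputs.add(scan);
        Node project = new Node("Project");
        project.inputs.add(filter);
        // The Filter under the Project is rewritten in a single pass.
        System.out.println(visit(project).inputs.get(0).name);
    }
}
```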

[hive] 03/03: HIVE-26296: RuntimeException when executing EXPLAIN CBO JOINCOST on query with JDBC tables (Stamatis Zampetakis, reviewed by Alessandro Solimando, Krisztian Kasa)

2022-06-09 Thread zabetak

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit efae863fe010ed5c4b7de1874a336ed93b3c60b8
Author: Stamatis Zampetakis 
AuthorDate: Tue Jun 7 17:02:12 2022 +0200

HIVE-26296: RuntimeException when executing EXPLAIN CBO JOINCOST on query 
with JDBC tables (Stamatis Zampetakis, reviewed by Alessandro Solimando, 
Krisztian Kasa)

Compute selectivity for all types of joins in the same way. There is no
particular reason to throw an exception when the Join operator is not
an instance of HiveJoin.

Closes #3349
---
 data/scripts/q_test_author_book_tables.sql | 19 +
 .../calcite/stats/HiveRelMdSelectivity.java|  5 +-
 .../queries/clientpositive/cbo_jdbc_joincost.q | 34 
 .../clientpositive/llap/cbo_jdbc_joincost.q.out| 93 ++
 4 files changed, 147 insertions(+), 4 deletions(-)

diff --git a/data/scripts/q_test_author_book_tables.sql 
b/data/scripts/q_test_author_book_tables.sql
new file mode 100644
index 000..9b5ff99266b
--- /dev/null
+++ b/data/scripts/q_test_author_book_tables.sql
@@ -0,0 +1,19 @@
+create table author
+(
+id int,
+fname   varchar(20),
+lname   varchar(20)
+);
+insert into author values (1, 'Victor', 'Hugo');
+insert into author values (2, 'Alexandre', 'Dumas');
+
+create table book
+(
+id int,
+title  varchar(100),
+author int
+);
+insert into book
+values (1, 'Les Miserables', 1);
+insert into book
+values (2, 'The Count Of Monte Cristo', 2);
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/HiveRelMdSelectivity.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/HiveRelMdSelectivity.java
index 2c36d8f14e6..19bd13de9a1 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/HiveRelMdSelectivity.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/HiveRelMdSelectivity.java
@@ -149,11 +149,8 @@ public class HiveRelMdSelectivity extends RelMdSelectivity 
{
   if (j.isSemiJoin() || (j.getJoinType().equals(JoinRelType.ANTI))) {
 ndvEstimate = Math.min(mq.getRowCount(j.getLeft()),
 ndvEstimate);
-  } else if (j instanceof HiveJoin) {
-ndvEstimate = Math.min(mq.getRowCount(j.getLeft())
-* mq.getRowCount(j.getRight()), ndvEstimate);
   } else {
-throw new RuntimeException("Unexpected Join type: " + 
j.getClass().getName());
+ndvEstimate = Math.min(mq.getRowCount(j.getLeft()) * 
mq.getRowCount(j.getRight()), ndvEstimate);
   }
 }
 
diff --git a/ql/src/test/queries/clientpositive/cbo_jdbc_joincost.q 
b/ql/src/test/queries/clientpositive/cbo_jdbc_joincost.q
new file mode 100644
index 000..7255f3b87b0
--- /dev/null
+++ b/ql/src/test/queries/clientpositive/cbo_jdbc_joincost.q
@@ -0,0 +1,34 @@
+--!qt:database:mysql:q_test_author_book_tables.sql
+CREATE EXTERNAL TABLE author
+(
+id int,
+fname varchar(20),
+lname varchar(20)
+)
+STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
+TBLPROPERTIES (
+"hive.sql.database.type" = "MYSQL",
+"hive.sql.jdbc.driver" = "com.mysql.jdbc.Driver",
+"hive.sql.jdbc.url" = "jdbc:mysql://localhost:3306/qtestDB",
+"hive.sql.dbcp.username" = "root",
+"hive.sql.dbcp.password" = "qtestpassword",
+"hive.sql.table" = "author"
+);
+
+CREATE EXTERNAL TABLE book
+(
+id int,
+title varchar(100),
+author int
+)
+STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
+TBLPROPERTIES (
+"hive.sql.database.type" = "MYSQL",
+"hive.sql.jdbc.driver" = "com.mysql.jdbc.Driver",
+"hive.sql.jdbc.url" = "jdbc:mysql://localhost:3306/qtestDB",
+"hive.sql.dbcp.username" = "root",
+"hive.sql.dbcp.password" = "qtestpassword",
+"hive.sql.table" = "book"
+);
+
+EXPLAIN CBO JOINCOST SELECT a.lname, b.title FROM author a JOIN book b ON 
a.id=b.author;
diff --git a/ql/src/test/results/clientpositive/llap/cbo_jdbc_joincost.q.out 
b/ql/src/test/results/clientpositive/llap/cbo_jdbc_joincost.q.out
new file mode 100644
index 000..0dc3effcef3
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/cbo_jdbc_joincost.q.out
@@ -0,0 +1,93 @@
+PREHOOK: query: CREATE EXTERNAL TABLE author
+(
+id int,
+fname varchar(20),
+lname varchar(20)
+)
+STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
+TBLPROPERTIES (
+"hive.sql.database.type" = "MYSQL",
+"hive.sql.jdbc.driver" = "com.mysql.jdbc.Driver",
+"hive.sql.jdbc.url" = "jdbc:mysql://localhost:3306/qtestDB",
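The selectivity fix in HIVE-26296 boils down to how the join NDV estimate is capped. A minimal sketch of the two branches that remain after the change (plain Java for illustration, not the actual HiveRelMdSelectivity code): semi/anti joins cap the estimate by the left row count, and every other join — whether or not it is a HiveJoin — caps it by the product of the two sides, instead of throwing.

```java
// Toy version of the capping logic after the fix: no join type throws.
public class JoinNdvSketch {
    static double cappedNdv(double leftRows, double rightRows,
                            double ndvEstimate, boolean semiOrAnti) {
        if (semiOrAnti) {
            // Semi/anti join: at most one output row per left row.
            return Math.min(leftRows, ndvEstimate);
        }
        // Any other join: output cannot exceed the cross-product size.
        return Math.min(leftRows * rightRows, ndvEstimate);
    }

    public static void main(String[] args) {
        System.out.println(cappedNdv(2, 2, 10, false)); // capped by 2*2 = 4
        System.out.println(cappedNdv(2, 2, 10, true));  // capped by left = 2
    }
}
```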

[hive] branch master updated (c55318eb586 -> efae863fe01)

2022-06-09 Thread zabetak

zabetak pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


from c55318eb586 HIVE-26293: Migrate remaining exclusive DDL operations to 
EXCL_WRITE lock & bug fixes (Denys Kuzmenko, reviewed by Peter Vary)
 new d781701d268 HIVE-26278: Add unit tests for Hive#getPartitionsByNames 
using batching (Stamatis Zampetakis, reviewed by Zoltan Haindrich, Krisztian 
Kasa, Ayush Saxena)
 new 798d25c6126 HIVE-26290: Remove useless calls to 
DateTimeFormatter#withZone without assignment (Stamatis Zampetakis, reviewed by 
Ayush Saxena)
 new efae863fe01 HIVE-26296: RuntimeException when executing EXPLAIN CBO 
JOINCOST on query with JDBC tables (Stamatis Zampetakis, reviewed by Alessandro 
Solimando, Krisztian Kasa)

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 ...hor_table.sql => q_test_author_book_tables.sql} | 11 +++
 .../calcite/stats/HiveRelMdSelectivity.java|  5 +-
 .../ql/udf/generic/GenericUDFFromUnixTime.java |  2 -
 ...TestHiveMetaStoreClientApiArgumentsChecker.java | 25 ++
 .../queries/clientpositive/cbo_jdbc_joincost.q | 34 
 .../clientpositive/llap/cbo_jdbc_joincost.q.out| 93 ++
 6 files changed, 164 insertions(+), 6 deletions(-)
 copy data/scripts/{q_test_author_table.sql => q_test_author_book_tables.sql} 
(50%)
 create mode 100644 ql/src/test/queries/clientpositive/cbo_jdbc_joincost.q
 create mode 100644 
ql/src/test/results/clientpositive/llap/cbo_jdbc_joincost.q.out



[hive] 02/03: HIVE-26290: Remove useless calls to DateTimeFormatter#withZone without assignment (Stamatis Zampetakis, reviewed by Ayush Saxena)

2022-06-09 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 798d25c61262d872d756b5c73d38172fe1293207
Author: Stamatis Zampetakis 
AuthorDate: Fri Jun 3 18:46:47 2022 +0200

HIVE-26290: Remove useless calls to DateTimeFormatter#withZone without 
assignment (Stamatis Zampetakis, reviewed by Ayush Saxena)

Closes #3342
---
 .../org/apache/hadoop/hive/ql/udf/generic/GenericUDFFromUnixTime.java   | 2 --
 1 file changed, 2 deletions(-)

diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFFromUnixTime.java 
b/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFFromUnixTime.java
index fb634bc7c97..21081cf7c11 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFFromUnixTime.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFFromUnixTime.java
@@ -88,7 +88,6 @@ public class GenericUDFFromUnixTime extends GenericUDF {
 if (timeZone == null) {
   timeZone = SessionState.get() == null ? new 
HiveConf().getLocalTimeZone() : SessionState.get().getConf()
   .getLocalTimeZone();
-  FORMATTER.withZone(timeZone);
 }
 
 return PrimitiveObjectInspectorFactory.writableStringObjectInspector;
@@ -99,7 +98,6 @@ public class GenericUDFFromUnixTime extends GenericUDF {
 if (context != null) {
   String timeZoneStr = HiveConf.getVar(context.getJobConf(), 
HiveConf.ConfVars.HIVE_LOCAL_TIME_ZONE);
   timeZone = TimestampTZUtil.parseTimeZone(timeZoneStr);
-  FORMATTER.withZone(timeZone);
 }
   }
 


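Editor's note: the two calls removed above were dead code because `java.time.format.DateTimeFormatter` is immutable; `withZone` returns a new formatter instead of mutating the receiver. A minimal sketch of this behavior (class and method names below are illustrative, not Hive code):

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class WithZoneDemo {
    /** withZone returns a NEW formatter; the receiver keeps its (null) zone override. */
    public static boolean receiverUnchangedAfterWithZone() {
        DateTimeFormatter base = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        base.withZone(ZoneId.of("Europe/Athens")); // return value discarded: a no-op
        return base.getZone() == null;
    }

    public static void main(String[] args) {
        // The effective pattern is to keep (assign) the returned instance:
        DateTimeFormatter zoned = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")
                .withZone(ZoneId.of("UTC"));
        System.out.println(zoned.format(Instant.EPOCH)); // prints 1970-01-01 00:00:00
        System.out.println(receiverUnchangedAfterWithZone()); // prints true
    }
}
```

This is why dropping the unassigned `FORMATTER.withZone(timeZone)` statements cannot change behavior.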

[hive] 01/03: HIVE-26278: Add unit tests for Hive#getPartitionsByNames using batching (Stamatis Zampetakis, reviewed by Zoltan Haindrich, Krisztian Kasa, Ayush Saxena)

2022-06-09 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit d781701d26859b78161514ac237119243f9bd1e3
Author: Stamatis Zampetakis 
AuthorDate: Tue Feb 8 16:56:56 2022 +0100

HIVE-26278: Add unit tests for Hive#getPartitionsByNames using batching 
(Stamatis Zampetakis, reviewed by Zoltan Haindrich, Krisztian Kasa, Ayush 
Saxena)

Ensure that ValidWriteIdList is set when batching is involved in
getPartitionByNames.

Closes #3335
---
 ...TestHiveMetaStoreClientApiArgumentsChecker.java | 25 ++
 1 file changed, 25 insertions(+)

diff --git 
a/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreClientApiArgumentsChecker.java
 
b/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreClientApiArgumentsChecker.java
index 6aefc44c563..175b47c47d8 100644
--- 
a/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreClientApiArgumentsChecker.java
+++ 
b/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreClientApiArgumentsChecker.java
@@ -38,10 +38,13 @@ import org.apache.hadoop.hive.ql.session.SessionState;
 import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPEqualOrGreaterThan;
 import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory;
 import org.apache.thrift.TException;
+
+import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
 
 import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
@@ -80,6 +83,8 @@ public class TestHiveMetaStoreClientApiArgumentsChecker {
 hive.getConf().set(ValidTxnList.VALID_TXNS_KEY, "1:");
 hive.getConf().set(ValidWriteIdList.VALID_WRITEIDS_KEY, TABLE_NAME + 
":1:");
 hive.getConf().setVar(HiveConf.ConfVars.HIVE_TXN_MANAGER, 
"org.apache.hadoop.hive.ql.lockmgr.TestTxnManager");
+// Pick a small number for the batch size to easily test code with 
multiple batches.
+hive.getConf().setIntVar(HiveConf.ConfVars.METASTORE_BATCH_RETRIEVE_MAX, 
2);
 SessionState.start(hive.getConf());
 SessionState.get().initTxnMgr(hive.getConf());
 Context ctx = new Context(hive.getConf());
@@ -140,6 +145,26 @@ public class TestHiveMetaStoreClientApiArgumentsChecker {
 hive.getPartitionsByNames(t, new ArrayList<>(), true);
   }
 
+  @Test
+  public void testGetPartitionsByNamesWithSingleBatch() throws HiveException {
+hive.getPartitionsByNames(t, Arrays.asList("Greece", "Italy"), true);
+  }
+
+  @Test
+  public void testGetPartitionsByNamesWithMultipleEqualSizeBatches()
+  throws HiveException {
+List<String> names = Arrays.asList("Greece", "Italy", "France", "Spain");
+hive.getPartitionsByNames(t, names, true);
+  }
+
+  @Test
+  public void testGetPartitionsByNamesWithMultipleUnequalSizeBatches()
+  throws HiveException {
+List<String> names =
+Arrays.asList("Greece", "Italy", "France", "Spain", "Hungary");
+hive.getPartitionsByNames(t, names, true);
+  }
+
   @Test
   public void testGetPartitionsByExpr() throws HiveException, TException {
 List<Partition> partitions = new ArrayList<>();


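Editor's note: with `METASTORE_BATCH_RETRIEVE_MAX` set to 2, the three tests above cover a single batch (2 names), multiple equal-size batches (4 names), and unequal batches (5 names, split 2+2+1). A self-contained sketch of that batching arithmetic (the `splitIntoBatches` helper is illustrative, not the Hive API):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchDemo {
    /** Splits a list into consecutive batches of at most batchSize elements. */
    public static <T> List<List<T>> splitIntoBatches(List<T> names, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < names.size(); i += batchSize) {
            batches.add(names.subList(i, Math.min(i + batchSize, names.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> names = List.of("Greece", "Italy", "France", "Spain", "Hungary");
        // Five names with batch size 2 yield batches of sizes 2, 2 and 1,
        // matching the "multiple unequal size batches" test case above.
        System.out.println(splitIntoBatches(names, 2));
        // prints [[Greece, Italy], [France, Spain], [Hungary]]
    }
}
```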

[calcite] branch main updated: [CALCITE-4907] JDBC adapter cannot push down join ON TRUE (cartesian product)

2022-06-07 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/main by this push:
 new a9aea934d [CALCITE-4907] JDBC adapter cannot push down join ON TRUE 
(cartesian product)
a9aea934d is described below

commit a9aea934dc29395ca8ee81df5dcf0d50ac823023
Author: Francesco Gini 
AuthorDate: Sun Nov 28 18:37:05 2021 +

[CALCITE-4907] JDBC adapter cannot push down join ON TRUE (cartesian 
product)

Close apache/calcite#2620
---
 .../org/apache/calcite/adapter/jdbc/JdbcRules.java |  3 ++
 .../org/apache/calcite/test/JdbcAdapterTest.java   | 43 --
 2 files changed, 34 insertions(+), 12 deletions(-)

diff --git a/core/src/main/java/org/apache/calcite/adapter/jdbc/JdbcRules.java 
b/core/src/main/java/org/apache/calcite/adapter/jdbc/JdbcRules.java
index 37fb747a8..6c9b545e6 100644
--- a/core/src/main/java/org/apache/calcite/adapter/jdbc/JdbcRules.java
+++ b/core/src/main/java/org/apache/calcite/adapter/jdbc/JdbcRules.java
@@ -331,6 +331,9 @@ public class JdbcRules {
 private static boolean canJoinOnCondition(RexNode node) {
   final List<RexNode> operands;
   switch (node.getKind()) {
+  case LITERAL:
+// literal on a join condition would be TRUE or FALSE
+return true;
   case AND:
   case OR:
 operands = ((RexCall) node).getOperands();
diff --git a/core/src/test/java/org/apache/calcite/test/JdbcAdapterTest.java 
b/core/src/test/java/org/apache/calcite/test/JdbcAdapterTest.java
index d4bd9dc9c..2b0a5aaa2 100644
--- a/core/src/test/java/org/apache/calcite/test/JdbcAdapterTest.java
+++ b/core/src/test/java/org/apache/calcite/test/JdbcAdapterTest.java
@@ -50,23 +50,25 @@ class JdbcAdapterTest {
* same time. */
   private static final ReentrantLock LOCK = new ReentrantLock();
 
-  /** VALUES is not pushed down, currently. */
+  /** VALUES is pushed down. */
   @Test void testValuesPlan() {
 final String sql = "select * from \"days\", (values 1, 2) as t(c)";
-final String explain = "PLAN="
-+ "EnumerableNestedLoopJoin(condition=[true], joinType=[inner])\n"
-+ "  JdbcToEnumerableConverter\n"
+final String explain = "PLAN=JdbcToEnumerableConverter\n"
++ "  JdbcJoin(condition=[true], joinType=[inner])\n"
 + "JdbcTableScan(table=[[foodmart, days]])\n"
-+ "  EnumerableValues(tuples=[[{ 1 }, { 2 }]])";
++ "JdbcValues(tuples=[[{ 1 }, { 2 }]])";
 final String jdbcSql = "SELECT *\n"
-+ "FROM \"foodmart\".\"days\"";
++ "FROM \"foodmart\".\"days\",\n"
++ "(VALUES (1),\n"
++ "(2)) AS \"t\" (\"C\")";
 CalciteAssert.model(FoodmartSchema.FOODMART_MODEL)
 .query(sql)
 .explainContains(explain)
 .runs()
 .enable(CalciteAssert.DB == CalciteAssert.DatabaseInstance.HSQLDB
 || CalciteAssert.DB == DatabaseInstance.POSTGRESQL)
-.planHasSql(jdbcSql);
+.planHasSql(jdbcSql)
+.returnsCount(14);
   }
 
   @Test void testUnionPlan() {
@@ -360,17 +362,14 @@ class JdbcAdapterTest {
 + "FROM \"SCOTT\".\"DEPT\") AS \"t0\" ON \"t\".\"DEPTNO\" = 
\"t0\".\"DEPTNO\"");
   }
 
-  // JdbcJoin not used for this
   @Test void testCartesianJoinWithoutKeyPlan() {
 CalciteAssert.model(JdbcTest.SCOTT_MODEL)
 .query("select empno, ename, d.deptno, dname\n"
 + "from scott.emp e,scott.dept d")
-.explainContains("PLAN=EnumerableNestedLoopJoin(condition=[true], "
-+ "joinType=[inner])\n"
-+ "  JdbcToEnumerableConverter\n"
+.explainContains("PLAN=JdbcToEnumerableConverter\n"
++ "  JdbcJoin(condition=[true], joinType=[inner])\n"
 + "JdbcProject(EMPNO=[$0], ENAME=[$1])\n"
 + "  JdbcTableScan(table=[[SCOTT, EMP]])\n"
-+ "  JdbcToEnumerableConverter\n"
 + "JdbcProject(DEPTNO=[$0], DNAME=[$1])\n"
 + "  JdbcTableScan(table=[[SCOTT, DEPT]])")
 .runs()
@@ -402,6 +401,26 @@ class JdbcAdapterTest {
 + "FROM \"SCOTT\".\"DEPT\") AS \"t1\" ON \"t0\".\"DEPTNO\" = 
\"t1\".\"DEPTNO\"");
   }
 
+  @Test void testJoinConditionAlwaysTruePushDown() {
+CalciteAssert.model(JdbcTest.SCOTT_MODEL)
+.query("select empno, ename, d.deptno, dname\n"
++ "

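Editor's note: the fix above extends `canJoinOnCondition` so that a bare literal condition (`ON TRUE`, i.e. a cartesian product) is accepted for pushdown alongside `AND`/`OR`/comparison nodes. A simplified, self-contained sketch of the same recursion over a toy expression tree (`Node` and `Kind` are illustrative stand-ins for Calcite's `RexNode`/`SqlKind`, and the `EQUALS` case is simplified):

```java
import java.util.List;

public class JoinConditionDemo {
    enum Kind { LITERAL, AND, OR, EQUALS, OTHER }

    record Node(Kind kind, List<Node> operands) {
        static Node of(Kind kind, Node... ops) { return new Node(kind, List.of(ops)); }
    }

    /** Mirrors the shape of the rule after CALCITE-4907: a literal
     *  condition (TRUE/FALSE) is now considered pushable to JDBC. */
    static boolean canJoinOnCondition(Node node) {
        switch (node.kind()) {
            case LITERAL:
                return true; // e.g. JOIN ... ON TRUE (cartesian product)
            case AND:
            case OR:
                // Conjunctions/disjunctions are pushable iff all operands are.
                for (Node op : node.operands()) {
                    if (!canJoinOnCondition(op)) {
                        return false;
                    }
                }
                return true;
            case EQUALS:
                return true; // simplification: the real rule also inspects operands
            default:
                return false;
        }
    }

    public static void main(String[] args) {
        Node onTrue = Node.of(Kind.LITERAL);
        Node eq = Node.of(Kind.EQUALS);
        System.out.println(canJoinOnCondition(Node.of(Kind.AND, onTrue, eq))); // prints true
        System.out.println(canJoinOnCondition(Node.of(Kind.OTHER))); // prints false
    }
}
```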
[hive] branch master updated: HIVE-26289: Remove useless try catch in DataWritableReadSupport#getWriterDateProleptic (Stamatis Zampetakis, reviewed by Ayush Saxena)

2022-06-06 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new cdb1052e24 HIVE-26289: Remove useless try catch in 
DataWritableReadSupport#getWriterDateProleptic (Stamatis Zampetakis, reviewed 
by Ayush Saxena)
cdb1052e24 is described below

commit cdb1052e24ca493c6486fef3dd8956dde61be834
Author: Stamatis Zampetakis 
AuthorDate: Fri Jun 3 18:14:22 2022 +0200

HIVE-26289: Remove useless try catch in 
DataWritableReadSupport#getWriterDateProleptic (Stamatis Zampetakis, reviewed 
by Ayush Saxena)

Closes #3341
---
 .../hive/ql/io/parquet/read/DataWritableReadSupport.java   | 10 +-
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
index ecdd155a31..cd093dd6a5 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
@@ -278,15 +278,7 @@ public class DataWritableReadSupport extends 
ReadSupport<ArrayWritable> {
   return null;
 }
 String value = 
metadata.get(DataWritableWriteSupport.WRITER_DATE_PROLEPTIC);
-try {
-  if (value != null) {
-return Boolean.valueOf(value);
-  }
-} catch (DateTimeException e) {
-  throw new RuntimeException("Can't parse writer proleptic property stored 
in file metadata", e);
-}
-
-return null;
+return value == null ? null : Boolean.valueOf(value);
   }
   
   /**


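Editor's note: the removed try/catch was dead code because `Boolean.valueOf(String)` never throws. It yields `TRUE` only for the case-insensitive string `"true"` and `FALSE` for everything else (including `null`), so a `DateTimeException` could never originate from that statement:

```java
public class BooleanValueOfDemo {
    public static void main(String[] args) {
        // Boolean.valueOf(String) cannot throw, which makes the
        // catch (DateTimeException e) block in the old code unreachable.
        System.out.println(Boolean.valueOf("true"));        // prints true
        System.out.println(Boolean.valueOf("TrUe"));        // prints true
        System.out.println(Boolean.valueOf("garbage"));     // prints false
        System.out.println(Boolean.valueOf((String) null)); // prints false
    }
}
```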

[calcite] branch main updated: [CALCITE-4913] Deduplicate correlated variables in SELECT clause

2022-06-06 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/main by this push:
 new f7aa27ec2 [CALCITE-4913] Deduplicate correlated variables in SELECT 
clause
f7aa27ec2 is described below

commit f7aa27ec22e843c5e27022f99237175d159699bb
Author: korlov42 
AuthorDate: Fri Jun 3 11:43:05 2022 +0300

[CALCITE-4913] Deduplicate correlated variables in SELECT clause

Partial revert of deduplication in LogicalTableFunctionScan
(CALCITE-4673) since the deduplication in Project is more general and
covers the previous use-case as well.

Close apache/calcite#2825
---
 .../apache/calcite/sql2rel/SqlToRelConverter.java  | 23 +--
 .../apache/calcite/test/SqlToRelConverterTest.java | 15 
 .../apache/calcite/test/SqlToRelConverterTest.xml  | 44 +-
 3 files changed, 69 insertions(+), 13 deletions(-)

diff --git 
a/core/src/main/java/org/apache/calcite/sql2rel/SqlToRelConverter.java 
b/core/src/main/java/org/apache/calcite/sql2rel/SqlToRelConverter.java
index 91d223c62..adc19e5d2 100644
--- a/core/src/main/java/org/apache/calcite/sql2rel/SqlToRelConverter.java
+++ b/core/src/main/java/org/apache/calcite/sql2rel/SqlToRelConverter.java
@@ -2696,16 +2696,6 @@ public class SqlToRelConverter {
 validator().getValidatedNodeType(call),
 columnMappings);
 
-final SqlValidatorScope selectScope =
-((DelegatingScope) bb.scope()).getParent();
-final Blackboard seekBb = createBlackboard(selectScope, null, false);
-
-final CorrelationUse p = getCorrelationUse(seekBb, callRel);
-if (p != null) {
-  assert p.r instanceof LogicalTableFunctionScan;
-  callRel = (LogicalTableFunctionScan) p.r;
-}
-
 bb.setRoot(callRel, true);
 afterTableFunction(bb, call, callRel);
   }
@@ -4400,7 +4390,18 @@ public class SqlToRelConverter {
 
 relBuilder.push(bb.root())
 .projectNamed(exprs, fieldNames, true);
-bb.setRoot(relBuilder.build(), false);
+
+RelNode project = relBuilder.build();
+
+final RelNode r;
+final CorrelationUse p = getCorrelationUse(bb, project);
+if (p != null) {
+  r = p.r;
+} else {
+  r = project;
+}
+
+bb.setRoot(r, false);
 
 assert bb.columnMonotonicities.isEmpty();
 bb.columnMonotonicities.addAll(columnMonotonicityList);
diff --git 
a/core/src/test/java/org/apache/calcite/test/SqlToRelConverterTest.java 
b/core/src/test/java/org/apache/calcite/test/SqlToRelConverterTest.java
index 41b38ec74..f4954e6da 100644
--- a/core/src/test/java/org/apache/calcite/test/SqlToRelConverterTest.java
+++ b/core/src/test/java/org/apache/calcite/test/SqlToRelConverterTest.java
@@ -1288,6 +1288,21 @@ class SqlToRelConverterTest extends SqlToRelTestBase {
 + "from emp e");
   }
 
+  @Test void testCorrelatedScalarSubQueryInSelectList() {
+Consumer<String> fn = sql -> {
+  sql(sql).withExpand(true).withDecorrelate(false)
+  .convertsTo("${planExpanded}");
+  sql(sql).withExpand(false).withDecorrelate(false)
+  .convertsTo("${planNotExpanded}");
+};
+fn.accept("select deptno,\n"
++ "  (select min(1) from emp where empno > d.deptno) as i0,\n"
++ "  (select min(0) from emp where deptno = d.deptno "
++ "and ename = 'SMITH'"
++ "and d.deptno > 0) as i1\n"
++ "from dept as d");
+  }
+
   @Test void testCorrelationLateralSubQuery() {
 String sql = "SELECT deptno, ename\n"
 + "FROM\n"
diff --git 
a/core/src/test/resources/org/apache/calcite/test/SqlToRelConverterTest.xml 
b/core/src/test/resources/org/apache/calcite/test/SqlToRelConverterTest.xml
index 7d883fbaa..82465eaad 100644
--- a/core/src/test/resources/org/apache/calcite/test/SqlToRelConverterTest.xml
+++ b/core/src/test/resources/org/apache/calcite/test/SqlToRelConverterTest.xml
@@ -664,6 +664,46 @@ LogicalProject(EMPNO=[$0], JOB=[$2])
 ]]>
 
   
+  (TestCase/Resource XML for testCorrelatedScalarSubQueryInSelectList; the CDATA plan content was stripped by the mailing-list renderer)
   
 
   

[hive] 02/02: HIVE-26270: Wrong timestamps when reading Hive 3.1.x Parquet files with vectorized reader (Stamatis Zampetakis, reviewed by Peter Vary)

2022-06-03 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit ebaba1436831831e5c60227c476e5fc362012fe4
Author: Stamatis Zampetakis 
AuthorDate: Mon May 30 12:18:06 2022 +0200

HIVE-26270: Wrong timestamps when reading Hive 3.1.x Parquet files with 
vectorized reader (Stamatis Zampetakis, reviewed by Peter Vary)

Closes #3338
---
 data/files/employee_hive_3_1_3_us_pacific.parquet  | Bin 0 -> 446 bytes
 .../ql/io/parquet/ParquetRecordReaderBase.java |   9 +---
 .../io/parquet/read/DataWritableReadSupport.java   |  39 --
 ...rquet_timestamp_int96_compatibility_hive3_1_3.q |  24 +
 ...t_timestamp_int96_compatibility_hive3_1_3.q.out |  60 +
 5 files changed, 110 insertions(+), 22 deletions(-)

diff --git a/data/files/employee_hive_3_1_3_us_pacific.parquet 
b/data/files/employee_hive_3_1_3_us_pacific.parquet
new file mode 100644
index 00..2125bc688d
Binary files /dev/null and b/data/files/employee_hive_3_1_3_us_pacific.parquet 
differ
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ParquetRecordReaderBase.java 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ParquetRecordReaderBase.java
index 5235edc114..4cc32ae480 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ParquetRecordReaderBase.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ParquetRecordReaderBase.java
@@ -16,11 +16,9 @@ package org.apache.hadoop.hive.ql.io.parquet;
 import com.google.common.base.Strings;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hive.conf.HiveConf;
-import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
 import org.apache.hadoop.hive.ql.io.IOConstants;
 import org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport;
 import 
org.apache.hadoop.hive.ql.io.parquet.read.ParquetFilterPredicateConverter;
-import org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport;
 import org.apache.hadoop.hive.ql.io.sarg.ConvertAstToSearchArg;
 import org.apache.hadoop.hive.ql.io.sarg.SearchArgument;
 import org.apache.hadoop.hive.serde2.SerDeStats;
@@ -143,11 +141,8 @@ public class ParquetRecordReaderBase {
 skipProlepticConversion = HiveConf.getBoolVar(
 conf, 
HiveConf.ConfVars.HIVE_PARQUET_DATE_PROLEPTIC_GREGORIAN_DEFAULT);
   }
-  legacyConversionEnabled = HiveConf.getBoolVar(conf, 
ConfVars.HIVE_PARQUET_TIMESTAMP_LEGACY_CONVERSION_ENABLED);
-  if 
(fileMetaData.getKeyValueMetaData().containsKey(DataWritableWriteSupport.WRITER_ZONE_CONVERSION_LEGACY))
 {
-legacyConversionEnabled = Boolean.parseBoolean(
-
fileMetaData.getKeyValueMetaData().get(DataWritableWriteSupport.WRITER_ZONE_CONVERSION_LEGACY));
-  }
+  legacyConversionEnabled =
+  
DataWritableReadSupport.getZoneConversionLegacy(fileMetaData.getKeyValueMetaData(),
 conf);
 
   split = new ParquetInputSplit(finalPath,
 splitStart,
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
index 6aa6d2e412..ecdd155a31 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
@@ -289,6 +289,28 @@ public class DataWritableReadSupport extends 
ReadSupport<ArrayWritable> {
 return null;
   }
   
+  /**
+   * Returns whether legacy zone conversion should be used for transforming 
timestamps based on file metadata and
+   * configuration.
+   *
+   * @see ConfVars#HIVE_PARQUET_TIMESTAMP_LEGACY_CONVERSION_ENABLED
+   */
+  public static boolean getZoneConversionLegacy(Map<String, String> metadata, 
Configuration conf) {
+assert conf != null : "Configuration must not be null";
+if (metadata != null) {
+  if 
(metadata.containsKey(DataWritableWriteSupport.WRITER_ZONE_CONVERSION_LEGACY)) {
+return 
Boolean.parseBoolean(metadata.get(DataWritableWriteSupport.WRITER_ZONE_CONVERSION_LEGACY));
+  }
+  // There are no explicit meta about the legacy conversion
+  if (metadata.containsKey(DataWritableWriteSupport.WRITER_TIMEZONE)) {
+// There is meta about the timezone thus we can infer that when the 
file was written, the new APIs were used.
+return false;
+  }
+}
+// There is no (relevant) metadata in the file, use the configuration
+return HiveConf.getBoolVar(conf, 
ConfVars.HIVE_PARQUET_TIMESTAMP_LEGACY_CONVERSION_ENABLED);
+  }
+
   /**
* Return the columns which contains required nested attribute level
* E.g., given struct a:<x:int, y:int> while 'x' is required and 'y' is not, 
the method will return
@@ -509,21 +531,8 @@ public class DataWritableReadSupport extends 
ReadSupport<ArrayWritable> {
 }
 
 if 
(!metadata.containsKey(DataWritableWriteSupport.WRITER_ZONE_CONV

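Editor's note: the new helper shown above resolves the legacy-conversion flag in three steps: an explicit metadata key wins; otherwise the presence of writer-timezone metadata implies the file was written with the new APIs (so legacy conversion is off); otherwise the configuration default applies. A self-contained sketch of that decision logic (the string constants and the `confDefault` parameter below are illustrative stand-ins for the Hive metadata keys and the `HiveConf` lookup):

```java
import java.util.Map;

public class ZoneConversionDemo {
    static final String WRITER_ZONE_CONVERSION_LEGACY = "writer.zone.conversion.legacy";
    static final String WRITER_TIMEZONE = "writer.time.zone";

    /** Precedence: explicit metadata > inferred from timezone metadata > configuration. */
    static boolean getZoneConversionLegacy(Map<String, String> metadata, boolean confDefault) {
        if (metadata != null) {
            if (metadata.containsKey(WRITER_ZONE_CONVERSION_LEGACY)) {
                return Boolean.parseBoolean(metadata.get(WRITER_ZONE_CONVERSION_LEGACY));
            }
            if (metadata.containsKey(WRITER_TIMEZONE)) {
                // Timezone metadata implies the writer used the new APIs.
                return false;
            }
        }
        return confDefault; // no relevant metadata: honor the configuration
    }

    public static void main(String[] args) {
        System.out.println(getZoneConversionLegacy(
                Map.of(WRITER_ZONE_CONVERSION_LEGACY, "true"), false)); // prints true
        System.out.println(getZoneConversionLegacy(
                Map.of(WRITER_TIMEZONE, "UTC"), true)); // prints false
        System.out.println(getZoneConversionLegacy(null, true)); // prints true
    }
}
```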
[hive] branch master updated (d237a30728 -> ebaba14368)

2022-06-03 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


from d237a30728 HIVE-26228: Implement Iceberg table rollback feature 
(#3287) (Laszlo Pinter, reviewed by Adam Szita and Peter Vary)
 new 1537084a9a HIVE-26279: Drop unused requests from 
TestHiveMetaStoreClientApiArgumentsChecker (Stamatis Zampetakis, reviewed by 
Ayush Saxena)
 new ebaba14368 HIVE-26270: Wrong timestamps when reading Hive 3.1.x 
Parquet files with vectorized reader (Stamatis Zampetakis, reviewed by Peter 
Vary)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 data/files/employee_hive_3_1_3_us_pacific.parquet  | Bin 0 -> 446 bytes
 .../ql/io/parquet/ParquetRecordReaderBase.java |   9 +---
 .../io/parquet/read/DataWritableReadSupport.java   |  39 --
 ...TestHiveMetaStoreClientApiArgumentsChecker.java |   6 ---
 ...rquet_timestamp_int96_compatibility_hive3_1_3.q |  24 +
 ...t_timestamp_int96_compatibility_hive3_1_3.q.out |  60 +
 6 files changed, 110 insertions(+), 28 deletions(-)
 create mode 100644 data/files/employee_hive_3_1_3_us_pacific.parquet
 create mode 100644 
ql/src/test/queries/clientpositive/parquet_timestamp_int96_compatibility_hive3_1_3.q
 create mode 100644 
ql/src/test/results/clientpositive/llap/parquet_timestamp_int96_compatibility_hive3_1_3.q.out



[hive] 01/02: HIVE-26279: Drop unused requests from TestHiveMetaStoreClientApiArgumentsChecker (Stamatis Zampetakis, reviewed by Ayush Saxena)

2022-06-03 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 1537084a9a4e3278f0ab501d642d74627bf0cca1
Author: Stamatis Zampetakis 
AuthorDate: Tue Feb 8 15:50:28 2022 +0100

HIVE-26279: Drop unused requests from 
TestHiveMetaStoreClientApiArgumentsChecker (Stamatis Zampetakis, reviewed by 
Ayush Saxena)

Closes #3336
---
 .../ql/metadata/TestHiveMetaStoreClientApiArgumentsChecker.java | 6 --
 1 file changed, 6 deletions(-)

diff --git 
a/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreClientApiArgumentsChecker.java
 
b/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreClientApiArgumentsChecker.java
index dfda8a514e..6aefc44c56 100644
--- 
a/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreClientApiArgumentsChecker.java
+++ 
b/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreClientApiArgumentsChecker.java
@@ -132,17 +132,11 @@ public class TestHiveMetaStoreClientApiArgumentsChecker {
 
   @Test
   public void testGetPartitionsByNames2() throws HiveException {
-GetPartitionsByNamesRequest req = new GetPartitionsByNamesRequest();
-req.setDb_name(DB_NAME);
-req.setTbl_name(TABLE_NAME);
 hive.getPartitionsByNames(DB_NAME,TABLE_NAME,null, t);
   }
 
   @Test
   public void testGetPartitionsByNames3() throws HiveException {
-GetPartitionsByNamesRequest req = new GetPartitionsByNamesRequest();
-req.setDb_name(DB_NAME);
-req.setTbl_name(TABLE_NAME);
 hive.getPartitionsByNames(t, new ArrayList<>(), true);
   }
 



[hive] branch master updated: HIVE-26226: Exclude jdk.tools dep from hive-metastore in upgrade-acid (Sylwester Lachiewicz, reviewed by Stamatis Zampetakis)

2022-05-17 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new d9724ad765e HIVE-26226: Exclude jdk.tools dep from hive-metastore in 
upgrade-acid (Sylwester Lachiewicz, reviewed by Stamatis Zampetakis)
d9724ad765e is described below

commit d9724ad765e38c2f29805a7d0a4660c5e467e3e7
Author: Sylwester Lachiewicz 
AuthorDate: Thu May 12 11:58:00 2022 +0200

HIVE-26226: Exclude jdk.tools dep from hive-metastore in upgrade-acid 
(Sylwester Lachiewicz, reviewed by Stamatis Zampetakis)

The jdk.tools jars are not present in java versions > 8 thus there are
build problems when compiling with newer JDKs.

Exclude the dependency from hive-metastore (2.3.3) to avoid compilation
problems in recent JDKs.

It is safe to do so because the dependency will still be fetched
transitively by hadoop-common (2.7.2) when the appropriate maven (JDK)
profile is in use.

Closes #3284
---
 upgrade-acid/pre-upgrade/pom.xml | 4 
 1 file changed, 4 insertions(+)

diff --git a/upgrade-acid/pre-upgrade/pom.xml b/upgrade-acid/pre-upgrade/pom.xml
index ea7044b5a69..9fbcee4de0a 100644
--- a/upgrade-acid/pre-upgrade/pom.xml
+++ b/upgrade-acid/pre-upgrade/pom.xml
@@ -71,6 +71,10 @@
           <groupId>org.apache.curator</groupId>
           <artifactId>curator-framework</artifactId>
         </exclusion>
+        <exclusion>
+          <groupId>jdk.tools</groupId>
+          <artifactId>jdk.tools</artifactId>
+        </exclusion>
       </exclusions>
 
 



[hive] branch master updated: HIVE-26173: Upgrade derby to 10.14.2.0 (Hemanth Boyina, reviewed by Stamatis Zampetakis)

2022-05-17 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new dfc3689ab08 HIVE-26173: Upgrade derby to 10.14.2.0 (Hemanth Boyina, 
reviewed by Stamatis Zampetakis)
dfc3689ab08 is described below

commit dfc3689ab0828dc51d1cbce1c7cb590619eca816
Author: hemanthboyina 
AuthorDate: Mon Apr 25 22:50:12 2022 +0530

HIVE-26173: Upgrade derby to 10.14.2.0 (Hemanth Boyina, reviewed by 
Stamatis Zampetakis)

Closes #3243
---
 pom.xml  | 2 +-
 standalone-metastore/pom.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/pom.xml b/pom.xml
index dd6ab89a461..bc4e5fa1d83 100644
--- a/pom.xml
+++ b/pom.xml
@@ -121,7 +121,7 @@
 3.6.1
 2.7.0
 1.8
-    <derby.version>10.14.1.0</derby.version>
+    <derby.version>10.14.2.0</derby.version>
 3.1.0
 
0.1.2
 0.17.1
diff --git a/standalone-metastore/pom.xml b/standalone-metastore/pom.xml
index b64d68fe211..1599907c63c 100644
--- a/standalone-metastore/pom.xml
+++ b/standalone-metastore/pom.xml
@@ -65,7 +65,7 @@
 5.2.10
 3.2.0-release
 5.2.10
-    <derby.version>10.14.1.0</derby.version>
+    <derby.version>10.14.2.0</derby.version>
 2.5.0
 6.2.1.jre8
 8.0.27



[calcite-avatica] branch 1.17.0 created (now d56fcd004)

2022-05-17 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a change to branch 1.17.0
in repository https://gitbox.apache.org/repos/asf/calcite-avatica.git


  at d56fcd004 [CALCITE-4068] Prepare for Avatica 1.17.0 release and update 
release history

No new revisions were added by this update.



[hive] branch master updated: HIVE-26205: Incorrect scope for slf4j-api dependency in kafka-handler (Wechar Yu, reviewed by Stamatis Zampetakis)

2022-05-16 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 74e3f29b432 HIVE-26205: Incorrect scope for slf4j-api dependency in 
kafka-handler (Wechar Yu, reviewed by Stamatis Zampetakis)
74e3f29b432 is described below

commit 74e3f29b4328b5df5a1cd90de12ec3e35301afae
Author: wecharyu 
AuthorDate: Sat May 7 20:56:54 2022 +0800

HIVE-26205: Incorrect scope for slf4j-api dependency in kafka-handler 
(Wechar Yu, reviewed by Stamatis Zampetakis)

The classes in the kafka-handler module are using the slf4j-api thus
this dependency must be set at compile scope. Currently it is set at
test scope which makes the build fail in some recent maven versions (
e.g., 3.8.5).

The parent pom declares explicitly slf4j-api at compile scope so
removing all the references from kafka-handler/pom.xml is the way to
go.

Closes #3272
---
 kafka-handler/pom.xml | 21 -
 1 file changed, 21 deletions(-)

diff --git a/kafka-handler/pom.xml b/kafka-handler/pom.xml
index 4f4f8fab040..a9156848f32 100644
--- a/kafka-handler/pom.xml
+++ b/kafka-handler/pom.xml
@@ -43,12 +43,6 @@
       <artifactId>hive-exec</artifactId>
       <scope>provided</scope>
       <version>${project.version}</version>
-      <exclusions>
-        <exclusion>
-          <groupId>org.slf4j</groupId>
-          <artifactId>slf4j-api</artifactId>
-        </exclusion>
-      </exclusions>
     </dependency>
     <dependency>
       <groupId>org.apache.hive</groupId>
@@ -67,10 +61,6 @@
           <groupId>commons-beanutils</groupId>
           <artifactId>commons-beanutils</artifactId>
         </exclusion>
-        <exclusion>
-          <groupId>org.slf4j</groupId>
-          <artifactId>slf4j-api</artifactId>
-        </exclusion>
         <exclusion>
           <groupId>org.yaml</groupId>
           <artifactId>snakeyaml</artifactId>
@@ -80,12 +70,6 @@
     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-client</artifactId>
-      <exclusions>
-        <exclusion>
-          <groupId>org.slf4j</groupId>
-          <artifactId>slf4j-api</artifactId>
-        </exclusion>
-      </exclusions>
     </dependency>
     <dependency>
       <groupId>org.apache.kafka</groupId>
@@ -127,11 +111,6 @@
       <artifactId>zookeeper</artifactId>
       <scope>test</scope>
     </dependency>
-    <dependency>
-      <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-api</artifactId>
-      <scope>test</scope>
-    </dependency>
     <dependency>
       <groupId>io.confluent</groupId>
       <artifactId>kafka-avro-serializer</artifactId>



[calcite] branch main updated: [CALCITE-5140] Spark, Piglet tests fail in GitHub CI with OpenJ9

2022-05-12 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/main by this push:
 new bb17ab4a56 [CALCITE-5140] Spark, Piglet tests fail in GitHub CI with 
OpenJ9
bb17ab4a56 is described below

commit bb17ab4a56f5862c1da355f6e2bcf6031a05eca0
Author: Benchao Li 
AuthorDate: Wed May 11 22:10:07 2022 +0800

[CALCITE-5140] Spark, Piglet tests fail in GitHub CI with OpenJ9

Close apache/calcite#2802
---
 .github/workflows/main.yml | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml
index 60f20e59f5..ac12876aa8 100644
--- a/.github/workflows/main.yml
+++ b/.github/workflows/main.yml
@@ -142,7 +142,8 @@ jobs:
 with:
   job-id: jdk8-openj9
   remote-build-cache-proxy-enabled: false
-  arguments: --scan --no-parallel --no-daemon build javadoc
+  # Temporarily disable hadoop related tests due to 
https://github.com/eclipse-openj9/openj9/issues/14950
+  arguments: --scan --no-parallel --no-daemon -x :piglet:test -x 
:spark:test build javadoc
   - name: 'sqlline and sqllsh'
 run: |
   ./sqlline -e '!quit'



[hive] branch master updated: HIVE-26172: Upgrade ant to 1.10.12 (Hemanth Boyina, reviewed by Stamatis Zampetakis)

2022-05-11 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 1fc7a0c410 HIVE-26172: Upgrade ant to 1.10.12 (Hemanth Boyina, 
reviewed by Stamatis Zampetakis)
1fc7a0c410 is described below

commit 1fc7a0c410d9b8a93ebaa8a318f7664490dd7b1b
Author: hemanthboyina 
AuthorDate: Mon Apr 25 22:25:29 2022 +0530

HIVE-26172: Upgrade ant to 1.10.12 (Hemanth Boyina, reviewed by Stamatis 
Zampetakis)

Closes #3242
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index 048d0d72c0..dd6ab89a46 100644
--- a/pom.xml
+++ b/pom.xml
@@ -96,7 +96,7 @@
 3.0.0-M4
 
 1.10.1
-    <ant.version>1.10.9</ant.version>
+    <ant.version>1.10.12</ant.version>
 3.5.2
 1.5.7
 



[calcite-avatica] branch main updated: [CALCITE-5095] Support Java 18 and Guava 31.1-jre

2022-04-29 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/calcite-avatica.git


The following commit(s) were added to refs/heads/main by this push:
 new 360c0e7c8 [CALCITE-5095] Support Java 18 and Guava 31.1-jre
360c0e7c8 is described below

commit 360c0e7c84d754b7fffb64c0e021dbb1833ee283
Author: Benchao Li 
AuthorDate: Wed Apr 20 08:36:40 2022 +0800

[CALCITE-5095] Support Java 18 and Guava 31.1-jre

Close apache/calcite-avatica#178
---
 .github/workflows/main.yml | 6 +++---
 .travis.yml| 6 +++---
 gradle.properties  | 2 +-
 site/_docs/history.md  | 4 ++--
 4 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml
index d4b4e9576..7f7b0e52e 100644
--- a/.github/workflows/main.yml
+++ b/.github/workflows/main.yml
@@ -79,16 +79,16 @@ jobs:
 ./gradlew --no-parallel --no-daemon build javadoc
 
   mac:
-name: 'macOS (JDK 14)'
+name: 'macOS (JDK 18)'
 runs-on: macos-latest
 steps:
 - uses: actions/checkout@v2
   with:
 fetch-depth: 50
-- name: 'Set up JDK 14'
+- name: 'Set up JDK 18'
   uses: actions/setup-java@v1
   with:
-java-version: 14
+java-version: 18
 - name: 'Test'
   run: |
 ./gradlew --no-parallel --no-daemon build javadoc
diff --git a/.travis.yml b/.travis.yml
index 9cac5a144..471b93374 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -16,7 +16,7 @@
 #
 
 # Configuration file for Travis continuous integration.
-# See https://travis-ci.org/apache/calcite-avatica
+# See https://travis-ci.com/github/apache/calcite-avatica
 language: java
 matrix:
   fast_finish: true
@@ -50,11 +50,11 @@ matrix:
 - install: true
   jdk: openjdk11
   env:
-- GUAVA=31.0.1-jre # newest supported Guava version
+- GUAVA=31.1-jre # newest supported Guava version
   script:
 - ./gradlew $GRADLE_ARGS -Pguava.version=${GUAVA:-14.0.1} build
 - install: true
-  jdk: openjdk15
+  jdk: openjdk18
   env:
   script:
 - ./gradlew $GRADLE_ARGS build
diff --git a/gradle.properties b/gradle.properties
index d1f780aef..b43302bd8 100644
--- a/gradle.properties
+++ b/gradle.properties
@@ -60,7 +60,7 @@ bouncycastle.version=1.60
 dropwizard-metrics.version=4.0.5
 # We support Guava versions as old as 14.0.1 (the version used by Hive)
 # but prefer more recent versions.
-guava.version=31.0.1-jre
+guava.version=31.1-jre
 hamcrest.version=1.3
 hsqldb.version=2.4.1
 h2.version=1.4.197
diff --git a/site/_docs/history.md b/site/_docs/history.md
index 77d5edf0b..c81abe2ca 100644
--- a/site/_docs/history.md
+++ b/site/_docs/history.md
@@ -36,9 +36,9 @@ Apache Calcite Avatica 1.21.0 is under development.
 
 Compatibility: This release is tested
 on Linux, macOS, Microsoft Windows;
-using Oracle JDK 8, 9, 10, 11, 12, 13, 14, 15, 16, 17;
+using Oracle JDK 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18;
 using IBM Java 8;
-Guava versions 14.0.1 to 31.0.1-jre;
+Guava versions 14.0.1 to 31.1-jre;
 other software versions as specified in `gradle.properties`.
 
 Contributors to this release:



[hive] branch master updated: HIVE-25758: OOM due to recursive application of CBO rules (Alessandro Solimando, reviewed by Stamatis Zampetakis)

2022-04-27 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 7583142cbff HIVE-25758: OOM due to recursive application of CBO rules 
(Alessandro Solimando, reviewed by Stamatis Zampetakis)
7583142cbff is described below

commit 7583142cbffcb3958a546a9aaa15700bbc243df9
Author: Alessandro Solimando 
AuthorDate: Mon Jan 24 13:08:56 2022 +0100

HIVE-25758: OOM due to recursive application of CBO rules (Alessandro 
Solimando, reviewed by Stamatis Zampetakis)

Closes #2966
---
 .../java/org/apache/hadoop/hive/conf/HiveConf.java |  4 ++
 .../hive/ql/optimizer/calcite/HiveCalciteUtil.java | 52 ++
 .../HiveJoinPushTransitivePredicatesRule.java  | 82 +-
 .../hadoop/hive/ql/parse/CalcitePlanner.java   |  7 +-
 .../cbo_join_transitive_pred_loop_1.q  | 17 +
 .../cbo_join_transitive_pred_loop_2.q  | 24 +++
 .../cbo_join_transitive_pred_loop_3.q  | 23 ++
 .../cbo_join_transitive_pred_loop_4.q  | 23 ++
 .../llap/cbo_join_transitive_pred_loop_1.q.out | 75 
 .../llap/cbo_join_transitive_pred_loop_2.q.out | 74 +++
 .../llap/cbo_join_transitive_pred_loop_3.q.out | 67 ++
 .../llap/cbo_join_transitive_pred_loop_4.q.out | 73 +++
 12 files changed, 470 insertions(+), 51 deletions(-)

diff --git a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
index 99964fc7732..caf223dd91b 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
@@ -2530,6 +2530,10 @@ public class HiveConf extends Configuration {
 "If this config is true only pushed down filters remain in the 
operator tree, \n" +
 "and the original filter is removed. If this config is false, the 
original filter \n" +
 "is also left in the operator tree at the original place."),
+
HIVE_JOIN_DISJ_TRANSITIVE_PREDICATES_PUSHDOWN("hive.optimize.join.disjunctive.transitive.predicates.pushdown",
+true, "Whether to transitively infer disjunctive predicates across 
joins. \n"
++ "Disjunctive predicates are hard to simplify and pushing them down 
might lead to infinite rule matching "
++ "causing stackoverflow and OOM errors"),
 HIVEPOINTLOOKUPOPTIMIZER("hive.optimize.point.lookup", true,
  "Whether to transform OR clauses in Filter operators into IN 
clauses"),
 HIVEPOINTLOOKUPOPTIMIZERMIN("hive.optimize.point.lookup.min", 2,
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java
index 160bfb86f6c..264756f0413 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java
@@ -1214,6 +1214,58 @@ public class HiveCalciteUtil {
 }
   }
 
+  private static class DisjunctivePredicatesFinder extends RexVisitorImpl<Void> {
+// accounting for DeMorgan's law
+boolean inNegation = false;
+boolean hasDisjunction = false;
+
+public DisjunctivePredicatesFinder() {
+  super(true);
+}
+
+@Override
+public Void visitCall(RexCall call) {
+  switch (call.getKind()) {
+  case OR:
+if (inNegation) {
+  return super.visitCall(call);
+} else {
+  this.hasDisjunction = true;
+  return null;
+}
+  case AND:
+if (inNegation) {
+  this.hasDisjunction = true;
+  return null;
+} else {
+  return super.visitCall(call);
+}
+  case NOT:
+inNegation = !inNegation;
+return super.visitCall(call);
+  default:
+return super.visitCall(call);
+  }
+}
+  }
+
+  /**
+   * Returns whether the expression has disjunctions (OR) at any level of nesting.
+   * <ul>
+   *  <li> Example 1: OR(=($0, $1), IS NOT NULL($2))):INTEGER (OR in the top-level expression) </li>
+   *  <li> Example 2: NOT(AND(=($0, $1), IS NOT NULL($2))
+   *   this is equivalent to OR(<>($0, $1), IS NULL($2))</li>
+   *  <li> Example 3: AND(OR(=($0, $1), IS NOT NULL($2))) (OR in inner expression) </li>
+   * </ul>
+   * @param node the expression where to look for disjunctions.
+   * @return true if the given expressions contains a disjunction, false otherwise.
+   */
+  public static boolean hasDisjuction(RexNode node) {
+DisjunctivePredicatesFinder finder = new DisjunctivePredicatesFinder();
+node.accept(finder);
+return finder.hasDisjunction;
+  }
+
   /**
* Checks if any of the ex

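The DeMorgan's-law handling in the DisjunctivePredicatesFinder above can be sketched without any Calcite dependency. The `Node`/`Kind` types below are made-up stand-ins for Calcite's `RexCall`/`SqlKind`; this is an illustrative sketch of the technique, not Hive code:

```java
import java.util.Arrays;
import java.util.List;

public class DisjunctionDemo {
    public enum Kind { OR, AND, NOT, LEAF }

    public static class Node {
        final Kind kind;
        final List<Node> operands;
        public Node(Kind kind, Node... operands) {
            this.kind = kind;
            this.operands = Arrays.asList(operands);
        }
    }

    // True if the tree contains an effective disjunction at any nesting level.
    // Under an odd number of NOTs, AND acts as OR (DeMorgan's law), so the
    // negation parity is threaded through the recursion.
    public static boolean hasDisjunction(Node node, boolean inNegation) {
        switch (node.kind) {
            case OR:  if (!inNegation) return true; break;
            case AND: if (inNegation)  return true; break;
            case NOT: return hasDisjunction(node.operands.get(0), !inNegation);
            default:  break;
        }
        for (Node child : node.operands) {
            if (hasDisjunction(child, inNegation)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        Node a = new Node(Kind.LEAF), b = new Node(Kind.LEAF);
        System.out.println(hasDisjunction(new Node(Kind.AND, a, b), false));                     // false
        System.out.println(hasDisjunction(new Node(Kind.OR, a, b), false));                      // true
        // NOT(AND(a, b)) is OR(NOT a, NOT b), hence a hidden disjunction:
        System.out.println(hasDisjunction(new Node(Kind.NOT, new Node(Kind.AND, a, b)), false)); // true
    }
}
```

Unlike the visitor above, which mutates a shared `inNegation` flag, the sketch scopes the parity per subtree; the detection outcome for the examples in the javadoc is the same.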
[hive] branch master updated: HIVE-26127: INSERT OVERWRITE throws FileNotFound when destination partition is deleted (Yu-Wen Lai, reviewed by Stamatis Zampetakis)

2022-04-14 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 260924050b HIVE-26127: INSERT OVERWRITE throws FileNotFound when 
destination partition is deleted (Yu-Wen Lai, reviewed by Stamatis Zampetakis)
260924050b is described below

commit 260924050b11d3342b44091797d88b6f489dcaef
Author: Yu-Wen Lai 
AuthorDate: Fri Apr 8 17:56:32 2022 -0700

HIVE-26127: INSERT OVERWRITE throws FileNotFound when destination partition 
is deleted (Yu-Wen Lai, reviewed by Stamatis Zampetakis)

Closes #3198
---
 ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java  |  2 +-
 ql/src/test/queries/clientpositive/insert_overwrite.q |  4 
 .../test/results/clientpositive/llap/insert_overwrite.q.out   | 11 +++
 3 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 
b/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
index 4ed822aa7b..ac21ecd95a 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
@@ -5377,7 +5377,7 @@ private void constructOneLBLocationMap(FileStatus fSta,
   // But not sure why we changed not to delete the oldPath in HIVE-8750 if 
it is
   // not the destf or its subdir?
   isOldPathUnderDestf = isSubDir(oldPath, destPath, oldFs, destFs, false);
-  if (isOldPathUnderDestf) {
+  if (isOldPathUnderDestf && oldFs.exists(oldPath)) {
 cleanUpOneDirectoryForReplace(oldPath, oldFs, pathFilter, conf, purge, 
isNeedRecycle);
   }
 } catch (IOException e) {
diff --git a/ql/src/test/queries/clientpositive/insert_overwrite.q 
b/ql/src/test/queries/clientpositive/insert_overwrite.q
index 357227e4af..43f0bb29af 100644
--- a/ql/src/test/queries/clientpositive/insert_overwrite.q
+++ b/ql/src/test/queries/clientpositive/insert_overwrite.q
@@ -77,6 +77,10 @@ SELECT count(*) FROM ext_part;
 
 SELECT * FROM ext_part ORDER BY par, col;
 
+-- removing a partition manually should not fail the next insert overwrite 
operation
+dfs -rm -r ${hiveconf:hive.metastore.warehouse.dir}/ext_part/par=1;
+INSERT OVERWRITE TABLE ext_part PARTITION (par) SELECT * FROM b;
+
 drop table ext_part;
 drop table b;
 
diff --git a/ql/src/test/results/clientpositive/llap/insert_overwrite.q.out 
b/ql/src/test/results/clientpositive/llap/insert_overwrite.q.out
index 626a8e2c9f..f92c6c6254 100644
--- a/ql/src/test/results/clientpositive/llap/insert_overwrite.q.out
+++ b/ql/src/test/results/clientpositive/llap/insert_overwrite.q.out
@@ -308,6 +308,17 @@ POSTHOOK: Input: default@ext_part@par=2
 third  1
 first  2
 second 2
+#### A masked pattern was here ####
+PREHOOK: query: INSERT OVERWRITE TABLE ext_part PARTITION (par) SELECT * FROM b
+PREHOOK: type: QUERY
+PREHOOK: Input: default@b
+PREHOOK: Output: default@ext_part
+POSTHOOK: query: INSERT OVERWRITE TABLE ext_part PARTITION (par) SELECT * FROM 
b
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@b
+POSTHOOK: Output: default@ext_part
+POSTHOOK: Output: default@ext_part@par=1
+POSTHOOK: Lineage: ext_part PARTITION(par=1).col SIMPLE 
[(b)b.FieldSchema(name:par, type:string, comment:null), ]
 PREHOOK: query: drop table ext_part
 PREHOOK: type: DROPTABLE
 PREHOOK: Input: default@ext_part


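The one-line guard in the HIVE-26127 patch above (`isOldPathUnderDestf && oldFs.exists(oldPath)`) avoids a FileNotFoundException when the destination partition directory was removed out-of-band. A minimal sketch of the same check-before-cleanup pattern, using plain `java.nio` instead of Hive's Hadoop `FileSystem` API (the helper name is hypothetical):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class OverwriteCleanup {
    // Recursively delete the old partition directory, but tolerate it being
    // already gone (e.g. removed manually via `dfs -rm -r`).
    static boolean cleanUpForReplace(Path oldPath) throws IOException {
        if (!Files.exists(oldPath)) {
            return false; // nothing to clean; not an error for INSERT OVERWRITE
        }
        try (Stream<Path> walk = Files.walk(oldPath)) {
            // Delete children before parents.
            walk.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.delete(p);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
        return true;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("part_par_1");
        Files.createFile(dir.resolve("data.orc"));
        System.out.println(cleanUpForReplace(dir));                    // true: cleaned
        System.out.println(cleanUpForReplace(dir.resolve("missing"))); // false: skipped
    }
}
```

Without the existence check, the recursive delete would fail on the first call for a missing directory, which is exactly the failure mode the q-file test above reproduces.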

[hive] branch master updated: HIVE-26139: Encode only '#' characters in HBaseStorageHandler authorization URL (Steve Carlin, reviewed by Alessandro Solimando, Stamatis Zampetakis)

2022-04-13 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new f94a6efbf5 HIVE-26139: Encode only '#' characters in 
HBaseStorageHandler authorization URL (Steve Carlin, reviewed by Alessandro 
Solimando, Stamatis Zampetakis)
f94a6efbf5 is described below

commit f94a6efbf5a37d82e6160ff816c86f7240f37a70
Author: Steve Carlin 
AuthorDate: Tue Apr 12 11:32:25 2022 -0700

HIVE-26139: Encode only '#' characters in HBaseStorageHandler authorization 
URL (Steve Carlin, reviewed by Alessandro Solimando, Stamatis Zampetakis)

Remove the global encoding of the authorization URL since it has some
undesirable side effects.

Closes #3206
---
 .../org/apache/hadoop/hive/hbase/HBaseStorageHandler.java | 11 +++
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git 
a/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java 
b/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java
index 03d455f095..b5cecccd49 100644
--- 
a/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java
+++ 
b/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java
@@ -19,10 +19,8 @@
 package org.apache.hadoop.hive.hbase;
 
 import java.io.IOException;
-import java.io.UnsupportedEncodingException;
 import java.net.URI;
 import java.net.URISyntaxException;
-import java.net.URLEncoder;
 import java.util.ArrayList;
 import java.util.LinkedHashSet;
 import java.util.List;
@@ -310,12 +308,9 @@ public class HBaseStorageHandler extends 
DefaultStorageHandler
 return new URI(URIString);
   }
 
-  private static String encodeString(String rawString) throws 
URISyntaxException {
-try {
-  return rawString != null ? URLEncoder.encode(rawString, "UTF-8"): null;
-} catch (UnsupportedEncodingException e) {
-  throw new URISyntaxException(rawString, "Could not URLEncode string");
-}
+  private static String encodeString(String rawString) {
+// Only url encode hash code value for now
+return rawString != null ? rawString.replace("#", "%23") : null;
   }
 
   /**


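The narrowed `encodeString` above replaces full URL-encoding with '#'-only encoding because `URLEncoder` also rewrites characters that are structurally meaningful in the HBase columns mapping. A small sketch of the difference (the mapping string is a made-up example):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class HashEncodeDemo {
    // Mirrors the '#'-only strategy in the patch above.
    static String encodeHashOnly(String raw) {
        return raw == null ? null : raw.replace("#", "%23");
    }

    public static void main(String[] args) {
        String mapping = "cf:col#b,cf:col2";
        // Full encoding also rewrites ':' and ',', mangling the mapping layout:
        System.out.println(URLEncoder.encode(mapping, StandardCharsets.UTF_8)); // cf%3Acol%23b%2Ccf%3Acol2
        // Encoding only '#' neutralizes the URI fragment separator and nothing else:
        System.out.println(encodeHashOnly(mapping));                            // cf:col%23b,cf:col2
    }
}
```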

[calcite] branch master updated: [CALCITE-4936] Generalize FilterCalcMergeRule/ProjectCalcMergeRule to accept any Filter/Project/Calc operator

2022-04-11 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/master by this push:
 new ef2cc1df2 [CALCITE-4936] Generalize 
FilterCalcMergeRule/ProjectCalcMergeRule to accept any Filter/Project/Calc 
operator
ef2cc1df2 is described below

commit ef2cc1df21a73ad0268ccb869c976b11eff319b4
Author: maksim 
AuthorDate: Tue Dec 14 16:21:33 2021 +0300

[CALCITE-4936] Generalize FilterCalcMergeRule/ProjectCalcMergeRule to 
accept any Filter/Project/Calc operator

Close apache/calcite#2646
---
 .../apache/calcite/rel/rules/CalcMergeRule.java| 10 +-
 .../calcite/rel/rules/FilterCalcMergeRule.java | 15 +++---
 .../calcite/rel/rules/ProjectCalcMergeRule.java| 20 ---
 site/_docs/history.md  | 23 ++
 4 files changed, 43 insertions(+), 25 deletions(-)

diff --git a/core/src/main/java/org/apache/calcite/rel/rules/CalcMergeRule.java 
b/core/src/main/java/org/apache/calcite/rel/rules/CalcMergeRule.java
index bac42791b..3776873e3 100644
--- a/core/src/main/java/org/apache/calcite/rel/rules/CalcMergeRule.java
+++ b/core/src/main/java/org/apache/calcite/rel/rules/CalcMergeRule.java
@@ -28,13 +28,13 @@ import org.immutables.value.Value;
 
 /**
  * Planner rule that merges a
- * {@link org.apache.calcite.rel.logical.LogicalCalc} onto a
- * {@link org.apache.calcite.rel.logical.LogicalCalc}.
+ * {@link org.apache.calcite.rel.core.Calc} onto a
+ * {@link org.apache.calcite.rel.core.Calc}.
  *
- * <p>The resulting {@link org.apache.calcite.rel.logical.LogicalCalc} has the
+ * <p>The resulting {@link org.apache.calcite.rel.core.Calc} has the
  * same project list as the upper
- * {@link org.apache.calcite.rel.logical.LogicalCalc}, but expressed in terms 
of
- * the lower {@link org.apache.calcite.rel.logical.LogicalCalc}'s inputs.
+ * {@link org.apache.calcite.rel.core.Calc}, but expressed in terms of
+ * the lower {@link org.apache.calcite.rel.core.Calc}'s inputs.
  *
  * @see CoreRules#CALC_MERGE
  */
diff --git 
a/core/src/main/java/org/apache/calcite/rel/rules/FilterCalcMergeRule.java 
b/core/src/main/java/org/apache/calcite/rel/rules/FilterCalcMergeRule.java
index 7a9db73f7..d932e3e7b 100644
--- a/core/src/main/java/org/apache/calcite/rel/rules/FilterCalcMergeRule.java
+++ b/core/src/main/java/org/apache/calcite/rel/rules/FilterCalcMergeRule.java
@@ -21,7 +21,6 @@ import org.apache.calcite.plan.RelRule;
 import org.apache.calcite.rel.core.Calc;
 import org.apache.calcite.rel.core.Filter;
 import org.apache.calcite.rel.logical.LogicalCalc;
-import org.apache.calcite.rel.logical.LogicalFilter;
 import org.apache.calcite.rex.RexBuilder;
 import org.apache.calcite.rex.RexProgram;
 import org.apache.calcite.rex.RexProgramBuilder;
@@ -31,9 +30,9 @@ import org.immutables.value.Value;
 
 /**
  * Planner rule that merges a
- * {@link org.apache.calcite.rel.logical.LogicalFilter} and a
- * {@link org.apache.calcite.rel.logical.LogicalCalc}. The
- * result is a {@link org.apache.calcite.rel.logical.LogicalCalc}
+ * {@link org.apache.calcite.rel.core.Filter} and a
+ * {@link org.apache.calcite.rel.core.Calc}. The
+ * result is a {@link org.apache.calcite.rel.core.Calc}
  * whose filter condition is the logical AND of the two.
  *
  * @see FilterMergeRule
@@ -60,8 +59,8 @@ public class FilterCalcMergeRule
   //~ Methods 
 
   @Override public void onMatch(RelOptRuleCall call) {
-final LogicalFilter filter = call.rel(0);
-final LogicalCalc calc = call.rel(1);
+final Filter filter = call.rel(0);
+final Calc calc = call.rel(1);
 
 // Don't merge a filter onto a calc which contains windowed aggregates.
 // That would effectively be pushing a multiset down through a filter.
@@ -87,8 +86,8 @@ public class FilterCalcMergeRule
 topProgram,
 bottomProgram,
 rexBuilder);
-final LogicalCalc newCalc =
-LogicalCalc.create(calc.getInput(), mergedProgram);
+final Calc newCalc =
+calc.copy(calc.getTraitSet(), calc.getInput(), mergedProgram);
 call.transformTo(newCalc);
   }
 
diff --git 
a/core/src/main/java/org/apache/calcite/rel/rules/ProjectCalcMergeRule.java 
b/core/src/main/java/org/apache/calcite/rel/rules/ProjectCalcMergeRule.java
index 31106738e..e0b1640d5 100644
--- a/core/src/main/java/org/apache/calcite/rel/rules/ProjectCalcMergeRule.java
+++ b/core/src/main/java/org/apache/calcite/rel/rules/ProjectCalcMergeRule.java
@@ -35,13 +35,13 @@ import org.immutables.value.Value;
 
 /**
  * Planner rule that merges a
- * {@link org.apache.calcite.rel.logical.LogicalProject} and a
- * {@link org.apache.calcite.rel.logical.LogicalCalc}.
+ * {@link org.apache.calcite.rel.core.Project} and a
+ * {@link org.apache.calcite.rel.core.Calc

[hive] branch master updated (2f3dd9a5b1 -> e8f3a6cdc2)

2022-04-07 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


from 2f3dd9a5b1 HIVE-26118: [Standalone beeline] Jar name mismatch in 
assembly (Naveen Gangam, reviewed by Zhihua Deng) (#3180)
 add 71b62c68ef HIVE-26119: Remove unnecessary Exceptions from DDLPlanUtils 
(Soumyakanti Das, reviewed by Stamatis Zampetakis)
 add 73cbab65ea HIVE-26019: Upgrade com.jayway.jsonpath from 2.4.0 to 2.7.0 
(Stamatis Zampetakis, reviewed by Alessandro Solimando, Krisztian Kasa)
 add e8f3a6cdc2 HIVE-26020: Set dependency scope for json-path, 
commons-compiler and janino to runtime (Stamatis Zampetakis, reviewed by 
Alessandro Solimando, Krisztian Kasa)

No new revisions were added by this update.

Summary of changes:
 pom.xml  | 5 -
 ql/src/java/org/apache/hadoop/hive/ql/exec/DDLPlanUtils.java | 8 +++-
 ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java  | 5 ++---
 3 files changed, 9 insertions(+), 9 deletions(-)



[hive] 02/02: HIVE-26067: Remove unused core directory and duplicate DerbyPolicy class (Stamatis Zampetakis, reviewed by Peter Vary)

2022-03-31 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit d22864ff734699c65404c80d7b67b102dbe3e873
Author: Stamatis Zampetakis 
AuthorDate: Thu Mar 24 11:01:31 2022 +0100

HIVE-26067: Remove unused core directory and duplicate DerbyPolicy class 
(Stamatis Zampetakis, reviewed by Peter Vary)

There is another identical copy of DerbyPolicy.java inside the hcatalog
module.

Closes #3135
---
 .../java/org/apache/hive/hcatalog/DerbyPolicy.java | 90 --
 1 file changed, 90 deletions(-)

diff --git a/core/src/test/java/org/apache/hive/hcatalog/DerbyPolicy.java 
b/core/src/test/java/org/apache/hive/hcatalog/DerbyPolicy.java
deleted file mode 100644
index cecf6dc..000
--- a/core/src/test/java/org/apache/hive/hcatalog/DerbyPolicy.java
+++ /dev/null
@@ -1,90 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hive.hcatalog;
-
-import org.apache.derby.security.SystemPermission;
-
-import java.security.CodeSource;
-import java.security.Permission;
-import java.security.PermissionCollection;
-import java.security.Policy;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.Enumeration;
-import java.util.Iterator;
-
-/**
- * A security policy that grants usederbyinternals
- *
- * <p>
- *   HCatalog tests use Security Manager to handle exits.  With Derby version 10.14.1, if a
- *   security manager is configured, embedded Derby requires usederbyinternals permission, and
- *   that is checked directly using AccessController.checkPermission.  This class will be used to
- *   setup a security policy to grant usederbyinternals, in tests that use NoExitSecurityManager.
- * </p>
-public class DerbyPolicy extends Policy {
-
-  private static PermissionCollection perms;
-
-  public DerbyPolicy() {
-super();
-if (perms == null) {
-  perms = new DerbyPermissionCollection();
-  addPermissions();
-}
-  }
-
-  @Override
-  public PermissionCollection getPermissions(CodeSource codesource) {
-return perms;
-  }
-
-  private void addPermissions() {
-SystemPermission systemPermission = new SystemPermission("engine", 
"usederbyinternals");
-perms.add(systemPermission);
-  }
-
-  class DerbyPermissionCollection extends PermissionCollection {
-
-ArrayList<Permission> perms = new ArrayList<Permission>();
-
-public void add(Permission p) {
-  perms.add(p);
-}
-
-public boolean implies(Permission p) {
-  for (Iterator i = perms.iterator(); i.hasNext();) {
-if (((Permission) i.next()).implies(p)) {
-  return true;
-}
-  }
-  return false;
-}
-
-public Enumeration elements() {
-  return Collections.enumeration(perms);
-}
-
-public boolean isReadOnly() {
-  return false;
-}
-  }
-}
-

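The removed DerbyPolicy's inner `DerbyPermissionCollection` follows the standard `java.security.PermissionCollection` pattern: collect permissions and answer `implies()` by scanning them. A condensed, self-contained sketch of that pattern (`EnginePermission` is a hypothetical stand-in for Derby's `SystemPermission`, which is not on the classpath here):

```java
import java.security.BasicPermission;
import java.security.Permission;
import java.security.PermissionCollection;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Enumeration;

public class SimplePermissions extends PermissionCollection {
    // BasicPermission gives exact-name and wildcard matching for free.
    public static class EnginePermission extends BasicPermission {
        public EnginePermission(String name) { super(name); }
    }

    private final ArrayList<Permission> perms = new ArrayList<>();

    @Override public void add(Permission p) { perms.add(p); }

    @Override public boolean implies(Permission p) {
        for (Permission held : perms) {
            if (held.implies(p)) return true;
        }
        return false;
    }

    @Override public Enumeration<Permission> elements() {
        return Collections.enumeration(perms);
    }

    public static void main(String[] args) {
        SimplePermissions perms = new SimplePermissions();
        perms.add(new EnginePermission("usederbyinternals"));
        System.out.println(perms.implies(new EnginePermission("usederbyinternals"))); // true
        System.out.println(perms.implies(new EnginePermission("other")));             // false
    }
}
```

A `Policy` subclass like DerbyPolicy simply returns such a collection from `getPermissions(CodeSource)`, so every code source is granted the listed permissions.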

[hive] branch master updated (4d49169 -> d22864f)

2022-03-31 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 4d49169  HIVE-26080: Upgrade accumulo-core to 1.10.1 (Ashish Sharma, 
reviewed by Adesh Rao)
 new 2ac9b5f  HIVE-26068: Add README with build instructions to the src 
tarball (Stamatis Zampetakis, reviewed by Peter Vary)
 new d22864f  HIVE-26067: Remove unused core directory and duplicate 
DerbyPolicy class (Stamatis Zampetakis, reviewed by Peter Vary)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 README.md  |  3 +
 .../java/org/apache/hive/hcatalog/DerbyPolicy.java | 90 --
 packaging/src/main/assembly/src.xml|  2 +-
 3 files changed, 4 insertions(+), 91 deletions(-)
 delete mode 100644 core/src/test/java/org/apache/hive/hcatalog/DerbyPolicy.java


[hive] 01/02: HIVE-26068: Add README with build instructions to the src tarball (Stamatis Zampetakis, reviewed by Peter Vary)

2022-03-31 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 2ac9b5f09000cdf041043659ba5623a4bd653a85
Author: Stamatis Zampetakis 
AuthorDate: Thu Mar 24 11:38:02 2022 +0100

HIVE-26068: Add README with build instructions to the src tarball (Stamatis 
Zampetakis, reviewed by Peter Vary)

Closes #3136
---
 README.md   | 3 +++
 packaging/src/main/assembly/src.xml | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index fe5c456..53a2115 100644
--- a/README.md
+++ b/README.md
@@ -65,6 +65,9 @@ Getting Started
 - Installation Instructions and a quick tutorial:
   https://cwiki.apache.org/confluence/display/Hive/GettingStarted
 
+- Instructions to build Hive from source:
+  
https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-BuildingHivefromSource
+
 - A longer tutorial that covers more features of HiveQL:
   https://cwiki.apache.org/confluence/display/Hive/Tutorial
 
diff --git a/packaging/src/main/assembly/src.xml 
b/packaging/src/main/assembly/src.xml
index d300816..9acb6b5 100644
--- a/packaging/src/main/assembly/src.xml
+++ b/packaging/src/main/assembly/src.xml
@@ -56,7 +56,7 @@
 <include>.gitignore</include>
 <include>.reviewboardrc</include>
 <include>DEVNOTES</include>
-<include>README.txt</include>
+<include>README*</include>
 <include>LICENSE</include>
 <include>NOTICE</include>
 <include>CHANGELOG</include>


[hive] branch master updated: HIVE-26015: CREATE HBase table fails when SERDEPROPERTIES contain special characters (Steve Carlin, reviewed by Alessandro Solimando, Stamatis Zampetakis)

2022-03-22 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 2bdec6f  HIVE-26015: CREATE HBase table fails when SERDEPROPERTIES 
contain special characters (Steve Carlin, reviewed by Alessandro Solimando, 
Stamatis Zampetakis)
2bdec6f is described below

commit 2bdec6f63d217cb42720fd7fe5cee804a2e5803c
Author: Steve Carlin 
AuthorDate: Tue Mar 8 12:29:04 2022 -0800

HIVE-26015: CREATE HBase table fails when SERDEPROPERTIES contain special 
characters (Steve Carlin, reviewed by Alessandro Solimando, Stamatis Zampetakis)

Fields in the Serde Properties can have a hash tag (#) in it, so the
URI needs to be URLEncoded.

Closes #3084
---
 .../hadoop/hive/hbase/HBaseStorageHandler.java | 31 ++---
 .../hadoop/hive/hbase/TestHBaseStorageHandler.java | 76 ++
 2 files changed, 99 insertions(+), 8 deletions(-)

diff --git 
a/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java 
b/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java
index 302c09c..03d455f 100644
--- 
a/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java
+++ 
b/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java
@@ -19,8 +19,10 @@
 package org.apache.hadoop.hive.hbase;
 
 import java.io.IOException;
+import java.io.UnsupportedEncodingException;
 import java.net.URI;
 import java.net.URISyntaxException;
+import java.net.URLEncoder;
 import java.util.ArrayList;
 import java.util.LinkedHashSet;
 import java.util.List;
@@ -293,14 +295,27 @@ public class HBaseStorageHandler extends 
DefaultStorageHandler
   public URI getURIForAuth(Table table) throws URISyntaxException {
 Map<String, String> tableProperties = HiveCustomStorageHandlerUtils.getTableProperties(table);
 hbaseConf = getConf();
-String hbase_host = tableProperties.containsKey(HBASE_HOST_NAME)? 
tableProperties.get(HBASE_HOST_NAME) : hbaseConf.get(HBASE_HOST_NAME);
-String hbase_port = tableProperties.containsKey(HBASE_CLIENT_PORT)? 
tableProperties.get(HBASE_CLIENT_PORT) : hbaseConf.get(HBASE_CLIENT_PORT);
-String table_name = 
tableProperties.getOrDefault(HBaseSerDe.HBASE_TABLE_NAME, null);
-String column_family = 
tableProperties.getOrDefault(HBaseSerDe.HBASE_COLUMNS_MAPPING, null);
-if (column_family != null)
-  return new 
URI(HBASE_PREFIX+"//"+hbase_host+":"+hbase_port+"/"+table_name+"/"+column_family);
-else
-  return new 
URI(HBASE_PREFIX+"//"+hbase_host+":"+hbase_port+"/"+table_name);
+String hbase_host = tableProperties.getOrDefault(HBASE_HOST_NAME,
+hbaseConf.get(HBASE_HOST_NAME));
+String hbase_port = tableProperties.getOrDefault(HBASE_CLIENT_PORT,
+hbaseConf.get(HBASE_CLIENT_PORT));
+String table_name = 
encodeString(tableProperties.getOrDefault(HBaseSerDe.HBASE_TABLE_NAME,
+null));
+String column_family = encodeString(tableProperties.getOrDefault(
+HBaseSerDe.HBASE_COLUMNS_MAPPING, null));
+String URIString = HBASE_PREFIX + "//" + hbase_host + ":" + hbase_port + 
"/" + table_name;
+if (column_family != null) {
+  URIString += "/" + column_family;
+}
+return new URI(URIString);
+  }
+
+  private static String encodeString(String rawString) throws 
URISyntaxException {
+try {
+  return rawString != null ? URLEncoder.encode(rawString, "UTF-8"): null;
+} catch (UnsupportedEncodingException e) {
+  throw new URISyntaxException(rawString, "Could not URLEncode string");
+}
   }
 
   /**
diff --git 
a/hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseStorageHandler.java
 
b/hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseStorageHandler.java
index 8c8702a..b12df94 100644
--- 
a/hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseStorageHandler.java
+++ 
b/hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseStorageHandler.java
@@ -17,9 +17,16 @@
  */
 package org.apache.hadoop.hive.hbase;
 
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.HashMap;
+import java.util.Map;
 import java.util.Properties;
 
 import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.api.SerDeInfo;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.metastore.api.Table;
 import org.apache.hadoop.hive.ql.plan.TableDesc;
 import org.apache.hadoop.mapred.JobConf;
 import org.junit.Assert;
@@ -46,6 +53,58 @@ public class TestHBaseStorageHandler {
 jobConfToConfigure.get("hbase.some.fake.option.from.xml.file") != 
null);
   }
 
+  @Test
+  public void testGetUriForAuthEmptyT

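The `getURIForAuth`/`encodeString` pair tested above interacts directly with `java.net.URI` parsing: an unencoded '#' in a columns mapping starts the URI fragment and silently truncates the path. A sketch of that failure mode and the fix (host, port, and table names are made up):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class AuthUriDemo {
    public static void main(String[] args) throws URISyntaxException {
        String mapping = "cf:col#b"; // hypothetical hbase.columns.mapping value

        // Unencoded: '#' is the fragment separator, so the mapping is cut short.
        URI raw = new URI("hbase://host:2181/tbl/" + mapping);
        System.out.println(raw.getPath());     // /tbl/cf:col
        System.out.println(raw.getFragment()); // b

        // Percent-encoding '#' keeps the whole mapping in the path;
        // getPath() decodes %23 back to '#'.
        URI encoded = new URI("hbase://host:2181/tbl/" + mapping.replace("#", "%23"));
        System.out.println(encoded.getPath()); // /tbl/cf:col#b
    }
}
```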
[hive] branch master updated: HIVE-26022: Error: ORA-00904 when initializing metastore schema in Oracle (Stamatis Zampetakis, reviewed by Laszlo Bodor)

2022-03-10 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new d696b34  HIVE-26022: Error: ORA-00904 when initializing metastore 
schema in Oracle (Stamatis Zampetakis, reviewed by Laszlo Bodor)
d696b34 is described below

commit d696b34a5765fe950ebe4bfffd36b9ea914dfaab
Author: Stamatis Zampetakis 
AuthorDate: Wed Mar 9 17:08:38 2022 +0100

HIVE-26022: Error: ORA-00904 when initializing metastore schema in Oracle 
(Stamatis Zampetakis, reviewed by Laszlo Bodor)

Closes #3088
---
 Jenkinsfile | 2 +-
 .../metastore-server/src/main/sql/oracle/hive-schema-4.0.0.oracle.sql   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 70d5b91..8d16e60 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -257,7 +257,7 @@ fi
   }
 
   def branches = [:]
-  for (def d in ['derby','postgres','mysql']) {
+  for (def d in ['derby','postgres','mysql','oracle']) {
 def dbType=d
 def splitName = "init@$dbType"
 branches[splitName] = {
diff --git 
a/standalone-metastore/metastore-server/src/main/sql/oracle/hive-schema-4.0.0.oracle.sql
 
b/standalone-metastore/metastore-server/src/main/sql/oracle/hive-schema-4.0.0.oracle.sql
index 166af8a..726fd34 100644
--- 
a/standalone-metastore/metastore-server/src/main/sql/oracle/hive-schema-4.0.0.oracle.sql
+++ 
b/standalone-metastore/metastore-server/src/main/sql/oracle/hive-schema-4.0.0.oracle.sql
@@ -1281,7 +1281,7 @@ CREATE TABLE "REPLICATION_METRICS" (
   "RM_METADATA" varchar2(4000),
   "RM_PROGRESS" varchar2(4000),
   "RM_START_TIME" integer NOT NULL,
-  "MESSAGE_FORMAT" VARCHAR(16) DEFAULT 'json-0.2',
+  "MESSAGE_FORMAT" VARCHAR(16) DEFAULT 'json-0.2'
 );
 
 --Create indexes for the replication metrics table


[calcite] branch site updated (c3dbf52 -> dcbc493)

2022-03-10 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a change to branch site
in repository https://gitbox.apache.org/repos/asf/calcite.git.


omit c3dbf52  [CALCITE-5019] Avoid multiple scans when table is 
ProjectableFilterableTable and projections and filters act on different columns
omit b125ab9  Following [CALCITE-1794], update DruidDateTimeUtils and plans 
in Druid adapter
omit 2621486  [CALCITE-5030] Upgrade jsonpath version from 2.4.0 to 2.7.0
omit 7a117c7  Site: Reorganise website update process in README & howto
omit 8b62479  [CALCITE-1794] Expressions with numeric comparisons are not 
simplified when CAST is present
omit c9ea3e6  [CALCITE-5025] Upgrade commons-io version from 2.4 to 2.11.0
omit 316e575  [CALCITE-5008] Ignore synthetic and static methods in 
MetadataDef
omit 296fc3e  [CALCITE-3673] ListTransientTable should not leave tables in 
the schema [CALCITE-4054] RepeatUnion containing a Correlate with a 
transientScan on its RHS causes NPE
omit 4fb1a42  [CALCITE-5011] CassandraAdapterDataTypesTest fails with 
initialization error
omit 2376a3a  [CALCITE-4912] Confusing javadoc of RexSimplify.simplify
omit 1f660d5  [CALCITE-4683] IN-list converted to JOIN throws type mismatch 
exception
omit c310f7c  [CALCITE-4323] If a view definition has an ORDER BY clause, 
retain the sort if the view is used in a query at top level
omit 43ed633  Fix typo in filterable-model.yaml
omit 4663f33  [CALCITE-4995] AssertionError caused by RelFieldTrimmer on 
SEMI/ANTI join
omit 4c73e85  [CALCITE-4988] ((A IS NOT NULL OR B) AND A IS NOT NULL) can't 
be simplify to (A IS NOT NULL) When A is deterministic
omit 07edf27  [CALCITE-5007] Upgrade H2 database version to 2.1.210
omit dc3e7d3  [CALCITE-5006] Gradle tasks for launching JDBC integration 
tests are not working
omit 9cecf84  [CALCITE-4997] Keep APPROX_COUNT_DISTINCT in some SqlDialects
omit b4a5768  [CALCITE-4996] In RelJson, add a readExpression method that 
can convert JSON to a RexNode expression
omit 67ba007  [CALCITE-4872] Add UNKNOWN value to enum SqlTypeName, 
distinct from the NULL type
omit 466fb42  [CALCITE-4702] Error when executing query with GROUP BY 
constant via JDBC adapter
omit bb89b92  [CALCITE-4994] SQL-to-RelNode conversion is slow if table 
contains hundreds of fields
omit 28f4195  [CALCITE-4980] Babel parser support MySQL NULL-safe equal 
operator '<=>' (xurenhe&)
omit 909e134  [CALCITE-4885] Fluent test fixtures so that dependent 
projects can write parser, validator and rules tests
omit 21b8852  [CALCITE-4991] Improve RuleEventLogger to also print input 
rels in FULL_PLAN mode
omit 48aa946  [CALCITE-4986] Make HepProgram thread-safe
omit 83c2911  Corrected json stream property name
omit 9d02d45  [CALCITE-4967] Support SQL hints for temporal table join
omit 4bc5cf1  [CALCITE-4965] IS NOT NULL failed in Elasticsearch Adapter
omit 8cd2414  [CALCITE-4977] Support Snapshot in RelMdColumnOrigins
omit 66835ab  [CALCITE-4901] JDBC adapter incorrectly adds ORDER BY columns 
to the SELECT list of generated SQL query
omit 3579eec  [CALCITE-4953] Deprecate TableAccessMap class
omit 628d7ac  Update javacc official website
omit 6d05e10  [CALCITE-3627] Incorrect null semantic for ROW function
omit 770561e  [CALCITE-4973] Upgrade log4j2 version to 2.17.1
omit 1ce4f05  [CALCITE-4960] Enable unit tests in Elasticsearch Adapter
omit d3d4821  Remove unused package-private RelNullShuttle class
omit 8916c41  [CALCITE-4963] Make it easier to implement interface 
SqlDialectFactory
omit 82be1ec  [CALCITE-4968] Use TOP N for MsSQL instead of FETCH without 
OFFSET
omit ffdb109  [CALCITE-4952] Introduce a simplistic RelMetadataQuery option
omit 4c7de7e  Site: Use openjdk-17 to generate javadoc with docker
omit d132471  Site: update PMC Chair
omit fabef05  Site: Add external resources section in the community page
omit 48f4bf8  Site: Add "calcite-clj - Use Calcite with Clojure" in talks 
section
omit d088cde  Site: Add Alessandro Solimando as committer
omit 664c4d3  Site: Fix typo in howto.md
omit dd34953  Site: Change the javadoc title to Apache Calcite API
omit d29aa09  Site: For tables that display results, center the content 
horizontally
omit da4fc3b  Site: Add syntax highlighting to SQL statements
omit cee9f67  Site: Improve HTML tables display & update CSV tutorial
 add fc87380  Site: Use openjdk-17 to generate javadoc with docker
 add df35274  [CALCITE-4952] Introduce a simplistic RelMetadataQuery option
 add 184c57a  Site: Change the javadoc title to Apache Calcite API
 add cc40a48  [CALCITE-4968] Use TOP N for MsSQL instead of FETCH without 
OFFSET
 add 8983e7e  [CALCITE-4963] Make it easier to implement interface 
SqlDialectFactory
 add b2baf2d  Remove unused package-

[calcite] branch master updated (f14cf4c -> dcbc493)

2022-03-10 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/calcite.git.


omit f14cf4c  [CALCITE-5031] Release Calcite 1.30.0
omit c3dbf52  [CALCITE-5019] Avoid multiple scans when table is 
ProjectableFilterableTable and projections and filters act on different columns
omit b125ab9  Following [CALCITE-1794], update DruidDateTimeUtils and plans 
in Druid adapter
omit 2621486  [CALCITE-5030] Upgrade jsonpath version from 2.4.0 to 2.7.0
omit 7a117c7  Site: Reorganise website update process in README & howto
omit 8b62479  [CALCITE-1794] Expressions with numeric comparisons are not 
simplified when CAST is present
omit c9ea3e6  [CALCITE-5025] Upgrade commons-io version from 2.4 to 2.11.0
omit 316e575  [CALCITE-5008] Ignore synthetic and static methods in 
MetadataDef
omit 296fc3e  [CALCITE-3673] ListTransientTable should not leave tables in 
the schema [CALCITE-4054] RepeatUnion containing a Correlate with a 
transientScan on its RHS causes NPE
omit 4fb1a42  [CALCITE-5011] CassandraAdapterDataTypesTest fails with 
initialization error
omit 2376a3a  [CALCITE-4912] Confusing javadoc of RexSimplify.simplify
omit 1f660d5  [CALCITE-4683] IN-list converted to JOIN throws type mismatch 
exception
omit c310f7c  [CALCITE-4323] If a view definition has an ORDER BY clause, 
retain the sort if the view is used in a query at top level
omit 43ed633  Fix typo in filterable-model.yaml
omit 4663f33  [CALCITE-4995] AssertionError caused by RelFieldTrimmer on 
SEMI/ANTI join
omit 4c73e85  [CALCITE-4988] ((A IS NOT NULL OR B) AND A IS NOT NULL) can't 
be simplify to (A IS NOT NULL) When A is deterministic
omit 07edf27  [CALCITE-5007] Upgrade H2 database version to 2.1.210
omit dc3e7d3  [CALCITE-5006] Gradle tasks for launching JDBC integration 
tests are not working
omit 9cecf84  [CALCITE-4997] Keep APPROX_COUNT_DISTINCT in some SqlDialects
omit b4a5768  [CALCITE-4996] In RelJson, add a readExpression method that 
can convert JSON to a RexNode expression
omit 67ba007  [CALCITE-4872] Add UNKNOWN value to enum SqlTypeName, 
distinct from the NULL type
omit 466fb42  [CALCITE-4702] Error when executing query with GROUP BY 
constant via JDBC adapter
omit bb89b92  [CALCITE-4994] SQL-to-RelNode conversion is slow if table 
contains hundreds of fields
omit 28f4195  [CALCITE-4980] Babel parser support MySQL NULL-safe equal 
operator '<=>' (xurenhe&)
omit 909e134  [CALCITE-4885] Fluent test fixtures so that dependent 
projects can write parser, validator and rules tests
omit 21b8852  [CALCITE-4991] Improve RuleEventLogger to also print input 
rels in FULL_PLAN mode
omit 48aa946  [CALCITE-4986] Make HepProgram thread-safe
omit 83c2911  Corrected json stream property name
omit 9d02d45  [CALCITE-4967] Support SQL hints for temporal table join
omit 4bc5cf1  [CALCITE-4965] IS NOT NULL failed in Elasticsearch Adapter
omit 8cd2414  [CALCITE-4977] Support Snapshot in RelMdColumnOrigins
omit 66835ab  [CALCITE-4901] JDBC adapter incorrectly adds ORDER BY columns 
to the SELECT list of generated SQL query
omit 3579eec  [CALCITE-4953] Deprecate TableAccessMap class
omit 628d7ac  Update javacc official website
omit 6d05e10  [CALCITE-3627] Incorrect null semantic for ROW function
omit 770561e  [CALCITE-4973] Upgrade log4j2 version to 2.17.1
omit 1ce4f05  [CALCITE-4960] Enable unit tests in Elasticsearch Adapter
omit d3d4821  Remove unused package-private RelNullShuttle class
omit 8916c41  [CALCITE-4963] Make it easier to implement interface 
SqlDialectFactory
omit 82be1ec  [CALCITE-4968] Use TOP N for MsSQL instead of FETCH without 
OFFSET
omit ffdb109  [CALCITE-4952] Introduce a simplistic RelMetadataQuery option
omit 4c7de7e  Site: Use openjdk-17 to generate javadoc with docker
omit d132471  Site: update PMC Chair
omit fabef05  Site: Add external resources section in the community page
omit 48f4bf8  Site: Add "calcite-clj - Use Calcite with Clojure" in talks 
section
omit d088cde  Site: Add Alessandro Solimando as committer
omit 664c4d3  Site: Fix typo in howto.md
omit dd34953  Site: Change the javadoc title to Apache Calcite API
omit d29aa09  Site: For tables that display results, center the content 
horizontally
omit da4fc3b  Site: Add syntax highlighting to SQL statements
omit cee9f67  Site: Improve HTML tables display & update CSV tutorial
 add fc87380  Site: Use openjdk-17 to generate javadoc with docker
 add df35274  [CALCITE-4952] Introduce a simplistic RelMetadataQuery option
 add 184c57a  Site: Change the javadoc title to Apache Calcite API
 add cc40a48  [CALCITE-4968] Use TOP N for MsSQL instead of FETCH without 
OFFSET
 add 8983e7e  [CALCITE-4963] Make it easier to implement interface 
SqlDia

svn commit: r52862 - /release/calcite/KEYS

2022-03-04 Thread zabetak
Author: zabetak
Date: Fri Mar  4 20:49:03 2022
New Revision: 52862

Log:
Add signing keys for Liya Fan and Jess Balint

Modified:
release/calcite/KEYS

Modified: release/calcite/KEYS
==============================================================================
--- release/calcite/KEYS (original)
+++ release/calcite/KEYS Fri Mar  4 20:49:03 2022
@@ -2161,3 +2161,119 @@ PF+gdmGg8CXa9qwpUTo2Z+kGCuWOXlVwu1vU5e53
 I0tCrK6brUzqJwPi8Vk=
 =++rr
 -----END PGP PUBLIC KEY BLOCK-----
+pub   rsa4096 2022-03-03 [SC]
+  F90563572D7E336E0A0AD0957F99D3070C037870
+uid   [ultimate] Liya Fan (CODE SIGNING KEY) 
+sig 37F99D3070C037870 2022-03-03  Liya Fan (CODE SIGNING KEY) 

+sub   rsa4096 2022-03-03 [E]
+sig  7F99D3070C037870 2022-03-03  Liya Fan (CODE SIGNING KEY) 

+
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+
+mQINBGIgv6wBEAC9IyK+1Md1KjvsW3AvumAzYmIiMIwvYvTQxy8bhnjZ66bAmDif
+Bi2HySQMPRMuTu2LLayAM6yoxh8h4O1cvx1QUQyGGXxPHIHZEWDPBHL26/UkUETz
+v5LRIaacoAeVSIkFSdXPvnhd+fRAwaXLudy+1gl1x+23UI4LRSEnpDOpvZpP+Byj
+TQl7MIQot9Iw2brztOCSdRAv8g2n5E1QEF81JhMukQgZG8Y5l+nHDufN3J+IO5lz
+o0auJOpJIBy/DnM/hmO6qlt9rvLWYHktUM5mTdzJdLEx/4f7PtEvB/pQusXQDk11
+qzklyNrOQ97mSc6oX9fbxaF0H+rxVYWxdgdam8wIi+yrsrop1OzrVDQD7ksXzckd
+9DzuPVj1Pez+MQQuHU/w1kyV3id46CvWdKYuiTGYXoGYbsbwc9nSa4fYoJ0ep0w+
+2xYdwsEswQbex29ZtLR34rZLp0V3jn8TWvrl936xlhx6KlwXss3JJZRPGh7dVGow
+SV3s1m5TbdyX7z3t1jXmEUvIYKtY07Se9vlRASmyaKV3E0P2OXrL3NQ76gGveAK8
+FWRolkypr6gxhEKVR7WkQ3N3jbjF6jLHt5Xw77Mquv+KTRELP4kaZlcLkIa+q6k/
+ZM1lDcSHmt1sCZIh3X6ydZHXpHSka8kCMQc1Rd2+AOiPw8+UeR+StGQBnwARAQAB
+tDBMaXlhIEZhbiAoQ09ERSBTSUdOSU5HIEtFWSkgPGxpeWFmYW5AYXBhY2hlLm9y
+Zz6JAlIEEwEIADwWIQT5BWNXLX4zbgoK0JV/mdMHDAN4cAUCYiC/rAIbAwULCQgH
+AgMiAgEGFQoJCAsCBBYCAwECHgcCF4AACgkQf5nTBwwDeHDseA/9G4ZyGu7ESbg/
+3mDmYDOKB58l+pCde2MOeam2gcCOe3zU5Rd3y6Bl10pLGUodydpBEQzHdUsSXYUM
+LjiX5VJlh46Ff46OvXHvl2XQWV8VsRF/CNGANV/DsVKCcsHY7JOd2euA9+5h91r7
+wHzZNayGiNkpI+4Ey1foidcPNY9aGPkuM5TEjVp5CdwQThD7kmMRAsG8k39Io+2s
+Zi1/n2Zlm6vzJAyXqR+PujWavuSgA/7aa63CzyAGTysyKZKk/fEZKhMXFKrfBcTD
+YYqDUYzvjTUfBHAd6AJRW3np3hhgLzVvrZQrdPcCKLAc8iXfUzbFiP17zg8LBKWF
+tye7cC5cez1YJS59hO49gbhVD+iZwP/e8HRPzbw+KRHSVjTmrBmD9Ev42W7g1nRF
+k2rSlrdkxclulRITAGY0TPxLB46AgS3lkBgYE16lkkWUR30u4H4efXIV8sIuI2vC
+3ZE+DPGtAqpwF23bpyLOnxtrBeB2vhfcXwCZOew/D3crN5Dr15N7qdXB0NIbgUCi
+1S3+H/c31XxrFfGil9MW6845iF65O63joEMLhlMVUdoI+S5nVFMpOjkPn+BuQTZf
+Zz/GcLsipGdV4T6uwKR0bReljcQkPh7KKcUysQYPTqrerOL6/ry6cmDHXfM85jxZ
+TYGpObbELstRHOmAzwPDSapfxuraLZC5Ag0EYiC/rAEQAMmKOEnMgeG9/acExhDE
+XpiktQO5mjLn7lV5h1N3ZF1eMquSiRZO+HaGHPlUxhgncNgkPlZRK6v+x3dPq3iC
++AdWCjb/8zf43S2pjiFJUNxKsI+iuWQNb+eLW2elqwoxs2Og0AMyOcPYcE/6jgLK
+/mWN03b4xoA6EjSYWItKInt2T7UYj/mswPiPLtY9uMnuCUB+MtlanSinwoHD7EfS
+lbNAV0OwZItAYiunr3q7Vqhd8YybdlM64aHh+oJgRLAPHRBitiZRG4NOb6ps+xyK
+QnYQz1UkJ5nfC6qHoVZgSW0iRV/4S1Kphjn1oKqd/d67vxn6CzVTXluhOxk5b2Es
+1cEQEk99PjgimK1m5DEvy3jWuZhRRe2Ucr/GvjMll2kwLUT7pH+CmhwlSAetHhTZ
+J8UVCNDwdKbgo2aNFq8q4tdSfuVg6iigcEY2P71SdYWTcWCEt8m4WspJI+JlXC9U
+9hSExdX+WFk62vtuxaLhrHAKhIBnh/JE/7ICtF8dqkxpVfV+4eDjljhIoCflJQub
+acbPVvDEN2qqqkp7mQMpD+T+dzscOKRiE29knZBWeTjR6pRy2QJQPPIoPiXFIJKU
+2ZyWjsi3t8Dwh1Y0PGXR5+p6lMTc0FU0yO518ILXG1MhWBG4mtm5+PLWC8ELPKKX
+O69LzJ5CkOssJc0KAxsdUIhrABEBAAGJAjYEGAEIACAWIQT5BWNXLX4zbgoK0JV/
+mdMHDAN4cAUCYiC/rAIbDAAKCRB/mdMHDAN4cL0lEACJww3Kso6hb+azbnnaW/hm
+SOxO5I9T8zF18wISLeSBvs7+Fppm0bqPSrrol5TW6xo/PdCco6ggZWBjkHoeFFQU
+r+lubT9fb/mHifSB6t3wjE2WzcUba3arOeMHb6e+yXqd+zV6imJneG/Db6ZMV4QX
+lKLwu+CoOXY2ZpO/CGFhseOJuAEnscP2wcmdzfJJRfg7DxAfRyC3VC6PrjOi/sqh
+QSrqckWNxaFLIOKuG+U5MhrMm9KGxjrpOD8wZDQuwxVMXNV2qvL8b7HOwBnO+3A2
+gfvkdjpkNYdjtaoAbozlqfK0VT/DkOEnEhIZfTCFoeOvHHFHgxGDBrAhZ9iwkBYV
+fPS1a70IfSbPuaTi86nBExMPFAL+YGqGeRO5QOZ+UqXVpuHt+eGTP2Pc0SYrsiFb
+iG4dyFYfYjTGlJ5vrAL+nEAsaDUYIuDogCluG8TmJuJfiEOmqvYMSKNn7tYzz86J
+/10ff8kJexOHchHMQTalHIGMFP5DWDmBLo7Ln7IT9OownU5o1m6Nt1kvatPjdaEj
+/TkwdwFKooDa0ymYGzObwx4RoRRSWQq7mW3iHv7Qg7ie2fUb9cq4RcUbTEWOQq7T
+6HacBjroocU1+U97e1h1TE7PhRvSqHkoVc6qTLX5XfSv/+C5GHWBVg6eeJUuIdG+
+hvIdeAZvzS6+7+NJgkL+Iw==
+=ZkTz
+-----END PGP PUBLIC KEY BLOCK-----
+pub   rsa4096 2014-06-24 [SC]
+  EC3E70DF53363C76AAA7BEC0C09BA4E5B0A8BB04
+uid   [ultimate] Jess Balint 
+sig 3C09BA4E5B0A8BB04 2014-06-24  Jess Balint 
+sub   rsa4096 2014-06-24 [E]
+sig  C09BA4E5B0A8BB04 2014-06-24  Jess Balint 
+
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+
+mQINBFOp7IIBEADeVl1R4teWvWGl42mTXTnjvGsKKBq8NPxQxI3NkOHUBeaMemoY
+qiy9mr/p2Y85wqsxh2oieoYbnV7FQqLI6my9x42K3XM3nmwH2N0JFRqU4KEetemu
+CG9On8i7q1mfYz7rYYqcSwygjUGVfwqSH+NkirrRGz9gLTHav2s7/EzenuJz4XFV
+WbJyS8qgAhcnBG0D31dB/cfLjwaJa4oJ8Hd9J06kK1fvGraHZjTAbm5+8QGhUviN
+FGFnyI332RL4USWJ2o12/TOIjHh1gX/SDBQuSk4qqyZ0KR06Eka5fdepPIJpIuua
+dHO75QUB4rQ9Y/iY6aukFX/CUwwFNj743r+KBQzvXpkcDrMFWllBPvZ+mRGxtiD6
+pwLtb1JsqV2OkUcqRpp1+tbGusE6kNLkXR3F9URRJ3WUoFh/UFukulTOUI6phk7K
+sX7yPDU+NF/CeDCv1zQqyKXwPRkaIyIMNg32ODOAiHCsUws1ApWR5v6SbUx8JzHs
+P2Fke97C7Yb8UYirFZVZzPbI3ydHCInccRiW9FMmk9guyICqi0soXPBR1r8sa+ip
+JgKRQ6VwOUgz4d73AXR767qFDGURC/uy/Fyy+P+8MjSaEe4JVo/6xfZkmmwlnwA3
+UiV1FoJnioamEm3dgAHESLe9dX

[calcite] branch master updated (0e2dada -> 6febf78)

2022-03-03 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/calcite.git.


from 0e2dada  Site: Reorganise website update process in README & howto
 add 6febf78  [CALCITE-5030] Upgrade jsonpath version from 2.4.0 to 2.7.0

No new revisions were added by this update.

Summary of changes:
 gradle.properties | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


[calcite] branch master updated: Site: Reorganise website update process in README & howto

2022-03-03 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/master by this push:
 new 0e2dada  Site: Reorganise website update process in README & howto
0e2dada is described below

commit 0e2dada65ceee964972afdfc10d29bb15ad6705c
Author: Stamatis Zampetakis 
AuthorDate: Tue Feb 1 10:51:30 2022 +0100

Site: Reorganise website update process in README & howto

1. Highlight the role & existence of two repos at the beginning of
README.
2. Provide a high-level overview of the update procedure early on.
3. Move RM related instructions from README to the appropriate howto
sections.
4. Remove git reset/rebase related commands for syncing master & site
branches from the beginning of the release process because it is too
early to rebase then. Mention rebase later towards the end of the
process.

Close apache/calcite#2708
---
 site/README.md  | 38 +++---
 site/_docs/howto.md |  7 ---
 2 files changed, 19 insertions(+), 26 deletions(-)

diff --git a/site/README.md b/site/README.md
index 90c5fa2..ec9c092 100644
--- a/site/README.md
+++ b/site/README.md
@@ -19,10 +19,21 @@ limitations under the License.
 
 # Apache Calcite docs site
 
-This directory contains the code for the Apache Calcite web site,
-[calcite.apache.org](https://calcite.apache.org/).
-
-You can build the site manually using your environment or use the docker 
compose file.
+This directory contains the sources/templates for generating the Apache 
Calcite website,
+[calcite.apache.org](https://calcite.apache.org/). The actual generated 
content of the website
+is present in the [calcite-site](https://github.com/apache/calcite-site) 
repository.
+
+We want to deploy project changes (for example, new committers, PMC members or 
upcoming talks)
+immediately, but we want to deploy documentation of project features only when 
that feature appears
+in a release.
+
+The procedure for deploying changes to the website is outlined below:
+1. Push the commit with the changes to the `master` branch of this repository.
+2. Cherry-pick the commit from the `master` branch to the `site` branch of 
this repository.
+3. Checkout the `site` branch and build the website either 
[manually](#manually) or using
+[docker-compose](#using-docker) (preferred).
+4. Commit the generated content to the `master` branch of the `calcite-site` 
repository following
+the [Pushing to site](#pushing-to-site) instructions.
 
 ## Manually
 
@@ -117,22 +128,3 @@ generate files to `site/target/avatica`, which becomes an
 [avatica](https://calcite.apache.org/avatica)
 sub-directory when deployed. See
 [Avatica site README](../avatica/site/README.md).
-
-## Site branch
-
-We want to deploy project changes (for example, new committers, PMC
-members or upcoming talks) immediately, but we want to deploy
-documentation of project features only when that feature appears in a
-release. For this reason, we generally edit the site on the "site" git
-branch.
-
-Before making a release, release manager must ensure that "site" is in
-sync with "master". Immediately after a release, the release manager
-will publish the site, including all of the features that have just
-been released. When making an edit to the site, a Calcite committer
-must commit the change to the git "master" branch (as well as
-git, to publish the site, of course). If the edit is to appear
-on the site immediately, the committer should then cherry-pick the
-change into the "site" branch.  If there have been no feature-related
-changes on the site since the release, then "site" should be a
-fast-forward merge of "master".
diff --git a/site/_docs/howto.md b/site/_docs/howto.md
index ecc9d50..c5d1df1 100644
--- a/site/_docs/howto.md
+++ b/site/_docs/howto.md
@@ -694,8 +694,7 @@ Before you start:
 * Set up signing keys as described above.
 * Make sure you are using JDK 8 (not 9 or 10).
 * Make sure `master` branch and `site` branch are in sync, i.e. there is no 
commit on `site` that has not
-  been applied also to `master`.
-  This can be achieved by doing `git switch site && git rebase --empty=drop 
master && git switch master && git reset --hard site`.
+  been applied also to `master`. If you spot missing commits then port them to 
`master`.
 * Check that `README` and `site/_docs/howto.md` have the correct version 
number.
 * Check that `site/_docs/howto.md` has the correct Gradle version.
 * Check that `NOTICE` has the current copyright year.
@@ -949,7 +948,9 @@ Add a release announcement by copying
 Generate the javadoc, and [preview](http://localhost:4000/news/) the site by 
following the
 instructions in [site/README.md]({{ site.sourceRoot }}/site/README.md). Ensure 
the a

[calcite] branch master updated: [CALCITE-5025] Upgrade commons-io version from 2.4 to 2.11.0

2022-02-28 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/master by this push:
 new 20ca53c  [CALCITE-5025] Upgrade commons-io version from 2.4 to 2.11.0
20ca53c is described below

commit 20ca53c962b1642ac4cda32ffdf1294042e951a8
Author: Scott Reynolds 
AuthorDate: Sat Feb 26 18:51:48 2022 -0800

[CALCITE-5025] Upgrade commons-io version from 2.4 to 2.11.0

commons-io versions before 2.7 suffer from CVE-2021-29425 which allows
to traverse into the parent directory.

Update to a more recent version to avoid the aforementioned security
vulnerability and benefit from the other improvements in this library.

Close apache/calcite#2734
---
 gradle.properties | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/gradle.properties b/gradle.properties
index b010dca..76e3b7e 100644
--- a/gradle.properties
+++ b/gradle.properties
@@ -87,7 +87,7 @@ cassandra-unit.version=4.3.1.0
 chinook-data-hsqldb.version=0.1
 commons-codec.version=1.13
 commons-dbcp2.version=2.6.0
-commons-io.version=2.4
+commons-io.version=2.11.0
 commons-lang3.version=3.8
 commons-pool2.version=2.6.2
 dropwizard-metrics.version=4.0.5


[hive] branch master updated: HIVE-25970: Missing messages in HS2 operation logs (Stamatis Zampetakis, reviewed by Zoltan Haindrich)

2022-02-25 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new d3cd596  HIVE-25970: Missing messages in HS2 operation logs (Stamatis 
Zampetakis, reviewed by Zoltan Haindrich)
d3cd596 is described below

commit d3cd596aa15ebedd58f99628d43a03eb2f5f3909
Author: Stamatis Zampetakis 
AuthorDate: Wed Feb 23 16:02:27 2022 +0100

HIVE-25970: Missing messages in HS2 operation logs (Stamatis Zampetakis, 
reviewed by Zoltan Haindrich)

Revert HIVE-22753 (commit 6a5c0cd04a2e88a545a96d10a942c86b2be18daa).

Preventing the creation of the appender (by returning null) leads to
the message triggering the creation to be lost forever. Moreover, the
memory leak that was observed in HIVE-22753 is no longer feasible with
the fix for HIVE-24590 so the revert is completely safe.

Closes #3048
---
 .../ql/log/HushableRandomAccessFileAppender.java   | 32 --
 1 file changed, 32 deletions(-)

diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/log/HushableRandomAccessFileAppender.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/log/HushableRandomAccessFileAppender.java
index 7e60435..0ff66df 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/log/HushableRandomAccessFileAppender.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/log/HushableRandomAccessFileAppender.java
@@ -20,11 +20,7 @@ package org.apache.hadoop.hive.ql.log;
 import java.io.Serializable;
 import java.util.HashMap;
 import java.util.Map;
-import java.util.concurrent.TimeUnit;
 
-import com.google.common.cache.CacheBuilder;
-import com.google.common.cache.CacheLoader;
-import com.google.common.cache.LoadingCache;
 import org.apache.logging.log4j.core.Filter;
 import org.apache.logging.log4j.core.Layout;
 import org.apache.logging.log4j.core.LogEvent;
@@ -51,17 +47,6 @@ import org.apache.logging.log4j.core.util.Integers;
 public final class HushableRandomAccessFileAppender extends
 AbstractOutputStreamAppender {
 
-  private static final LoadingCache CLOSED_FILES =
-  CacheBuilder.newBuilder().maximumSize(1000)
-  .expireAfterWrite(1, TimeUnit.SECONDS)
-  .build(new CacheLoader() {
-@Override
-public String load(String key) throws Exception {
-  return key;
-}
-  });
-
-
   private final String fileName;
   private Object advertisement;
   private final Advertiser advertiser;
@@ -86,7 +71,6 @@ public final class HushableRandomAccessFileAppender extends
   @Override
   public void stop() {
 super.stop();
-CLOSED_FILES.put(fileName, fileName);
 if (advertiser != null) {
   advertiser.unadvertise(advertisement);
 }
@@ -188,22 +172,6 @@ public final class HushableRandomAccessFileAppender extends
   + name);
   return null;
 }
-
-/**
- * In corner cases (e.g exceptions), there seem to be some race between
- * com.lmax.disruptor.BatchEventProcessor and HS2 thread which is actually
- * stopping the logs. Because of this, same filename is recreated and
- * stop() would never be invoked on that instance, causing a mem leak.
- * To prevent same file being recreated within very short time,
- * CLOSED_FILES are tracked in cache with TTL of 1 second. This
- * also helps in avoiding the stale directories created.
- */
-if (CLOSED_FILES.getIfPresent(fileName) != null) {
-  // Do not create another file, which got closed in last 5 seconds
-  LOGGER.error(fileName + " was closed recently.");
-  return null;
-}
-
 if (layout == null) {
   layout = PatternLayout.createDefaultLayout();
 }
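For illustration, the TTL-based dedup that this revert removes behaved roughly like the sketch below (plain java.util in place of the Guava cache used in the actual code; class and method names are hypothetical). Refusing creation inside the TTL window is precisely what caused the triggering log event to be lost forever:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the closed-file guard removed by the revert:
// if a file name was closed within the TTL, appender creation is refused,
// and the log event that triggered the creation is silently dropped.
public class ClosedFileGuard {
    private final long ttlMillis;
    private final Map<String, Long> closedAt = new HashMap<>();

    public ClosedFileGuard(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public void markClosed(String file, long nowMillis) {
        closedAt.put(file, nowMillis);
    }

    /** Returns false (creation refused, message lost) if closed within TTL. */
    public boolean mayCreate(String file, long nowMillis) {
        Long t = closedAt.get(file);
        return t == null || nowMillis - t >= ttlMillis;
    }

    public static void main(String[] args) {
        ClosedFileGuard guard = new ClosedFileGuard(1000);
        guard.markClosed("op.log", 0);
        System.out.println(guard.mayCreate("op.log", 500));   // false: event dropped
        System.out.println(guard.mayCreate("op.log", 1500));  // true
    }
}
```

With the HIVE-24590 fix preventing the original leak, this guard is no longer needed, which is why the revert is safe.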


[hive] branch master updated: HIVE-25938: Print excluded rules from CBO (Alessandro Solimando, reviewed by Stamatis Zampetakis, John Sherman)

2022-02-22 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 3c532c2  HIVE-25938: Print excluded rules from CBO (Alessandro 
Solimando, reviewed by Stamatis Zampetakis, John Sherman)
3c532c2 is described below

commit 3c532c2cd2603edc60c721282d45390e910a0358
Author: Alessandro Solimando 
AuthorDate: Tue Feb 8 17:09:19 2022 +0100

HIVE-25938: Print excluded rules from CBO (Alessandro Solimando, reviewed 
by Stamatis Zampetakis, John Sherman)

Closes #3011
---
 .../apache/hadoop/hive/ql/exec/ExplainTask.java|  39 +--
 .../hadoop/hive/ql/parse/CalcitePlanner.java   |  24 -
 .../queries/clientpositive/excluded_rule_explain.q |  11 ++
 .../llap/excluded_rule_explain.q.out   | 112 +
 .../llap/rule_exclusion_config.q.out   |   8 ++
 5 files changed, 184 insertions(+), 10 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java
index c59f44f..59f9044 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java
@@ -101,8 +101,12 @@ public class ExplainTask extends Task 
implements Serializable {
   private static final Logger LOG = 
LoggerFactory.getLogger(ExplainTask.class.getName());
 
   public static final String STAGE_DEPENDENCIES = "STAGE DEPENDENCIES";
+  private static final String EXCLUDED_RULES_PREFIX = "Excluded rules: ";
   private static final long serialVersionUID = 1L;
   public static final String EXPL_COLUMN_NAME = "Explain";
+  private static final String CBO_INFO_JSON_LABEL = "cboInfo";
+  private static final String CBO_PLAN_JSON_LABEL = "CBOPlan";
+  private static final String CBO_PLAN_TEXT_LABEL = "CBO PLAN:";
   private final Set> visitedOps = new HashSet>();
   private boolean isLogical = false;
 
@@ -152,15 +156,22 @@ public class ExplainTask extends Task 
implements Serializable {
 return outJSONObject;
   }
 
-  public JSONObject getJSONCBOPlan(PrintStream out, ExplainWork work) throws 
Exception {
+  public JSONObject getJSONCBOPlan(PrintStream out, ExplainWork work) {
 JSONObject outJSONObject = new JSONObject(new LinkedHashMap<>());
 boolean jsonOutput = work.isFormatted();
 String cboPlan = work.getCboPlan();
 if (cboPlan != null) {
+  String ruleExclusionRegex = getRuleExcludedRegex();
   if (jsonOutput) {
-outJSONObject.put("CBOPlan", cboPlan);
+outJSONObject.put(CBO_PLAN_JSON_LABEL, cboPlan);
+if (!ruleExclusionRegex.isEmpty()) {
+  outJSONObject.put(CBO_INFO_JSON_LABEL, EXCLUDED_RULES_PREFIX + 
ruleExclusionRegex);
+}
   } else {
-out.println("CBO PLAN:");
+if (!ruleExclusionRegex.isEmpty()) {
+  out.println(EXCLUDED_RULES_PREFIX + ruleExclusionRegex + "\n");
+}
+out.println(CBO_PLAN_TEXT_LABEL);
 out.println(cboPlan);
   }
 }
@@ -272,6 +283,8 @@ public class ExplainTask extends Task 
implements Serializable {
   boolean jsonOutput, boolean isExtended, boolean appendTaskType, String 
cboInfo,
   String cboPlan, String optimizedSQL, String stageIdRearrange) throws 
Exception {
 
+String ruleExclusionRegex = getRuleExcludedRegex();
+
 // If the user asked for a formatted output, dump the json output
 // in the output stream
 JSONObject outJSONObject = new JSONObject(new LinkedHashMap<>());
@@ -282,9 +295,15 @@ public class ExplainTask extends Task 
implements Serializable {
 
 if (cboPlan != null) {
   if (jsonOutput) {
-outJSONObject.put("CBOPlan", cboPlan);
+outJSONObject.put(CBO_PLAN_JSON_LABEL, cboPlan);
+if (!ruleExclusionRegex.isEmpty()) {
+  outJSONObject.put(CBO_INFO_JSON_LABEL, EXCLUDED_RULES_PREFIX + 
ruleExclusionRegex);
+}
   } else {
-out.print("CBO PLAN:");
+if (!ruleExclusionRegex.isEmpty()) {
+  out.println(EXCLUDED_RULES_PREFIX + ruleExclusionRegex);
+}
+out.print(CBO_PLAN_TEXT_LABEL);
 out.println(cboPlan);
   }
 }
@@ -327,6 +346,10 @@ public class ExplainTask extends Task 
implements Serializable {
 }
 
 if (!suppressOthersForVectorization) {
+  if (!jsonOutput && !ruleExclusionRegex.isEmpty()) {
+out.println(EXCLUDED_RULES_PREFIX + ruleExclusionRegex + "\n");
+  }
+
   JSONObject jsonDependencies = outputDependencies(out, jsonOutput, 
appendTaskType, ordered);
 
   if (out != null) {
@@ -335,7 +358,7 @@ public class ExplainTask extends Task 
implements Serializable {
 
   if (jsonOut

[hive] branch master updated: HIVE-25947: Compactor job queue cannot be set per table via compactor.mapred.job.queue.name (Stamatis Zampetakis, reviewed by Alessandro Solimando, Denys Kuzmenko)

2022-02-18 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 03ac88f  HIVE-25947: Compactor job queue cannot be set per table via 
compactor.mapred.job.queue.name (Stamatis Zampetakis, reviewed by Alessandro 
Solimando, Denys Kuzmenko)
03ac88f is described below

commit 03ac88f3fc619fbc521b256c61b887dd2e291d60
Author: Stamatis Zampetakis 
AuthorDate: Mon Jan 31 11:45:17 2022 +0100

HIVE-25947: Compactor job queue cannot be set per table via 
compactor.mapred.job.queue.name (Stamatis Zampetakis, reviewed by Alessandro 
Solimando, Denys Kuzmenko)

Adapt the MR compactor to accept all the properties below:
* compactor.mapred.job.queue.name
* compactor.mapreduce.job.queuename
* compactor.hive.compactor.job.queue

for specifying the job queue per table and per compaction. The change
restores backward compatibility and also enables the use of the non
deprecated MR properties.

Add unit tests defining and guarding the precedence among the
aforementioned properties and the different granularity at which a
queue can be defined.

Closes #3027
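The precedence rule described above can be sketched as follows (a simplified, hypothetical stand-in: a Map replaces the layered table/compaction/global configuration that the real code resolves through HiveConf; the property names are the ones listed in the patch):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

// Sketch of the compactor job-queue precedence from the commit message:
// the first property in the list that has a non-empty value wins, and
// the global default is used only when none is set.
public class CompactorQueueResolver {
    private static final List<String> QUEUE_PROPERTIES = Arrays.asList(
        "compactor.hive.compactor.job.queue",
        "compactor.mapreduce.job.queuename",
        "compactor.mapred.job.queue.name");

    public static String resolveQueue(Map<String, String> props, String globalDefault) {
        for (String key : QUEUE_PROPERTIES) {
            String v = props.get(key);
            if (v != null && !v.isEmpty()) {
                return v;  // higher-precedence property wins
            }
        }
        return globalDefault;
    }

    public static void main(String[] args) {
        Map<String, String> props = Map.of(
            "compactor.mapred.job.queue.name", "legacy",
            "compactor.mapreduce.job.queuename", "modern");
        System.out.println(resolveQueue(props, ""));  // modern
    }
}
```

Keeping the deprecated `mapred.*` name last in the list is what restores backward compatibility without letting it shadow the non-deprecated properties.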
---
 .../hadoop/hive/ql/txn/compactor/CompactorMR.java  |   4 +-
 .../hive/ql/txn/compactor/CompactorUtil.java   |  45 ++--
 .../TestCompactorMRJobQueueConfiguration.java  | 262 +
 3 files changed, 292 insertions(+), 19 deletions(-)

diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java 
b/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java
index 01fdffa..64a6e4f 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java
@@ -147,8 +147,8 @@ public class CompactorMR {
   overrideTblProps(job, t.getParameters(), ci.properties);
 }
 
-String queueName = HiveConf.getVar(job, ConfVars.COMPACTOR_JOB_QUEUE);
-if (queueName != null && queueName.length() > 0) {
+String queueName = CompactorUtil.getCompactorJobQueueName(conf, ci, t);
+if (!queueName.isEmpty()) {
   job.setQueueName(queueName);
 }
 
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorUtil.java 
b/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorUtil.java
index 3644f9e..43781c1 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorUtil.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorUtil.java
@@ -22,16 +22,29 @@ import org.apache.hadoop.hive.conf.HiveConf;
 import org.apache.hadoop.hive.metastore.api.Table;
 import org.apache.hadoop.hive.metastore.txn.CompactionInfo;
 
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.ForkJoinPool;
 import java.util.concurrent.ForkJoinWorkerThread;
+import java.util.function.Function;
 
 import static java.lang.String.format;
 
 public class CompactorUtil {
   public static final String COMPACTOR = "compactor";
-  static final String COMPACTOR_PREFIX = "compactor.";
-  static final String MAPRED_QUEUE_NAME = "mapred.job.queue.name";
+  /**
+   * List of accepted properties for defining the compactor's job queue.
+   *
+   * The order is important and defines which property has precedence over the 
other if multiple properties are defined
+   * at the same time.
+   */
+  private static final List<String> QUEUE_PROPERTIES = Arrays.asList(
+  "compactor." + HiveConf.ConfVars.COMPACTOR_JOB_QUEUE.varname,
+  "compactor.mapreduce.job.queuename",
+  "compactor.mapred.job.queue.name"
+  );
 
  public interface ThrowingRunnable<E extends Exception> {
 void run() throws E;
@@ -62,31 +75,29 @@ public class CompactorUtil {
* @param conf global hive conf
* @param ci compaction info object
* @param table instance of table
-   * @return name of the queue, can be null
+   * @return name of the queue
*/
   static String getCompactorJobQueueName(HiveConf conf, CompactionInfo ci, 
Table table) {
 // Get queue name from the ci. This is passed through
 // ALTER TABLE table_name COMPACT 'major' WITH OVERWRITE 
TBLPROPERTIES('compactor.hive.compactor.job.queue'='some_queue')
+List<Function<String, String>> propertyGetters = new ArrayList<>(2);
 if (ci.properties != null) {
   StringableMap ciProperties = new StringableMap(ci.properties);
-  String queueName = ciProperties.get(COMPACTOR_PREFIX + 
MAPRED_QUEUE_NAME);
-  if (queueName != null && queueName.length() > 0) {
-return queueName;
-  }
+  propertyGetters.add(ciProperties::get);
 }
-
-// Get queue name from the table properties
-String queueName = table.getParameters().get(COMPACTOR_PREFIX + 
MAPRED_Q

[calcite] branch master updated (5b2de4e -> 5111f0f)

2022-02-14 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/calcite.git.


from 5b2de4e  [CALCITE-4997] Keep APPROX_COUNT_DISTINCT in some SqlDialects
 add 89b7091  [CALCITE-5006] Gradle tasks for launching JDBC integration 
tests are not working
 add 5111f0f  [CALCITE-5007] Upgrade H2 database version to 2.1.210

No new revisions were added by this update.

Summary of changes:
 core/build.gradle.kts | 11 +++
 gradle.properties |  2 +-
 2 files changed, 8 insertions(+), 5 deletions(-)


[hive] branch master updated (a96c697 -> 2b7c3a8)

2022-02-11 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from a96c697  HIVE-25583: Support parallel load for HastTables - Interfaces 
(#2999) (Panagiotis Garefalakis reviewed by Ramesh Kumar)
 add 2b7c3a8  HIVE-25945: Upgrade H2 database version to 2.1.210 (Stamatis 
Zampetakis, reviewed by Zhihua Deng)

No new revisions were added by this update.

Summary of changes:
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


[hive] branch master updated (8a718a7 -> 0d1cfff)

2022-02-09 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 8a718a7  HIVE-25898: Compaction txn heartbeating after Worker timeout 
(Laszlo Vegh, reviewed by Denys Kuzmenko, Stamatis Zampetakis)
 add 0d1cfff  HIVE-25919: ClassCastException when pushing boolean column 
predicate in HBaseStorageHandler (Stamatis Zampetakis, reviewed by Laszlo Bodor)

No new revisions were added by this update.

Summary of changes:
 .../test/queries/positive/hbase_ppd_boolean_cols.q |  18 +++
 .../results/positive/hbase_ppd_boolean_cols.q.out  | 136 +
 .../hive/ql/index/IndexPredicateAnalyzer.java  |   6 +-
 3 files changed, 159 insertions(+), 1 deletion(-)
 create mode 100644 
hbase-handler/src/test/queries/positive/hbase_ppd_boolean_cols.q
 create mode 100644 
hbase-handler/src/test/results/positive/hbase_ppd_boolean_cols.q.out


[calcite] branch master updated: [CALCITE-4702] Error when executing query with GROUP BY constant via JDBC adapter

2022-02-03 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/master by this push:
 new 1d4f1b3  [CALCITE-4702] Error when executing query with GROUP BY 
constant via JDBC adapter
1d4f1b3 is described below

commit 1d4f1b394bfdba03c5538017e12ab2431b137ca9
Author: Soumyakanti Das 
AuthorDate: Wed Aug 11 10:00:46 2021 -0700

[CALCITE-4702] Error when executing query with GROUP BY constant via JDBC 
adapter

Add a new method in SqlDialect controlling whether GROUP BY using
literals is supported. Note that the whole Postgres family returns
false as a precaution; some literals may be supported by Postgres or
some derivation of it. We agreed that the extra complexity needed to
handle those special cases was not worth it, so we decided to return
false for all kinds of literals.

Introduce a new rule to rewrite the GROUP BY using an inner join with a
dummy table for those dialects that do not support literals.

Add a rule-based transformation step at the beginning of the rel-to-SQL
conversion and ensure callers pass through it. This allows keeping the
aggregate constant transformation in a single place.

Add tests with GROUP BY and different types of literals.

Close apache/calcite#2482
---
 .../adapter/jdbc/JdbcToEnumerableConverter.java|   2 +-
 .../apache/calcite/rel/rel2sql/SqlImplementor.java |  17 ++-
 .../AggregateProjectConstantToDummyJoinRule.java   | 142 +
 .../java/org/apache/calcite/sql/SqlDialect.java|  20 +++
 .../calcite/sql/dialect/InformixSqlDialect.java|   8 ++
 .../calcite/sql/dialect/PostgresqlSqlDialect.java  |   4 +
 .../calcite/sql/dialect/RedshiftSqlDialect.java|   8 ++
 .../calcite/rel/rel2sql/RelToSqlConverterTest.java |  32 -
 .../org/apache/calcite/test/RelOptRulesTest.java   |  28 
 .../org/apache/calcite/test/RelOptRulesTest.xml|  75 +++
 .../adapter/spark/JdbcToSparkConverter.java|   2 +-
 11 files changed, 334 insertions(+), 4 deletions(-)

diff --git 
a/core/src/main/java/org/apache/calcite/adapter/jdbc/JdbcToEnumerableConverter.java
 
b/core/src/main/java/org/apache/calcite/adapter/jdbc/JdbcToEnumerableConverter.java
index 4d2961c..313a896 100644
--- 
a/core/src/main/java/org/apache/calcite/adapter/jdbc/JdbcToEnumerableConverter.java
+++ 
b/core/src/main/java/org/apache/calcite/adapter/jdbc/JdbcToEnumerableConverter.java
@@ -348,7 +348,7 @@ public class JdbcToEnumerableConverter
 new JdbcImplementor(dialect,
 (JavaTypeFactory) getCluster().getTypeFactory());
 final JdbcImplementor.Result result =
-jdbcImplementor.visitInput(this, 0);
+jdbcImplementor.visitRoot(this.getInput());
 return result.asStatement().toSqlString(dialect);
   }
 }
diff --git 
a/core/src/main/java/org/apache/calcite/rel/rel2sql/SqlImplementor.java 
b/core/src/main/java/org/apache/calcite/rel/rel2sql/SqlImplementor.java
index 1773a28..48575ba 100644
--- a/core/src/main/java/org/apache/calcite/rel/rel2sql/SqlImplementor.java
+++ b/core/src/main/java/org/apache/calcite/rel/rel2sql/SqlImplementor.java
@@ -19,6 +19,8 @@ package org.apache.calcite.rel.rel2sql;
 import org.apache.calcite.linq4j.Ord;
 import org.apache.calcite.linq4j.tree.Expressions;
 import org.apache.calcite.plan.RelOptUtil;
+import org.apache.calcite.plan.hep.HepPlanner;
+import org.apache.calcite.plan.hep.HepProgramBuilder;
 import org.apache.calcite.rel.RelCollation;
 import org.apache.calcite.rel.RelFieldCollation;
 import org.apache.calcite.rel.RelNode;
@@ -29,6 +31,7 @@ import org.apache.calcite.rel.core.CorrelationId;
 import org.apache.calcite.rel.core.JoinRelType;
 import org.apache.calcite.rel.core.Project;
 import org.apache.calcite.rel.core.Window;
+import org.apache.calcite.rel.rules.AggregateProjectConstantToDummyJoinRule;
 import org.apache.calcite.rel.type.RelDataType;
 import org.apache.calcite.rel.type.RelDataTypeFactory;
 import org.apache.calcite.rel.type.RelDataTypeField;
@@ -161,8 +164,20 @@ public abstract class SqlImplementor {
 
   /** Visits a relational expression that has no parent. */
   public final Result visitRoot(RelNode r) {
+RelNode best;
+if (!this.dialect.supportsGroupByLiteral()) {
+  HepProgramBuilder hepProgramBuilder = new HepProgramBuilder();
+  hepProgramBuilder.addRuleInstance(
+  AggregateProjectConstantToDummyJoinRule.Config.DEFAULT.toRule());
+  HepPlanner hepPlanner = new HepPlanner(hepProgramBuilder.build());
+
+  hepPlanner.setRoot(r);
+  best = hepPlanner.findBestExp();
+} else {
+  best = r;
+}
 try {
-  return visitInput(holder(r), 0);
+  return visitInput(holder(best), 0);
 } catch (Error | RuntimeException e) {
   throw Util.throwAsRuntime("Error while converting RelNode to SqlNo

[hive] branch master updated: Revert "HIVE-25887 - Add external_table_concatenate.q to testconfiguration.properties. (#2959) (Harish Jaiprakash, reviewed by Naveen Gangam)"

2022-02-02 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 8a5be52  Revert "HIVE-25887 - Add external_table_concatenate.q to 
testconfiguration.properties. (#2959) (Harish Jaiprakash, reviewed by Naveen 
Gangam)"
8a5be52 is described below

commit 8a5be527737d57da79b659223e8f83cbec64ce54
Author: Stamatis Zampetakis 
AuthorDate: Wed Feb 2 11:56:52 2022 +0100

Revert "HIVE-25887 - Add external_table_concatenate.q to 
testconfiguration.properties. (#2959) (Harish Jaiprakash, reviewed by Naveen 
Gangam)"

This reverts commit 28dc8c17a49f861ec03689369c981097b0daa5d6.
---
 itests/src/test/resources/testconfiguration.properties | 1 -
 1 file changed, 1 deletion(-)

diff --git a/itests/src/test/resources/testconfiguration.properties 
b/itests/src/test/resources/testconfiguration.properties
index 6687e3c..d12eca6 100644
--- a/itests/src/test/resources/testconfiguration.properties
+++ b/itests/src/test/resources/testconfiguration.properties
@@ -65,7 +65,6 @@ minillap.query.files=\
   except_distinct.q,\
   explainanalyze_acid_with_direct_insert.q,\
   explainuser_2.q,\
-  external_table_concatenate.q,\
   external_table_purge.q,\
   external_table_with_space_in_location_path.q,\
   file_with_header_footer.q,\


[hive] branch master updated (684ab88 -> d2c6769)

2022-02-01 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 684ab88  HIVE-25902: Vectorized reading of Parquet tables via Iceberg 
(Adam Szita, reviewed by Marton Bod and Peter Vary)
 add d2c6769  HIVE-25909: Add test for 'hive.default.nulls.last' property 
for windows with ordering (Alessandro Solimando, reviewed by Stamatis 
Zampetakis)

No new revisions were added by this update.

Summary of changes:
 ql/src/test/queries/clientpositive/order_null2.q   |  60 ++
 .../results/clientpositive/llap/order_null2.q.out  | 629 +
 2 files changed, 689 insertions(+)
 create mode 100644 ql/src/test/queries/clientpositive/order_null2.q
 create mode 100644 ql/src/test/results/clientpositive/llap/order_null2.q.out


[hive] branch master updated: HIVE-25917: Use default value for 'hive.default.nulls.last' when config is not available (Alessandro Solimando, reviewed by Stamatis Zampetakis)

2022-01-31 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new e806c8e  HIVE-25917: Use default value for 'hive.default.nulls.last' 
when config is not available (Alessandro Solimando, reviewed by Stamatis 
Zampetakis)
e806c8e is described below

commit e806c8e04de5ae9171d43d9d872fbe81a2006f96
Author: Alessandro Solimando 
AuthorDate: Thu Jan 27 10:35:57 2022 +0100

HIVE-25917: Use default value for 'hive.default.nulls.last' when config is 
not available (Alessandro Solimando, reviewed by Stamatis Zampetakis)

Closes #2990
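The one-line fix below replaces a hard-coded `false` with the declared default of the property, so the parser's fallback can no longer drift from the value registered in HiveConf. A dependency-free sketch of the pattern; names and the default value are illustrative stand-ins, the real default lives in `HiveConf.ConfVars.HIVE_DEFAULT_NULLS_LAST.defaultBoolVal`:

```java
import java.util.Map;

public class NullsLastFallback {

  // Stand-in for HiveConf.ConfVars: each option carries its declared default.
  enum ConfVar {
    HIVE_DEFAULT_NULLS_LAST(true); // default value assumed here for illustration

    final boolean defaultBoolVal;

    ConfVar(boolean defaultBoolVal) {
      this.defaultBoolVal = defaultBoolVal;
    }
  }

  static boolean nullsLast(Map<ConfVar, Boolean> conf) {
    if (conf == null) {
      // Before the fix this returned a hard-coded `false`, which silently
      // disagreed with the option's declared default.
      return ConfVar.HIVE_DEFAULT_NULLS_LAST.defaultBoolVal;
    }
    return conf.getOrDefault(ConfVar.HIVE_DEFAULT_NULLS_LAST,
        ConfVar.HIVE_DEFAULT_NULLS_LAST.defaultBoolVal);
  }

  public static void main(String[] args) {
    System.out.println(nullsLast(null)); // follows the declared default
    System.out.println(nullsLast(Map.of(ConfVar.HIVE_DEFAULT_NULLS_LAST, false)));
  }
}
```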
---
 parser/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/parser/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g 
b/parser/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
index 70f9d69..ce63d0b 100644
--- a/parser/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
+++ b/parser/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
@@ -863,7 +863,7 @@ import org.apache.hadoop.hive.conf.HiveConf;
   }
   protected boolean nullsLast() {
 if(hiveConf == null){
-  return false;
+  return HiveConf.ConfVars.HIVE_DEFAULT_NULLS_LAST.defaultBoolVal;
 }
 return HiveConf.getBoolVar(hiveConf, 
HiveConf.ConfVars.HIVE_DEFAULT_NULLS_LAST);
   }


[calcite-site] branch master updated: Site: Add external resources section in the community page

2022-01-28 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/calcite-site.git


The following commit(s) were added to refs/heads/master by this push:
 new a6fa17e  Site: Add external resources section in the community page
a6fa17e is described below

commit a6fa17e01dfdf41f7dc93fa06355710a8dcfaec9
Author: Stamatis Zampetakis 
AuthorDate: Sat Jan 29 00:22:48 2022 +0100

Site: Add external resources section in the community page
---
 community/index.html | 37 +
 1 file changed, 37 insertions(+)

diff --git a/community/index.html b/community/index.html
index 45961ca..7bbe54d 100644
--- a/community/index.html
+++ b/community/index.html
@@ -86,6 +86,7 @@
   More 
talks
 
   
+  External resources
 
 
 Upcoming talks
@@ -644,6 +645,42 @@ and <a href="https://beam-summit.firebaseapp.com/schedule/">Beam Summit Europe 2
 <a href="https://www.slideshare.net/julianhyde/how-to-integrate-splunk-with-any-data-solution">How to integrate Splunk with any data solution</a> (Splunk User Conference, 2012)
 
 
+External resources
+
+A collection of articles, blogs, presentations, and interesting projects 
related to Apache Calcite.
+
+If you have something interesting to share with the community drop us an 
email on the dev list or
+consider creating a pull request on GitHub. If you just finished a cool 
project using Calcite
+consider writing a short article about it for our news section.
+
+
+  
+<a href="https://datalore.jetbrains.com/view/notebook/JYTVfn90xYSmv6U5f2NIQR">Building a new Calcite frontend (GraphQL)</a> (Gavin Ray, 2022)
+  
+<a href="https://github.com/ieugen/calcite-clj">Write Calcite adapters in Clojure</a> (Ioan Eugen Stan, 2022)
+  
+<a href="https://www.querifylabs.com/blog/cross-product-suppression-in-join-order-planning">Cross-Product Suppression in Join Order Planning</a> (Vladimir Ozerov, 2021)
+  
+<a href="https://www.querifylabs.com/blog/metadata-management-in-apache-calcite">Metadata Management in Apache Calcite</a> (Roman Kondakov, 2021)
+  
+<a href="https://www.querifylabs.com/blog/relational-operators-in-apache-calcite">Relational Operators in Apache Calcite</a> (Vladimir Ozerov, 2021)
+  
+<a href="https://www.querifylabs.com/blog/introduction-to-the-join-ordering-problem">Introduction to the Join Ordering Problem</a> (Alexey Goncharuk, 2021)
+  
+<a href="https://www.querifylabs.com/blog/what-is-cost-based-optimization">What is Cost-based Optimization?</a> (Alexey Goncharuk, 2021)
+  
+<a href="https://www.querifylabs.com/blog/memoization-in-cost-based-optimizers">Memoization in Cost-based Optimizers</a> (Vladimir Ozerov, 2021)
+  
+<a href="https://www.querifylabs.com/blog/rule-based-query-optimization">Rule-based Query Optimization</a> (Vladimir Ozerov, 2021)
+  
+<a href="https://www.querifylabs.com/blog/custom-traits-in-apache-calcite">Custom traits in Apache Calcite</a> (Vladimir Ozerov, 2020)
+  
+<a href="https://www.querifylabs.com/blog/assembling-a-query-optimizer-with-apache-calcite">Assembling a query optimizer with Apache Calcite</a> (Vladimir Ozerov, 2020)
+  
+<a href="https://github.com/michaelmior/calcite-notebooks">A series of Jupyter notebooks to demonstrate the functionality of Apache Calcite</a> (Michael Mior)
+  <a href="https://github.com/pingcap/awesome-database-learning">A curated collection of resources about databases</a>
+
+
   
 
 


[calcite] branch site updated: Site: Add external resources section in the community page

2022-01-28 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch site
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/site by this push:
 new fabef05  Site: Add external resources section in the community page
fabef05 is described below

commit fabef057c536d56e10530b399543077abad03a24
Author: Jing Zhang 
AuthorDate: Thu Jan 27 17:01:14 2022 +0800

Site: Add external resources section in the community page

Close apache/calcite#2703
---
 site/community/index.md | 22 ++
 1 file changed, 22 insertions(+)

diff --git a/site/community/index.md b/site/community/index.md
index e35bd4f..67bfaee 100644
--- a/site/community/index.md
+++ b/site/community/index.md
@@ -204,3 +204,25 @@ As Hadoop Summit, Dublin, 2016
 * <a href="https://github.com/julianhyde/share/blob/master/slides/optiq-nosql-now-2013.pdf?raw=true">SQL Now!</a> (NoSQL Now! conference, 2013)
 * <a href="https://github.com/julianhyde/share/blob/master/slides/optiq-drill-user-group-2013.pdf?raw=true">Drill / SQL / Optiq</a> (2013)
 * <a href="https://www.slideshare.net/julianhyde/how-to-integrate-splunk-with-any-data-solution">How to integrate Splunk with any data solution</a> (Splunk User Conference, 2012)
+
+# External resources
+
+A collection of articles, blogs, presentations, and interesting projects 
related to Apache Calcite.
+
+If you have something interesting to share with the community drop us an email 
on the dev list or
+consider creating a pull request on GitHub. If you just finished a cool 
project using Calcite
+consider writing a short article about it for our [news section]({{ 
site.baseurl }}/news/index.html).
+
+* <a href="https://datalore.jetbrains.com/view/notebook/JYTVfn90xYSmv6U5f2NIQR">Building a new Calcite frontend (GraphQL)</a> (Gavin Ray, 2022)
+* <a href="https://github.com/ieugen/calcite-clj">Write Calcite adapters in Clojure</a> (Ioan Eugen Stan, 2022)
+* <a href="https://www.querifylabs.com/blog/cross-product-suppression-in-join-order-planning">Cross-Product Suppression in Join Order Planning</a> (Vladimir Ozerov, 2021)
+* <a href="https://www.querifylabs.com/blog/metadata-management-in-apache-calcite">Metadata Management in Apache Calcite</a> (Roman Kondakov, 2021)
+* <a href="https://www.querifylabs.com/blog/relational-operators-in-apache-calcite">Relational Operators in Apache Calcite</a> (Vladimir Ozerov, 2021)
+* <a href="https://www.querifylabs.com/blog/introduction-to-the-join-ordering-problem">Introduction to the Join Ordering Problem</a> (Alexey Goncharuk, 2021)
+* <a href="https://www.querifylabs.com/blog/what-is-cost-based-optimization">What is Cost-based Optimization?</a> (Alexey Goncharuk, 2021)
+* <a href="https://www.querifylabs.com/blog/memoization-in-cost-based-optimizers">Memoization in Cost-based Optimizers</a> (Vladimir Ozerov, 2021)
+* <a href="https://www.querifylabs.com/blog/rule-based-query-optimization">Rule-based Query Optimization</a> (Vladimir Ozerov, 2021)
+* <a href="https://www.querifylabs.com/blog/custom-traits-in-apache-calcite">Custom traits in Apache Calcite</a> (Vladimir Ozerov, 2020)
+* <a href="https://www.querifylabs.com/blog/assembling-a-query-optimizer-with-apache-calcite">Assembling a query optimizer with Apache Calcite</a> (Vladimir Ozerov, 2020)
+* <a href="https://github.com/michaelmior/calcite-notebooks">A series of Jupyter notebooks to demonstrate the functionality of Apache Calcite</a> (Michael Mior)
+* <a href="https://github.com/pingcap/awesome-database-learning">A curated collection of resources about databases</a>


[calcite] branch master updated: Site: Add external resources section in the community page

2022-01-28 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/master by this push:
 new 8b55e31  Site: Add external resources section in the community page
8b55e31 is described below

commit 8b55e31f6e3ff772483e06983fe26e88814c368e
Author: Jing Zhang 
AuthorDate: Thu Jan 27 17:01:14 2022 +0800

Site: Add external resources section in the community page

Close apache/calcite#2703
---
 site/community/index.md | 22 ++
 1 file changed, 22 insertions(+)

diff --git a/site/community/index.md b/site/community/index.md
index e35bd4f..67bfaee 100644
--- a/site/community/index.md
+++ b/site/community/index.md
@@ -204,3 +204,25 @@ As Hadoop Summit, Dublin, 2016
 * <a href="https://github.com/julianhyde/share/blob/master/slides/optiq-nosql-now-2013.pdf?raw=true">SQL Now!</a> (NoSQL Now! conference, 2013)
 * <a href="https://github.com/julianhyde/share/blob/master/slides/optiq-drill-user-group-2013.pdf?raw=true">Drill / SQL / Optiq</a> (2013)
 * <a href="https://www.slideshare.net/julianhyde/how-to-integrate-splunk-with-any-data-solution">How to integrate Splunk with any data solution</a> (Splunk User Conference, 2012)
+
+# External resources
+
+A collection of articles, blogs, presentations, and interesting projects 
related to Apache Calcite.
+
+If you have something interesting to share with the community drop us an email 
on the dev list or
+consider creating a pull request on GitHub. If you just finished a cool 
project using Calcite
+consider writing a short article about it for our [news section]({{ 
site.baseurl }}/news/index.html).
+
+* <a href="https://datalore.jetbrains.com/view/notebook/JYTVfn90xYSmv6U5f2NIQR">Building a new Calcite frontend (GraphQL)</a> (Gavin Ray, 2022)
+* <a href="https://github.com/ieugen/calcite-clj">Write Calcite adapters in Clojure</a> (Ioan Eugen Stan, 2022)
+* <a href="https://www.querifylabs.com/blog/cross-product-suppression-in-join-order-planning">Cross-Product Suppression in Join Order Planning</a> (Vladimir Ozerov, 2021)
+* <a href="https://www.querifylabs.com/blog/metadata-management-in-apache-calcite">Metadata Management in Apache Calcite</a> (Roman Kondakov, 2021)
+* <a href="https://www.querifylabs.com/blog/relational-operators-in-apache-calcite">Relational Operators in Apache Calcite</a> (Vladimir Ozerov, 2021)
+* <a href="https://www.querifylabs.com/blog/introduction-to-the-join-ordering-problem">Introduction to the Join Ordering Problem</a> (Alexey Goncharuk, 2021)
+* <a href="https://www.querifylabs.com/blog/what-is-cost-based-optimization">What is Cost-based Optimization?</a> (Alexey Goncharuk, 2021)
+* <a href="https://www.querifylabs.com/blog/memoization-in-cost-based-optimizers">Memoization in Cost-based Optimizers</a> (Vladimir Ozerov, 2021)
+* <a href="https://www.querifylabs.com/blog/rule-based-query-optimization">Rule-based Query Optimization</a> (Vladimir Ozerov, 2021)
+* <a href="https://www.querifylabs.com/blog/custom-traits-in-apache-calcite">Custom traits in Apache Calcite</a> (Vladimir Ozerov, 2020)
+* <a href="https://www.querifylabs.com/blog/assembling-a-query-optimizer-with-apache-calcite">Assembling a query optimizer with Apache Calcite</a> (Vladimir Ozerov, 2020)
+* <a href="https://github.com/michaelmior/calcite-notebooks">A series of Jupyter notebooks to demonstrate the functionality of Apache Calcite</a> (Michael Mior)
+* <a href="https://github.com/pingcap/awesome-database-learning">A curated collection of resources about databases</a>


[calcite-site] 02/02: Site: Add "calcite-clj - Use Calcite with Clojure" in talks section

2022-01-28 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/calcite-site.git

commit 0df40dc3ae730d2efa0729903d6c4b9bdce4b0c5
Author: Stamatis Zampetakis 
AuthorDate: Fri Jan 28 23:26:04 2022 +0100

Site: Add "calcite-clj - Use Calcite with Clojure" in talks section
---
 community/index.html | 8 
 1 file changed, 8 insertions(+)

diff --git a/community/index.html b/community/index.html
index bd2da83..45961ca 100644
--- a/community/index.html
+++ b/community/index.html
@@ -67,6 +67,7 @@
   Help
   
 Talks
+  calcite-clj - Use 
Calcite with Clojure
   Morel, a 
functional query language (Julian Hyde)
   Building
 modern SQL query optimizers with Apache Calcite
   Apache Calcite Tutorial
@@ -518,6 +519,13 @@ The code is available on https://github.com/apache/calcite/tree/master;
 Watch some presentations and read through some slide decks about
 Calcite, or attend one of the upcoming talks.
 
+calcite-clj - Use Calcite with 
Clojure
+
+At <a href="https://www.meetup.com/Apache-Calcite/events/282836907/">Apache Calcite Online Meetup January 2022</a>
+<a href="https://ieugen.github.io/calcite-clj/">[slides]</a>
+<a href="https://www.youtube.com/watch?v=9CUWX8JHA90">[video]</a>
+<a href="https://github.com/ieugen/calcite-clj">[code]</a>
+
 Morel, a functional 
query language (Julian Hyde)
 
 At <a href="https://thestrangeloop.com/2021/morel-a-functional-query-language.html">Strange Loop 2021</a>,


[calcite-site] 01/02: Site: Fix typo in howto.md

2022-01-28 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/calcite-site.git

commit 0e31918c5f1259920f2d7c5c5abf0ab43209aaf2
Author: Stamatis Zampetakis 
AuthorDate: Fri Jan 28 23:25:15 2022 +0100

Site: Fix typo in howto.md
---
 docs/howto.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/howto.html b/docs/howto.html
index 6342c0b..3da73b9 100644
--- a/docs/howto.html
+++ b/docs/howto.html
@@ -218,7 +218,7 @@ environment, as follows.
 
 
   
--Dcalcite.test.db=DB (where db is h2, hsqldb, mysql, or postgresql) allows you
+-Dcalcite.test.db=DB (where DB is h2, hsqldb, mysql, or postgresql) allows you
 to change the JDBC data source for the test suite. Calciteā€™s test
 suite requires a JDBC data source populated with the foodmart data
 set.


[calcite-site] branch master updated (74e4896 -> 0df40dc)

2022-01-28 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/calcite-site.git.


from 74e4896  Site: Add Alessandro Solimando as committer
 new 0e31918  Site: Fix typo in howto.md
 new 0df40dc  Site: Add "calcite-clj - Use Calcite with Clojure" in talks 
section

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 community/index.html | 8 
 docs/howto.html  | 2 +-
 2 files changed, 9 insertions(+), 1 deletion(-)


[calcite] branch site updated: Site: Add "calcite-clj - Use Calcite with Clojure" in talks section

2022-01-28 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch site
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/site by this push:
 new 48f4bf8  Site: Add "calcite-clj - Use Calcite with Clojure" in talks 
section
48f4bf8 is described below

commit 48f4bf8596ebfa0f7460ce9358d30028f268cb8e
Author: Eugen Stan 
AuthorDate: Fri Jan 28 12:02:09 2022 +0200

Site: Add "calcite-clj - Use Calcite with Clojure" in talks section

Close apache/calcite#2704
---
 site/community/index.md | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/site/community/index.md b/site/community/index.md
index eb31c55..e35bd4f 100644
--- a/site/community/index.md
+++ b/site/community/index.md
@@ -87,6 +87,13 @@ Want to learn more about Calcite?
 Watch some presentations and read through some slide decks about
 Calcite, or attend one of the [upcoming talks](#upcoming-talks).
 
+## calcite-clj - Use Calcite with Clojure
+
+At [Apache Calcite Online Meetup January 
2022](https://www.meetup.com/Apache-Calcite/events/282836907/)
+[[slides]](https://ieugen.github.io/calcite-clj/)
+[[video]](https://www.youtube.com/watch?v=9CUWX8JHA90)
+[[code]](https://github.com/ieugen/calcite-clj)
+
 ## Morel, a functional query language (Julian Hyde)
 
 At [Strange Loop 
2021](https://thestrangeloop.com/2021/morel-a-functional-query-language.html),


[calcite] branch master updated: Site: Add "calcite-clj - Use Calcite with Clojure" in talks section

2022-01-28 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/calcite.git


The following commit(s) were added to refs/heads/master by this push:
 new c80b948  Site: Add "calcite-clj - Use Calcite with Clojure" in talks 
section
c80b948 is described below

commit c80b948b5a60dbc4a4905b2104ba5b4bba41e006
Author: Eugen Stan 
AuthorDate: Fri Jan 28 12:02:09 2022 +0200

Site: Add "calcite-clj - Use Calcite with Clojure" in talks section

Close apache/calcite#2704
---
 site/community/index.md | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/site/community/index.md b/site/community/index.md
index eb31c55..e35bd4f 100644
--- a/site/community/index.md
+++ b/site/community/index.md
@@ -87,6 +87,13 @@ Want to learn more about Calcite?
 Watch some presentations and read through some slide decks about
 Calcite, or attend one of the [upcoming talks](#upcoming-talks).
 
+## calcite-clj - Use Calcite with Clojure
+
+At [Apache Calcite Online Meetup January 
2022](https://www.meetup.com/Apache-Calcite/events/282836907/)
+[[slides]](https://ieugen.github.io/calcite-clj/)
+[[video]](https://www.youtube.com/watch?v=9CUWX8JHA90)
+[[code]](https://github.com/ieugen/calcite-clj)
+
 ## Morel, a functional query language (Julian Hyde)
 
 At [Strange Loop 
2021](https://thestrangeloop.com/2021/morel-a-functional-query-language.html),


[hive] branch master updated: HIVE-25880: Add property to exclude CBO rules by a regex on their description (Alessandro Solimando, reviewed by Stamatis Zampetakis)

2022-01-20 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new b7eca8a  HIVE-25880: Add property to exclude CBO rules by a regex on 
their description (Alessandro Solimando, reviewed by Stamatis Zampetakis)
b7eca8a is described below

commit b7eca8ab5280c5b59d473b8c5fd98be8da5c1195
Author: Alessandro Solimando 
AuthorDate: Wed Jan 19 17:16:55 2022 +0100

HIVE-25880: Add property to exclude CBO rules by a regex on their 
description (Alessandro Solimando, reviewed by Stamatis Zampetakis)

Closes #2955
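A self-contained sketch of what such a property does: planner rules whose description matches the configured regex are filtered out before planning, and the empty default excludes nothing. The rule names below are invented, and whether the real implementation uses full-string matching or substring search is not shown in this excerpt; this sketch assumes full-string matching.

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class RuleExclusion {

  /**
   * Returns the rule descriptions that survive the exclusion regex.
   * An empty regex (the property's default) disables the mechanism.
   */
  static List<String> excludeRules(List<String> ruleDescriptions, String exclusionRegex) {
    if (exclusionRegex.isEmpty()) {
      return ruleDescriptions;
    }
    Pattern pattern = Pattern.compile(exclusionRegex);
    return ruleDescriptions.stream()
        .filter(desc -> !pattern.matcher(desc).matches()) // drop matching rules
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<String> rules = List.of("HiveJoinPushTransitivePredicatesRule",
        "HiveFilterProjectTransposeRule", "HiveSortMergeRule");
    // Excludes only the rule whose description contains "Transpose".
    System.out.println(excludeRules(rules, ".*Transpose.*"));
  }
}
```

As the property description notes, this is a workaround knob for problematic queries, not a tuning device: excluding a rule changes which plans the optimizer can even consider.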
---
 .../java/org/apache/hadoop/hive/conf/HiveConf.java |   4 +
 .../hadoop/hive/ql/parse/CalcitePlanner.java   |   9 ++
 .../queries/clientpositive/rule_exclusion_config.q |  44 ++
 .../llap/rule_exclusion_config.q.out   | 150 +
 4 files changed, 207 insertions(+)

diff --git a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
index a174653..6e4bbcc7 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
@@ -1883,6 +1883,10 @@ public class HiveConf extends Configuration {
  + " expressed 
as multiple of Local FS write cost"),
 HIVE_CBO_COST_MODEL_HDFS_READ("hive.cbo.costmodel.hdfs.read", "1.5", 
"Default cost of reading a byte from HDFS;"
  + " expressed 
as multiple of Local FS read cost"),
+HIVE_CBO_RULE_EXCLUSION_REGEX("hive.cbo.rule.exclusion.regex", "",
+"Regex over rule descriptions to exclude them from planning. "
++ "The intended usage is to allow to disable rules from 
problematic queries, it is *not* a performance tuning property. "
++ "The property is experimental, it can be changed or removed 
without any notice."),
 HIVE_CBO_SHOW_WARNINGS("hive.cbo.show.warnings", true,
  "Toggle display of CBO warnings like missing column stats"),
 
HIVE_CBO_STATS_CORRELATED_MULTI_KEY_JOINS("hive.cbo.stats.correlated.multi.key.joins",
 true,
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java 
b/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
index dc88027..ab4506e 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
@@ -1958,6 +1958,7 @@ public class CalcitePlanner extends SemanticAnalyzer {
 
   final boolean useMaterializedViewsRegistry = 
!conf.get(HiveConf.ConfVars.HIVE_SERVER2_MATERIALIZED_VIEWS_REGISTRY_IMPL.varname)
   .equals("DUMMY");
+  final String ruleExclusionRegex = 
conf.get(ConfVars.HIVE_CBO_RULE_EXCLUSION_REGEX.varname, "");
   final RelNode calcitePreMVRewritingPlan = basePlan;
   final Set tablesUsedQuery = getTablesUsed(basePlan);
 
@@ -2023,6 +2024,9 @@ public class CalcitePlanner extends SemanticAnalyzer {
   planner.addRule(new HivePartitionPruneRule(conf));
 
   // Optimize plan
+  if (!ruleExclusionRegex.isEmpty()) {
+
planner.setRuleDescExclusionFilter(Pattern.compile(ruleExclusionRegex));
+  }
   planner.setRoot(basePlan);
   basePlan = planner.findBestExp();
   // Remove view-based rewriting rules from planner
@@ -2416,6 +2420,8 @@ public class CalcitePlanner extends SemanticAnalyzer {
 RelMetadataProvider mdProvider, RexExecutor executorProvider,
 List materializations) {
 
+  final String ruleExclusionRegex = 
conf.get(ConfVars.HIVE_CBO_RULE_EXCLUSION_REGEX.varname, "");
+
   // Create planner and copy context
   HepPlanner planner = new HepPlanner(program,
   basePlan.getCluster().getPlanner().getContext());
@@ -2441,6 +2447,9 @@ public class CalcitePlanner extends SemanticAnalyzer {
 }
   }
 
+  if (!ruleExclusionRegex.isEmpty()) {
+
planner.setRuleDescExclusionFilter(Pattern.compile(ruleExclusionRegex));
+  }
   planner.setRoot(basePlan);
 
   return planner.findBestExp();
diff --git a/ql/src/test/queries/clientpositive/rule_exclusion_config.q 
b/ql/src/test/queries/clientpositive/rule_exclusion_config.q
new file mode 100644
index 000..2fb4418
--- /dev/null
+++ b/ql/src/test/queries/clientpositive/rule_exclusion_config.q
@@ -0,0 +1,44 @@
+--! qt:dataset:src
+
+EXPLAIN CBO
+SELECT *
+FROM src src1
+  JOIN src src2 ON (src1.key = src2.key)
+  JOIN src src3 ON (src1.key = src3.key)
+WHERE src1.key > 10 and src1.key < 20;
+
+set hive.cbo.rule.exclusion.regex=HiveJoinPushTransitivePredica
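
The exclusion mechanism in the CalcitePlanner hunks above delegates to Calcite's `setRuleDescExclusionFilter(Pattern)`, which skips any rule whose description matches the compiled regex. The matching semantics can be sketched without a running planner; this is an illustrative standalone sketch, not Hive or Calcite code — `excludeRules` and the rule description strings are hypothetical:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class RuleExclusionSketch {
    // Mirrors the planner behavior: a rule survives planning only if its
    // description does NOT fully match the exclusion regex; an empty
    // property (the default for hive.cbo.rule.exclusion.regex) disables
    // filtering entirely, as in the guarded calls above.
    static List<String> excludeRules(List<String> ruleDescs, String exclusionRegex) {
        if (exclusionRegex.isEmpty()) {
            return ruleDescs; // no filter installed
        }
        Pattern p = Pattern.compile(exclusionRegex);
        return ruleDescs.stream()
                .filter(desc -> !p.matcher(desc).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> rules = List.of(
                "HiveJoinPushTransitivePredicatesRule",
                "HiveFilterProjectTransposeRule");
        // Excluding the first description by regex leaves only the second rule.
        System.out.println(excludeRules(rules, "HiveJoinPushTransitivePredicates.*"));
    }
}
```

Note that the filter applies a full match against the whole rule description, so a prefix alone will not exclude a rule unless it is extended with `.*`.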

[hive] branch master updated (459f8f5 -> 587c698)

2022-01-12 Thread zabetak
This is an automated email from the ASF dual-hosted git repository.

zabetak pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 459f8f5  HIVE-25843: Add flag to disable Iceberg FileIO config 
serialization (#2917) (Marton Bod, reviewed by Peter Vary)
 add 587c698  HIVE-25856: Intermittent null ordering in plans of queries 
with GROUP BY and LIMIT (Stamatis Zampetakis, reviewed by Krisztian Kasa)

No new revisions were added by this update.

Summary of changes:
 .../calcite/rules/HiveAggregateSortLimitRule.java  | 23 ++
 .../hadoop/hive/ql/parse/CalcitePlanner.java   |  9 --
 .../clientpositive/cbo_AggregateSortLimitRule.q|  5 +++
 .../llap/cbo_AggregateSortLimitRule.q.out  | 36 ++
 4 files changed, 50 insertions(+), 23 deletions(-)
 create mode 100644 
ql/src/test/queries/clientpositive/cbo_AggregateSortLimitRule.q
 create mode 100644 
ql/src/test/results/clientpositive/llap/cbo_AggregateSortLimitRule.q.out
