[hive] branch branch-3.1 updated: HIVE-24965: Describe table partition stats fetch should be configurable (Kevin Cheung, reviewed by Sankar Hariappan)

2021-04-06 Thread sankarh
This is an automated email from the ASF dual-hosted git repository.

sankarh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new e497ada  HIVE-24965: Describe table partition stats fetch should be configurable (Kevin Cheung, reviewed by Sankar Hariappan)
e497ada is described below

commit e497ada70e04792d0dfbdcff75d68ecee8486c3e
Author: Kevin Cheung 
AuthorDate: Tue Apr 6 22:42:08 2021 -0700

HIVE-24965: Describe table partition stats fetch should be configurable (Kevin Cheung, reviewed by Sankar Hariappan)

Signed-off-by: Sankar Hariappan 
Closes (#2157)
---
 .../java/org/apache/hadoop/hive/conf/HiveConf.java |   3 +
 .../org/apache/hadoop/hive/ql/exec/DDLTask.java    |   3 +-
 .../test/queries/clientpositive/describe_table.q   |   8 ++
 .../results/clientpositive/describe_table.q.out    | 101 +
 4 files changed, 114 insertions(+), 1 deletion(-)

diff --git a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
index 8959ec1..0517dc0 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
@@ -4411,6 +4411,9 @@ public class HiveConf extends Configuration {
 "Comma-separated list of class names extending EventConsumer," +
  "to handle the NotificationEvents retreived by the notification event poll."),
 
+
HIVE_DESCRIBE_PARTITIONED_TABLE_IGNORE_STATS("hive.describe.partitionedtable.ignore.stats", false,
+    "Disable partitioned table stats collection for 'DESCRIBE FORMATTED' or 'DESCRIBE EXTENDED' commands."),
+
 /* BLOBSTORE section */
 
 HIVE_BLOBSTORE_SUPPORTED_SCHEMES("hive.blobstore.supported.schemes", "s3,s3a,s3n",
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
index 2ec6576..2244c79 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
@@ -3671,7 +3671,8 @@ public class DDLTask extends Task<DDLWork> implements Serializable {
 }
 
 if (descTbl.isExt() || descTbl.isFormatted()) {
-  if (tbl.isPartitioned() && part == null) {
+  boolean disablePartitionStats = conf.getBoolVar(HiveConf.ConfVars.HIVE_DESCRIBE_PARTITIONED_TABLE_IGNORE_STATS);
+  if (tbl.isPartitioned() && part == null && !disablePartitionStats) {
 // No partitioned specified for partitioned table, lets fetch all.
 Map<String, String> tblProps = tbl.getParameters() == null ? new HashMap<String, String>() : tbl.getParameters();
 Map valueMap = new HashMap<>();
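The patched condition reduces to a single boolean guard: aggregate stats across all partitions only when describing a partitioned table with no partition named and the new flag unset. Below is a minimal, self-contained Java sketch of that decision (class and method names are illustrative, not from the patch; the HiveConf lookup and table/partition objects are collapsed into plain booleans):

```java
public class DescribeStatsGuard {
    /**
     * Mirrors the patched condition in DDLTask: fetch stats across all
     * partitions only for an extended/formatted describe of a partitioned
     * table, when no partition was named and the ignore flag is off.
     */
    public static boolean shouldFetchAllPartitionStats(boolean extOrFormatted,
                                                       boolean tableIsPartitioned,
                                                       boolean partitionSpecified,
                                                       boolean ignoreStatsFlag) {
        return extOrFormatted && tableIsPartitioned && !partitionSpecified && !ignoreStatsFlag;
    }

    public static void main(String[] args) {
        // Default config: DESCRIBE FORMATTED on a partitioned table fetches stats
        System.out.println(shouldFetchAllPartitionStats(true, true, false, false)); // true
        // With hive.describe.partitionedtable.ignore.stats=true the fetch is skipped
        System.out.println(shouldFetchAllPartitionStats(true, true, false, true));  // false
    }
}
```

The flag only short-circuits the all-partitions aggregation path; describing a specific partition is unaffected, as in the patch.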
diff --git a/ql/src/test/queries/clientpositive/describe_table.q b/ql/src/test/queries/clientpositive/describe_table.q
index 07fd6fc..8fd3bc8 100644
--- a/ql/src/test/queries/clientpositive/describe_table.q
+++ b/ql/src/test/queries/clientpositive/describe_table.q
@@ -34,6 +34,14 @@ alter table srcpart_serdeprops set serdeproperties('A1234'='3');
 describe formatted srcpart_serdeprops;
 drop table srcpart_serdeprops;
 
+CREATE TABLE IF NOT EXISTS desc_parttable_stats (somenumber int) PARTITIONED BY (yr int);
+INSERT INTO desc_parttable_stats values(0,1),(0,2),(0,3);
+set hive.describe.partitionedtable.ignore.stats=true;
+describe formatted desc_parttable_stats;
+set hive.describe.partitionedtable.ignore.stats=false;
+describe formatted desc_parttable_stats;
+DROP TABLE IF EXISTS desc_parttable_stats;
+
 CREATE DATABASE IF NOT EXISTS name1;
 CREATE DATABASE IF NOT EXISTS name2;
 use name1;
diff --git a/ql/src/test/results/clientpositive/describe_table.q.out b/ql/src/test/results/clientpositive/describe_table.q.out
index 8c7a16c..6f37bee 100644
--- a/ql/src/test/results/clientpositive/describe_table.q.out
+++ b/ql/src/test/results/clientpositive/describe_table.q.out
@@ -481,6 +481,107 @@ POSTHOOK: query: drop table srcpart_serdeprops
 POSTHOOK: type: DROPTABLE
 POSTHOOK: Input: default@srcpart_serdeprops
 POSTHOOK: Output: default@srcpart_serdeprops
+PREHOOK: query: CREATE TABLE IF NOT EXISTS desc_parttable_stats (somenumber int) PARTITIONED BY (yr int)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@desc_parttable_stats
+POSTHOOK: query: CREATE TABLE IF NOT EXISTS desc_parttable_stats (somenumber int) PARTITIONED BY (yr int)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@desc_parttable_stats
+PREHOOK: query: INSERT INTO desc_parttable_stats values(0,1),(0,2),(0,3)
+PREHOOK: type: QUERY
+PREHOOK: Input: _dummy_database@_dummy_table
+PREHOOK: Output: default@desc_parttable_stats
+POSTHOOK: query: INSERT INTO desc_parttable_stats values(0,1),(0,2),(0,3)
+POSTHOOK: type: QUERY
+POSTHOOK: Input: _dummy_database@_dummy_table
+POSTHOOK: Output: 

[hive] branch master updated (2eb0e00 -> 46ddd5a)

2021-04-06 Thread kgyrtkirk
This is an automated email from the ASF dual-hosted git repository.

kgyrtkirk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


omit 2eb0e00  HIVE-24396: Additional feedback incorporated (Naveen Gangam); removed ReplicationSpec for connectors; notification event for alter connector; removed some code.
omit 34720cf  HIVE-24396: Remaining comments from the feedback (Naveen Gangam)
omit cd90398  HIVE-24396: Conflict from rebase to master
omit 53edee9  HIVE-24396: Changes from additional feedback from code review (Naveen Gangam)
omit 07ebf02  HIVE-24396: qtest failure (Naveen Gangam)
omit 14283d3  HIVE-24396: Cleanup and one test failure (Naveen Gangam)
omit b9150e1  HIVE-24396: Incorporating feedback from the initial review (Naveen Gangam)
omit 5f5ec66  HIVE-24396: Duplicate SQL statements in derby upgrade script (Naveen Gangam)
omit d7a8eb7  HIVE-24396: Some changes with formatters after the rebase (Naveen Gangam)
omit 506621c  HIVE-24396: Fix for NPE in get_database_core with null catalog name
omit 2b4fa4e  HIVE-24396: Database name for remote table should be set to hive dbname not the scoped dbname
omit 4319d29  HIVE-24396: Fix in connector provider to return null instead of blank Table
omit 5c98e30  HIVE-24396: get_table_core() to return null instead of exception
omit 4779e86  HIVE-24396: Fix to CachedStore to make DBs NATIVE and fix to create_table_core on null DBs
omit 3e41782  HIVE-24396: Unhandled longvarchar and integer types for derby
omit 60ae013  HIVE-24396: Addressing test failures
omit 5a2236e  HIVE-24396: Build failure due to duplicate db definitions
omit 9937963  HIVE-24396: Moving create/drop/alter APIs to the interface; reverting fix for case sensitivity
omit 0b60db9  Retain case on table names during query processing
omit d66d5fc  Fix for 2 additional test failures
omit 23aafb7  Build issue with EventMessage
omit 3de6032  HIVE-24396: Refactored code to Abstract class and providers share common code
omit a2a592f  Test failures with tez driver and duplicate error codes
omit df8cb11  HIVE-24396: Follow up test failure fixes
omit 683d0ae  HIVE-24396: qtest failures, regenerate them because of new columns
omit 7504491  HIVE-24396: Fix for drop database for remote databases
omit 6369daa  Missed change from the rebase
omit 7d91a9a  HIVE-24396: Added schema changes for Oracle; made DBS.TYPE NOT NULL in all scripts; added Type support to DatabaseBuilder; added unit test for DataConnector; added unit test for REMOTE Database; fixed test failures in TestSchemaToolForMetaStore
omit 1523cd4  HIVE-24396: getTable/getTables API not expected to throw NoSuchObjectException
omit b30173d  Adding schema changes for mysql and postgres as well
omit 013a693  Adding a qtest and fixing type for default db
omit c6ed378  NullPointerException in CreateDatabaseOperation due to last change
omit 0b9a4f4  HIVE-24396: Build failure in itests due to unimplemented interface methods
omit 80013b4  Deleted commented out code and fixed location and IO classes
omit 91f1ccd  Added provider for postgres, refactored bunch of classes
omit d65307d  Implemented getTable and getTableNames for MYSQL (working)
omit 87c74ec  Adding DDL support for connectors (create/drop/show/desc/alter)
omit 69e5417  External metastore: clean after rebase

This update removed existing revisions from the reference, leaving the
reference pointing at a previous point in the repository history.

 * -- * -- N   refs/heads/master (46ddd5a)
            \
             O -- O -- O   (2eb0e00)

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/hadoop/hive/ql/ErrorMsg.java   |  3 -
 .../hcatalog/listener/DummyRawStoreFailEvent.java  | 27 -
 .../hadoop/hive/ql/parse/AlterClauseParser.g       | 31 -
 .../apache/hadoop/hive/ql/parse/CreateDDLParser.g  | 42 -
 .../apache/hadoop/hive/ql/parse/HiveLexerParent.g  |  6 -
 .../org/apache/hadoop/hive/ql/parse/HiveParser.g   | 35 +-
 .../hadoop/hive/ql/parse/IdentifiersParser.g       |  8 +-
 pom.xml                                            |  3 -
 .../database/create/CreateDatabaseAnalyzer.java    | 37 +-
 .../ql/ddl/database/create/CreateDatabaseDesc.java | 41 +-
 .../database/create/CreateDatabaseOperation.java   | 24 +-
 .../ql/ddl/database/desc/DescDatabaseDesc.java     |  6 +-
 .../ddl/database/desc/DescDatabaseFormatter.java   | 23 +-
 .../ddl/database/desc/DescDatabaseOperation.java   | 27 +-
 .../alter/AbstractAlterDataConnectorAnalyzer.java  | 42 -
 .../alter/AbstractAlterDataConnectorDesc.java      | 43 

[hive] 14/38: HIVE-24396: qtest failures, regenerate them because of new columns

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 683d0ae3d87546409dc0c7d4b29da0a318ea0c65
Author: Naveen Gangam 
AuthorDate: Fri Nov 20 21:53:54 2020 -0500

HIVE-24396: qtest failures, regenerate them because of new columns
---
 .../llap/alter_change_db_location.q.out|  5 ++-
 .../clientpositive/llap/alter_db_owner.q.out   |  6 +--
 .../llap/authorization_owner_actions_db.q.out  |  2 +-
 .../clientpositive/llap/database_location.q.out| 52 +-
 .../clientpositive/llap/database_properties.q.out  |  6 +--
 .../clientpositive/llap/db_ddl_explain.q.out   |  5 ++-
 .../clientpositive/llap/describe_database.q.out|  6 +--
 .../clientpositive/llap/unicode_comments.q.out |  4 +-
 .../hadoop/hive/metastore/TestHiveMetaStore.java   |  2 +-
 9 files changed, 49 insertions(+), 39 deletions(-)

diff --git a/ql/src/test/results/clientpositive/llap/alter_change_db_location.q.out b/ql/src/test/results/clientpositive/llap/alter_change_db_location.q.out
index 4c21153..2469cea 100644
--- a/ql/src/test/results/clientpositive/llap/alter_change_db_location.q.out
+++ b/ql/src/test/results/clientpositive/llap/alter_change_db_location.q.out
@@ -1,17 +1,18 @@
  A masked pattern was here 
 PREHOOK: type: CREATEDATABASE
 PREHOOK: Output: database:newDB
+PREHOOK: Output: hdfs://### HDFS PATH ###
  A masked pattern was here 
 POSTHOOK: type: CREATEDATABASE
 POSTHOOK: Output: database:newDB
- A masked pattern was here 
+POSTHOOK: Output: hdfs://### HDFS PATH ###
 PREHOOK: query: describe database extended newDB
 PREHOOK: type: DESCDATABASE
 PREHOOK: Input: database:newdb
 POSTHOOK: query: describe database extended newDB
 POSTHOOK: type: DESCDATABASE
 POSTHOOK: Input: database:newdb
-newdb  location/in/testhive_test_user  USER
+newdb  location/in/testhive_test_user  USER

 PREHOOK: query: use newDB
 PREHOOK: type: SWITCHDATABASE
 PREHOOK: Input: database:newdb
diff --git a/ql/src/test/results/clientpositive/llap/alter_db_owner.q.out b/ql/src/test/results/clientpositive/llap/alter_db_owner.q.out
index e7434ba..7e0316f 100644
--- a/ql/src/test/results/clientpositive/llap/alter_db_owner.q.out
+++ b/ql/src/test/results/clientpositive/llap/alter_db_owner.q.out
@@ -10,7 +10,7 @@ PREHOOK: Input: database:db_alter_onr
 POSTHOOK: query: describe database db_alter_onr
 POSTHOOK: type: DESCDATABASE
 POSTHOOK: Input: database:db_alter_onr
-db_alter_onr   location/in/testhive_test_user  USER
+db_alter_onr   location/in/testhive_test_user  USER

  A masked pattern was here 
 PREHOOK: type: ALTERDATABASE_OWNER
 PREHOOK: Output: database:db_alter_onr
@@ -40,7 +40,7 @@ PREHOOK: Input: database:db_alter_onr
 POSTHOOK: query: describe database db_alter_onr
 POSTHOOK: type: DESCDATABASE
 POSTHOOK: Input: database:db_alter_onr
-db_alter_onr   location/in/testuser1   USER
+db_alter_onr   location/in/testuser1   USER
  A masked pattern was here 
 PREHOOK: type: ALTERDATABASE_OWNER
 PREHOOK: Output: database:db_alter_onr
@@ -53,4 +53,4 @@ PREHOOK: Input: database:db_alter_onr
 POSTHOOK: query: describe database db_alter_onr
 POSTHOOK: type: DESCDATABASE
 POSTHOOK: Input: database:db_alter_onr
-db_alter_onr   location/in/testrole1   ROLE
+db_alter_onr   location/in/testrole1   ROLE
diff --git a/ql/src/test/results/clientpositive/llap/authorization_owner_actions_db.q.out b/ql/src/test/results/clientpositive/llap/authorization_owner_actions_db.q.out
index 0267f86..4b0f3f9 100644
--- a/ql/src/test/results/clientpositive/llap/authorization_owner_actions_db.q.out
+++ b/ql/src/test/results/clientpositive/llap/authorization_owner_actions_db.q.out
@@ -28,7 +28,7 @@ PREHOOK: Input: database:testdb
 POSTHOOK: query: desc database testdb
 POSTHOOK: type: DESCDATABASE
 POSTHOOK: Input: database:testdb
-testdb location/in/testtestroleROLE
+testdb location/in/testtestroleROLE
 PREHOOK: query: use testdb
 PREHOOK: type: SWITCHDATABASE
 PREHOOK: Input: database:testdb
diff --git a/ql/src/test/results/clientpositive/llap/database_location.q.out b/ql/src/test/results/clientpositive/llap/database_location.q.out
index 7969277..b998e7e 100644
--- a/ql/src/test/results/clientpositive/llap/database_location.q.out
+++ b/ql/src/test/results/clientpositive/llap/database_location.q.out
@@ -10,7 +10,7 @@ PREHOOK: Input: database:db1
 POSTHOOK: query: DESCRIBE DATABASE EXTENDED db1
 POSTHOOK: type: DESCDATABASE
 POSTHOOK: Input: database:db1
-db1location/in/testhive_test_user  USER
+db1

[hive] 25/38: HIVE-24396: Fix to CachedStore to make DBs NATIVE and fix to create_table_core on null DBs

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 4779e8680a05272f258375086e3fefcbd01ca3f0
Author: Naveen Gangam 
AuthorDate: Tue Dec 1 11:03:24 2020 -0500

HIVE-24396: Fix to CachedStore to make DBs NATIVE and fix to 
create_table_core on null DBs
---
 .../apache/hadoop/hive/metastore/HiveMetaStore.java   | 19 ++-
 .../hadoop/hive/metastore/cache/CachedStore.java  |  3 +++
 2 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
index 7288ca3..26552d1 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
@@ -2366,6 +2366,16 @@ public class HiveMetaStore extends ThriftHiveMetastore {
   List processorCapabilities = req.getProcessorCapabilities();
   String processorId = req.getProcessorIdentifier();
 
+  // To preserve backward compatibility throw MetaException in case of null database
+  if (tbl.getDbName() == null) {
+throw new MetaException("Null database name is not allowed");
+  }
+
+  if (!MetaStoreUtils.validateName(tbl.getTableName(), conf)) {
+throw new InvalidObjectException(tbl.getTableName()
++ " is not a valid object name");
+  }
+
   Database db = get_database_core(tbl.getCatName(), tbl.getDbName());
   if (db != null && db.getType().equals(DatabaseType.REMOTE)) {
     DataConnectorProviderFactory.getDataConnectorProvider(db).createTable(tbl);
@@ -2384,15 +2394,6 @@ public class HiveMetaStore extends ThriftHiveMetastore {
 tbl.unsetColStats();
   }
 
-  // To preserve backward compatibility throw MetaException in case of null database
-  if (tbl.getDbName() == null) {
-throw new MetaException("Null database name is not allowed");
-  }
-
-  if (!MetaStoreUtils.validateName(tbl.getTableName(), conf)) {
-throw new InvalidObjectException(tbl.getTableName()
-+ " is not a valid object name");
-  }
   String validate = MetaStoreServerUtils.validateTblColumns(tbl.getSd().getCols());
   if (validate != null) {
 throw new InvalidObjectException("Invalid column " + validate);
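The hunk above moves the null-database and table-name checks ahead of the remote-connector dispatch, so an invalid request fails identically whether the target database is native or remote. A self-contained Java sketch of that validate-before-dispatch ordering (all names are illustrative stand-ins; the real code throws MetaException/InvalidObjectException and delegates the name check to MetaStoreUtils.validateName):

```java
public class CreateTableValidation {
    /** Hypothetical stand-in for the metastore's object-name check. */
    static boolean isValidName(String name) {
        return name != null && name.matches("[A-Za-z0-9_]+");
    }

    /**
     * Mirrors the reordering in the patch: validate the request up front,
     * then pick the execution path (remote connector vs. native create).
     */
    public static String createTable(String dbName, String tableName, boolean dbIsRemote) {
        if (dbName == null) {
            throw new IllegalArgumentException("Null database name is not allowed");
        }
        if (!isValidName(tableName)) {
            throw new IllegalArgumentException(tableName + " is not a valid object name");
        }
        // Only now decide where the create is executed.
        return dbIsRemote ? "dispatched-to-connector" : "created-natively";
    }
}
```

Before the patch, a remote database would dispatch to its connector provider first, skipping these checks entirely; hoisting them restores uniform error behavior.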
diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java
index 8ddaf4c..0a0f8fd 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java
@@ -1102,6 +1102,9 @@ public class CachedStore implements RawStore, Configurable {
   }
 
   @Override public void createDatabase(Database db) throws InvalidObjectException, MetaException {
+if (db.getType() == null) {
+  db.setType(DatabaseType.NATIVE);
+}
 rawStore.createDatabase(db);
 // in case of event based cache update, cache will be updated during commit.
 if (!canUseEvents) {
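The CachedStore fix is a defaulting pattern: normalize a missing enum field before handing the object to the underlying RawStore, so older clients that never set a type still produce a NATIVE database. A minimal sketch under the assumption that DatabaseType carries NATIVE/REMOTE values (the class names here are stand-ins, not the metastore's Thrift types):

```java
public class DatabaseTypeDefaulting {
    public enum DatabaseType { NATIVE, REMOTE }

    public static class Database {
        public DatabaseType type; // may arrive null from older clients
    }

    /**
     * Mirrors the CachedStore change: default a null type to NATIVE
     * before delegating to the persistent store.
     */
    public static Database normalize(Database db) {
        if (db.type == null) {
            db.type = DatabaseType.NATIVE;
        }
        return db;
    }
}
```

An explicitly REMOTE database passes through untouched; only the null case is rewritten.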


[hive] 34/38: HIVE-24396: qtest failure (Naveen Gangam)

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 07ebf02bfe91fdac6029503928f421cfce87
Author: Naveen Gangam 
AuthorDate: Mon Mar 22 15:27:00 2021 -0400

HIVE-24396: qtest failure (Naveen Gangam)
---
 ql/src/test/results/clientpositive/beeline/escape_comments.q.out | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/ql/src/test/results/clientpositive/beeline/escape_comments.q.out b/ql/src/test/results/clientpositive/beeline/escape_comments.q.out
index 64b13f0..69fd771 100644
--- a/ql/src/test/results/clientpositive/beeline/escape_comments.q.out
+++ b/ql/src/test/results/clientpositive/beeline/escape_comments.q.out
@@ -42,14 +42,14 @@ PREHOOK: Input: database:escape_comments_db
 POSTHOOK: query: describe database extended escape_comments_db
 POSTHOOK: type: DESCDATABASE
 POSTHOOK: Input: database:escape_comments_db
-escape_comments_db a\nblocation/in/testuserUSER
+escape_comments_db a\nblocation/in/testuserUSER

 PREHOOK: query: describe database escape_comments_db
 PREHOOK: type: DESCDATABASE
 PREHOOK: Input: database:escape_comments_db
 POSTHOOK: query: describe database escape_comments_db
 POSTHOOK: type: DESCDATABASE
 POSTHOOK: Input: database:escape_comments_db
-escape_comments_db a\nblocation/in/testuserUSER
+escape_comments_db a\nblocation/in/testuserUSER

 PREHOOK: query: show create table escape_comments_tbl1
 PREHOOK: type: SHOW_CREATETABLE
 PREHOOK: Input: escape_comments_db@escape_comments_tbl1


[hive] 36/38: HIVE-24396: Conflict from rebase to master

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit cd90398b358556a732961e4b6289aa0e5bd06c81
Author: Naveen Gangam 
AuthorDate: Fri Mar 26 10:23:34 2021 -0400

HIVE-24396: Conflict from rebase to master
---
 .../gen/thrift/gen-cpp/hive_metastore_types.cpp| 384 -
 1 file changed, 72 insertions(+), 312 deletions(-)

diff --git a/standalone-metastore/metastore-common/src/gen/thrift/gen-cpp/hive_metastore_types.cpp b/standalone-metastore/metastore-common/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
index f854c48..24b915f 100644
--- a/standalone-metastore/metastore-common/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
+++ b/standalone-metastore/metastore-common/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
@@ -19008,24 +19008,7 @@ void swap(GetPartitionsByNamesRequest &a, GetPartitionsByNamesRequest &b) {
   swap(a.__isset, b.__isset);
 }
 
-<<<<<<< HEAD
-GetPartitionsByNamesRequest::GetPartitionsByNamesRequest(const GetPartitionsByNamesRequest& other714) {
-  db_name = other714.db_name;
-  tbl_name = other714.tbl_name;
-  names = other714.names;
-  get_col_stats = other714.get_col_stats;
-  processorCapabilities = other714.processorCapabilities;
-  processorIdentifier = other714.processorIdentifier;
-  engine = other714.engine;
-  validWriteIdList = other714.validWriteIdList;
-  getFileMetadata = other714.getFileMetadata;
-  id = other714.id;
-  __isset = other714.__isset;
-}
-GetPartitionsByNamesRequest& GetPartitionsByNamesRequest::operator=(const GetPartitionsByNamesRequest& other715) {
-=======
 GetPartitionsByNamesRequest::GetPartitionsByNamesRequest(const GetPartitionsByNamesRequest& other715) {
->>>>>>> HIVE-24396: Build failure due to duplicate db definitions
   db_name = other715.db_name;
   tbl_name = other715.tbl_name;
   names = other715.names;
@@ -25631,27 +25614,27 @@ void swap(CompactionRequest &a, CompactionRequest &b) {
   swap(a.__isset, b.__isset);
 }
 
-CompactionRequest::CompactionRequest(const CompactionRequest& other930) {
-  dbname = other930.dbname;
-  tablename = other930.tablename;
-  partitionname = other930.partitionname;
-  type = other930.type;
-  runas = other930.runas;
-  properties = other930.properties;
-  initiatorId = other930.initiatorId;
-  initiatorVersion = other930.initiatorVersion;
-  __isset = other930.__isset;
-}
-CompactionRequest& CompactionRequest::operator=(const CompactionRequest& other931) {
-  dbname = other931.dbname;
-  tablename = other931.tablename;
-  partitionname = other931.partitionname;
-  type = other931.type;
-  runas = other931.runas;
-  properties = other931.properties;
-  initiatorId = other931.initiatorId;
-  initiatorVersion = other931.initiatorVersion;
-  __isset = other931.__isset;
+CompactionRequest::CompactionRequest(const CompactionRequest& other942) {
+  dbname = other942.dbname;
+  tablename = other942.tablename;
+  partitionname = other942.partitionname;
+  type = other942.type;
+  runas = other942.runas;
+  properties = other942.properties;
+  initiatorId = other942.initiatorId;
+  initiatorVersion = other942.initiatorVersion;
+  __isset = other942.__isset;
+}
+CompactionRequest& CompactionRequest::operator=(const CompactionRequest& other943) {
+  dbname = other943.dbname;
+  tablename = other943.tablename;
+  partitionname = other943.partitionname;
+  type = other943.type;
+  runas = other943.runas;
+  properties = other943.properties;
+  initiatorId = other943.initiatorId;
+  initiatorVersion = other943.initiatorVersion;
+  __isset = other943.__isset;
   return *this;
 }
 void CompactionRequest::printTo(std::ostream& out) const {
@@ -26783,47 +26766,47 @@ void swap(ShowCompactResponseElement &a, ShowCompactResponseElement &b) {
   swap(a.__isset, b.__isset);
 }
 
-ShowCompactResponseElement::ShowCompactResponseElement(const ShowCompactResponseElement& other942) {
-  dbname = other942.dbname;
-  tablename = other942.tablename;
-  partitionname = other942.partitionname;
-  type = other942.type;
-  state = other942.state;
-  workerid = other942.workerid;
-  start = other942.start;
-  runAs = other942.runAs;
-  hightestTxnId = other942.hightestTxnId;
-  metaInfo = other942.metaInfo;
-  endTime = other942.endTime;
-  hadoopJobId = other942.hadoopJobId;
-  id = other942.id;
-  errorMessage = other942.errorMessage;
-  enqueueTime = other942.enqueueTime;
-  workerVersion = other942.workerVersion;
-  initiatorId = other942.initiatorId;
-  initiatorVersion = other942.initiatorVersion;
-  __isset = other942.__isset;
-}
-ShowCompactResponseElement& ShowCompactResponseElement::operator=(const ShowCompactResponseElement& other943) {
-  dbname = other943.dbname;
-  tablename = other943.tablename;
-  partitionname = other943.partitionname;
-  type = other943.type;
-  state = other943.state;
-  workerid = other943.workerid;
-  start = other943.start;
-  runAs = other943.runAs;
-  

[hive] 38/38: HIVE-24396: Additional feedback incorporated (Naveen Gangam) Removed ReplicationSpec for connectors Notification event for alter connector removed some code.

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 2eb0e00d5d614d3144519cf4861ec1759a373c7d
Author: Naveen Gangam 
AuthorDate: Fri Apr 2 16:02:20 2021 -0400

HIVE-24396: Additional feedback incorporated (Naveen Gangam)
Removed ReplicationSpec for connectors
Notification event for alter connector
removed some code.
---
 .../alter/AbstractAlterDataConnectorDesc.java  | 13 +
 .../alter/AbstractAlterDataConnectorOperation.java | 12 ++---
 .../owner/AlterDataConnectorSetOwnerAnalyzer.java  |  2 +-
 .../owner/AlterDataConnectorSetOwnerDesc.java  |  5 +-
 .../owner/AlterDataConnectorSetOwnerOperation.java |  4 +-
 .../AlterDataConnectorSetPropertiesAnalyzer.java   |  2 +-
 .../AlterDataConnectorSetPropertiesDesc.java   |  6 +--
 .../AlterDataConnectorSetPropertiesOperation.java  |  3 +-
 .../alter/url/AlterDataConnectorSetUrlDesc.java|  2 +-
 .../url/AlterDataConnectorSetUrlOperation.java |  3 +-
 .../drop/DropDataConnectorAnalyzer.java|  3 +-
 .../dataconnector/drop/DropDataConnectorDesc.java  | 13 +
 .../drop/DropDataConnectorOperation.java   | 22 +
 .../hive/metastore/MetaStoreEventListener.java |  8 +++
 .../hive/metastore/MetaStoreListenerNotifier.java  |  8 +++
 .../DataConnectorProviderFactory.java  | 12 -
 .../metastore/events/AlterDataConnectorEvent.java  | 57 ++
 17 files changed, 92 insertions(+), 83 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/AbstractAlterDataConnectorDesc.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/AbstractAlterDataConnectorDesc.java
index 281378f..2258161 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/AbstractAlterDataConnectorDesc.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/AbstractAlterDataConnectorDesc.java
@@ -21,7 +21,6 @@ package org.apache.hadoop.hive.ql.ddl.dataconnector.alter;
 import java.io.Serializable;
 
 import org.apache.hadoop.hive.ql.ddl.DDLDesc;
-import org.apache.hadoop.hive.ql.parse.ReplicationSpec;
 import org.apache.hadoop.hive.ql.plan.Explain;
 import org.apache.hadoop.hive.ql.plan.Explain.Level;
 
@@ -32,23 +31,13 @@ public abstract class AbstractAlterDataConnectorDesc implements DDLDesc, Serializable {
   private static final long serialVersionUID = 1L;
 
   private final String connectorName;
-  private final ReplicationSpec replicationSpec;
 
-  public AbstractAlterDataConnectorDesc(String connectorName, ReplicationSpec replicationSpec) {
+  public AbstractAlterDataConnectorDesc(String connectorName) {
 this.connectorName = connectorName;
-this.replicationSpec = replicationSpec;
   }
 
   @Explain(displayName="name", explainLevels = {Level.USER, Level.DEFAULT, Level.EXTENDED })
   public String getConnectorName() {
 return connectorName;
   }
-
-  /**
-   * @return what kind of replication scope this alter is running under.
-   * This can result in a "ALTER IF NEWER THAN" kind of semantic
-   */
-  public ReplicationSpec getReplicationSpec() {
-return this.replicationSpec;
-  }
 }
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/AbstractAlterDataConnectorOperation.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/AbstractAlterDataConnectorOperation.java
index 84093e7..01da755 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/AbstractAlterDataConnectorOperation.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/AbstractAlterDataConnectorOperation.java
@@ -43,17 +43,13 @@ public abstract class AbstractAlterDataConnectorOperation
     Map<String, String> params = connector.getParameters();
-if ((desc.getReplicationSpec() != null) &&
-!desc.getReplicationSpec().allowEventReplacementInto(params)) {
-  LOG.debug("DDLTask: Alter Connector {} is skipped as connector is newer than update", dcName);
-  return 0; // no replacement, the existing connector state is newer than our update.
-}
-
-doAlteration(connector, params);
+// this call is to set the values from the alter descriptor onto the connector object
+doAlteration(connector);
 
+// This is the HMS metadata operation to modify the object
 context.getDb().alterDataConnector(connector.getName(), connector);
 return 0;
   }
 
-  protected abstract void doAlteration(DataConnector connector, Map<String, String> params) throws HiveException;
+  protected abstract void doAlteration(DataConnector connector) throws HiveException;
 }
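With the ReplicationSpec gate removed, the operation body reduces to a two-step template method: apply the descriptor's values to the connector, then persist it with a single HMS metadata call. A self-contained sketch of that shape (the class names and functional interface are illustrative; the real code is an abstract-class hierarchy with subclass doAlteration overrides, and persistence goes through alterDataConnector):

```java
import java.util.HashMap;
import java.util.Map;

public class AlterConnectorFlow {
    public static class DataConnector {
        public final Map<String, String> params = new HashMap<>();
        public String owner;
    }

    /** Functional stand-in for a subclass's doAlteration override. */
    public interface Alteration {
        void doAlteration(DataConnector connector);
    }

    /**
     * Template method mirroring the simplified operation:
     * 1) copy the alter descriptor's values onto the connector object,
     * 2) make one metastore call to persist it (stubbed here as a return).
     */
    public static DataConnector execute(DataConnector connector, Alteration alteration) {
        alteration.doAlteration(connector); // step 1: mutate in place
        return connector;                   // step 2: stand-in for the HMS alter call
    }
}
```

The design change is that freshness checking (allowEventReplacementInto) no longer belongs in this path; every alter unconditionally mutates and persists.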
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/owner/AlterDataConnectorSetOwnerAnalyzer.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/owner/AlterDataConnectorSetOwnerAnalyzer.java
index 1fb34e1..4634f82 100644
--- 

[hive] 37/38: HIVE-24396: Remaining comments from the feedback (Naveen Gangam)

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 34720cfaf3ed3e6dd371fe7394ce9e1495ffa3a8
Author: Naveen Gangam 
AuthorDate: Mon Mar 29 12:28:37 2021 -0400

HIVE-24396: Remaining comments from the feedback (Naveen Gangam)
---
 .../properties/AlterDataConnectorSetPropertiesAnalyzer.java |  5 ++---
 .../apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java   |  2 +-
 .../org/apache/hadoop/hive/metastore/cache/CachedStore.java |  6 --
 .../dataconnector/JDBCConnectorProviderFactory.java | 13 -
 .../dataconnector/jdbc/AbstractJDBCConnectorProvider.java   |  5 +++--
 5 files changed, 6 insertions(+), 25 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/properties/AlterDataConnectorSetPropertiesAnalyzer.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/properties/AlterDataConnectorSetPropertiesAnalyzer.java
index 1ddb73f..e8fb1b9 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/properties/AlterDataConnectorSetPropertiesAnalyzer.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/properties/AlterDataConnectorSetPropertiesAnalyzer.java
@@ -43,11 +43,10 @@ public class AlterDataConnectorSetPropertiesAnalyzer extends AbstractAlterDataCo
     Map<String, String> dbProps = null;
 for (int i = 1; i < root.getChildCount(); i++) {
   ASTNode childNode = (ASTNode) root.getChild(i);
-  switch (childNode.getToken().getType()) {
-  case HiveParser.TOK_DATACONNECTORPROPERTIES:
+  if (childNode.getToken().getType() == HiveParser.TOK_DATACONNECTORPROPERTIES) {
 dbProps = getProps((ASTNode) childNode.getChild(0));
 break;
-  default:
+  } else {
 throw new SemanticException("Unrecognized token in ALTER CONNECTOR statement");
   }
 }
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java b/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java
index 3f58661..7f3b641 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java
@@ -1729,7 +1729,7 @@ public abstract class BaseSemanticAnalyzer {
 try {
   connector = db.getDataConnector(dcName);
 } catch (Exception e) {
-  throw new SemanticException(e.getMessage(), e);
+  throw new SemanticException(e);
 }
 if (connector == null && throwException) {
   throw new 
SemanticException(ErrorMsg.DATACONNECTOR_NOT_EXISTS.getMsg(dcName));
diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java
 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java
index b792f78..253031d 100644
--- 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java
+++ 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java
@@ -1165,12 +1165,6 @@ public class CachedStore implements RawStore, 
Configurable {
 
   @Override public void createDataConnector(DataConnector connector) throws 
InvalidObjectException, MetaException {
 rawStore.createDataConnector(connector);
-// in case of event based cache update, cache will be updated during 
commit.
-/*
-if (!canUseEvents) {
-  sharedCache.addDatabaseToCache(connector);
-}
- */
   }
 
   @Override public DataConnector getDataConnector(String dcName) throws 
NoSuchObjectException {
diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/JDBCConnectorProviderFactory.java
 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/JDBCConnectorProviderFactory.java
index b1ebfe0..10ab2bf 100644
--- 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/JDBCConnectorProviderFactory.java
+++ 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/JDBCConnectorProviderFactory.java
@@ -14,19 +14,6 @@ public class JDBCConnectorProviderFactory {
 switch(connector.getType().toLowerCase()) {
 case MYSQL_TYPE:
   provider = new MySQLConnectorProvider(dbName, connector);
-  /*
-  try {
-Class.forName(driverClassName);
-handle = DriverManager.getConnection(jdbcUrl, username, password);
-isOpen = true;
-  } catch (ClassNotFoundException cnfe) {
-LOG.warn("Driver class not found in classpath:" + driverClassName);
-throw new RuntimeException("Driver class not found:" + 
driverClassName);
-  } catch (SQLException sqle) {
-LOG.warn("Could not connect to remote data source at " + jdbcUrl);
-   

[hive] 35/38: HIVE-24396: Changes from additional feedback from code review (Naveen Gangam)

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 53edee9d42af1cc09660e0bd63c44ac1f2b6d7c7
Author: Naveen Gangam 
AuthorDate: Fri Mar 26 10:00:07 2021 -0400

HIVE-24396: Changes from additional feedback from code review (Naveen 
Gangam)
---
 .../ddl/database/create/CreateDatabaseOperation.java   |  4 +++-
 .../ql/ddl/database/desc/DescDatabaseOperation.java| 18 +++---
 .../alter/AbstractAlterDataConnectorOperation.java |  4 ++--
 .../alter/url/AlterDataConnectorSetUrlOperation.java   | 11 +++
 .../show/ShowDataConnectorsOperation.java  |  2 +-
 .../java/org/apache/hadoop/hive/ql/metadata/Hive.java  |  8 ++--
 ql/src/test/queries/clientpositive/dataconnector.q |  6 +++---
 .../hadoop/hive/metastore/HiveMetaStoreClient.java |  2 +-
 .../apache/hadoop/hive/metastore/IMetaStoreClient.java |  2 +-
 .../jdbc/AbstractJDBCConnectorProvider.java|  4 +++-
 .../dataconnector/jdbc/DerbySQLConnectorProvider.java  |  2 +-
 .../dataconnector/jdbc/MySQLConnectorProvider.java |  2 +-
 .../hive/metastore/HiveMetaStoreClientPreCatalog.java  |  2 +-
 .../hadoop/hive/metastore/TestHiveMetaStore.java   | 10 +++---
 14 files changed, 36 insertions(+), 41 deletions(-)

diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseOperation.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseOperation.java
index d02b039..c4961b2 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseOperation.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseOperation.java
@@ -58,10 +58,12 @@ public class CreateDatabaseOperation extends 
DDLOperation<CreateDatabaseDesc> {
 if 
(database.getLocationUri().equalsIgnoreCase(database.getManagedLocationUri())) {
   throw new HiveException("Managed and external locations for database 
cannot be the same");
 }
-  } else {
+  } else if (desc.getDatabaseType() == DatabaseType.REMOTE) {
 makeLocationQualified(database);
 database.setConnector_name(desc.getConnectorName());
 database.setRemote_dbname(desc.getRemoteDbName());
+  } else { // should never be here
+throw new HiveException("Unsupported database type " + 
database.getType() + " for " + database.getName());
   }
   context.getDb().createDatabase(database, desc.getIfNotExists());
 } catch (AlreadyExistsException ex) {
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseOperation.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseOperation.java
index 0a19ccc..332e36e 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseOperation.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseOperation.java
@@ -32,6 +32,8 @@ import org.apache.hadoop.hive.ql.ddl.DDLOperationContext;
 import org.apache.hadoop.hive.ql.ddl.ShowUtils;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
 
+import static org.apache.hadoop.hive.metastore.api.DatabaseType.NATIVE;
+
 /**
  * Operation process of describing a database.
  */
@@ -49,25 +51,27 @@ public class DescDatabaseOperation extends 
DDLOperation<DescDatabaseDesc> {
   }
 
   SortedMap<String, String> params = null;
+  String location = "";
   if (desc.isExtended()) {
 params = new TreeMap<>(database.getParameters());
   }
 
-  String location = "";
   DescDatabaseFormatter formatter = 
DescDatabaseFormatter.getFormatter(context.getConf());
-  if (database.getType() == DatabaseType.NATIVE) {
+  switch(database.getType()) {
+  case NATIVE:
 location = database.getLocationUri();
 if (HiveConf.getBoolVar(context.getConf(), 
HiveConf.ConfVars.HIVE_IN_TEST)) {
   location = "location/in/test";
 }
-// database.setRemote_dbname("");
-// database.setConnector_name("");
-
 formatter.showDatabaseDescription(outStream, database.getName(), 
database.getDescription(), location,
-  database.getManagedLocationUri(), database.getOwnerName(), 
database.getOwnerType(), params, "", "");
-  } else {
+database.getManagedLocationUri(), database.getOwnerName(), 
database.getOwnerType(), params, "", "");
+break;
+  case REMOTE:
 formatter.showDatabaseDescription(outStream, database.getName(), 
database.getDescription(), "", "",
   database.getOwnerName(), database.getOwnerType(), params, 
database.getConnector_name(), database.getRemote_dbname());
+break;
+  default:
+  throw new HiveException("Unsupported database type " + 
database.getType() + " for " + database.getName());
   }
 } catch (Exception e) {
   throw new HiveException(e, ErrorMsg.GENERIC_ERROR);
diff --git 

[hive] 28/38: HIVE-24396: Database name for remote table should be set to hive dbname not the scoped dbname

2021-04-06 Thread ngangam

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 2b4fa4e48f19f5c5ed6807ebcbe3d973b5398f88
Author: Naveen Gangam 
AuthorDate: Tue Dec 1 15:59:57 2020 -0500

HIVE-24396: Database name for remote table should be set to hive dbname not 
the scoped dbname
---
 .../hive/metastore/dataconnector/AbstractDataConnectorProvider.java  | 1 -
 1 file changed, 1 deletion(-)

diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/AbstractDataConnectorProvider.java
 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/AbstractDataConnectorProvider.java
index 64653b3..5375dfd 100644
--- 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/AbstractDataConnectorProvider.java
+++ 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/AbstractDataConnectorProvider.java
@@ -114,7 +114,6 @@ public abstract class AbstractDataConnectorProvider 
implements IDataConnectorPro
 Table table = new Table();
 table.setTableName(tableName);
 table.setTableType(TableType.EXTERNAL_TABLE.toString());
-table.setDbName(scoped_db);
 table.setSd(sd);
 // set table properties that subclasses can fill-in
 table.setParameters(new HashMap<String, String>());


[hive] 33/38: HIVE-24396: Cleanup and one test failure (Naveen Gangam)

2021-04-06 Thread ngangam

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 14283d3260898a1f901e603ccc8985231fc3fd16
Author: Naveen Gangam 
AuthorDate: Fri Mar 19 17:33:23 2021 -0400

HIVE-24396: Cleanup and one test failure (Naveen Gangam)
---
 .../hadoop/hive/ql/ddl/database/create/CreateDatabaseDesc.java | 7 ---
 .../java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java  | 2 +-
 2 files changed, 1 insertion(+), 8 deletions(-)

diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseDesc.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseDesc.java
index 1590133..f458cdc 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseDesc.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseDesc.java
@@ -103,13 +103,6 @@ public class CreateDatabaseDesc implements DDLDesc, 
Serializable {
   @Explain(displayName="database type")
   public DatabaseType getDatabaseType() {
 return dbType;
-/*
-if (dbType == DatabaseType.NATIVE)
-  return "NATIVE";
-else
-  return "REMOTE";
-
- */
   }
 
   @Explain(displayName="connector name")
diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
index 05ae218..f8b337d 100644
--- 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
+++ 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
@@ -434,7 +434,7 @@ class MetaStoreDirectSql {
   }
   
db.setManagedLocationUri(MetastoreDirectSqlUtils.extractSqlString(dbline[8]));
   String dbType = MetastoreDirectSqlUtils.extractSqlString(dbline[9]);
-  if (dbType != null && dbType.equalsIgnoreCase("REMOTE")) {
+  if (dbType != null && 
dbType.equalsIgnoreCase(DatabaseType.REMOTE.name())) {
 db.setType(DatabaseType.REMOTE);
 
db.setConnector_name(MetastoreDirectSqlUtils.extractSqlString(dbline[10]));
 
db.setRemote_dbname(MetastoreDirectSqlUtils.extractSqlString(dbline[11]));


[hive] 31/38: HIVE-24396: Duplicate SQL statements in derby upgrade script (Naveen Gangam)

2021-04-06 Thread ngangam

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 5f5ec66cf9fe17346e1419c08b28ffa372e453fe
Author: Naveen Gangam 
AuthorDate: Thu Mar 4 18:33:54 2021 -0500

HIVE-24396: Duplicate SQL statements in derby upgrade script (Naveen Gangam)
---
 .../metastore-server/src/main/sql/derby/upgrade-3.2.0-to-4.0.0.derby.sql | 1 -
 1 file changed, 1 deletion(-)

diff --git 
a/standalone-metastore/metastore-server/src/main/sql/derby/upgrade-3.2.0-to-4.0.0.derby.sql
 
b/standalone-metastore/metastore-server/src/main/sql/derby/upgrade-3.2.0-to-4.0.0.derby.sql
index a31e663..3dca262 100644
--- 
a/standalone-metastore/metastore-server/src/main/sql/derby/upgrade-3.2.0-to-4.0.0.derby.sql
+++ 
b/standalone-metastore/metastore-server/src/main/sql/derby/upgrade-3.2.0-to-4.0.0.derby.sql
@@ -148,7 +148,6 @@ ALTER TABLE COMPLETED_COMPACTIONS ADD CC_INITIATOR_VERSION 
varchar(128);
 ALTER TABLE COMPLETED_COMPACTIONS ADD CC_WORKER_VERSION varchar(128);
 
 -- HIVE-24396
-CREATE TABLE "APP"."DATACONNECTORS" ("DC_NAME" VARCHAR(128) NOT NULL, "TYPE" 
VARCHAR(128) NOT NULL, "COMMENT" VARCHAR(256), "OWNER_NAME" VARCHAR(256), 
"OWNER_TYPE" VARCHAR(10), "CREATE_TIME" INTEGER);
 CREATE TABLE "APP"."DATACONNECTORS" ("DC_NAME" VARCHAR(128) NOT NULL, "TYPE" 
VARCHAR(128) NOT NULL, "COMMENT" VARCHAR(256), "OWNER_NAME" VARCHAR(256), 
"OWNER_TYPE" VARCHAR(10), "CREATE_TIME" INTEGER NOT NULL);
 CREATE TABLE "APP"."DATACONNECTOR_PARAMS" ("DC_NAME" VARCHAR(128) NOT NULL, 
"PARAM_KEY" VARCHAR(180) NOT NULL, "PARAM_VALUE" VARCHAR(4000), "COMMENT" 
VARCHAR(256));
 ALTER TABLE "APP"."DBS" ADD COLUMN "TYPE" VARCHAR(32) DEFAULT 'NATIVE' NOT 
NULL;


[hive] 32/38: HIVE-24396: Incorporating feedback from the initial review (Naveen Gangam)

2021-04-06 Thread ngangam

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit b9150e1c571bf050f879b29425b5a3b09e3a740a
Author: Naveen Gangam 
AuthorDate: Fri Mar 19 00:18:02 2021 -0400

HIVE-24396: Incorporating feedback from the initial review (Naveen Gangam)

Test fix for test failure, not to be committed
---
 .../hcatalog/listener/DummyRawStoreFailEvent.java  |  4 ++--
 .../org/apache/hadoop/hive/ql/parse/HiveParser.g   |  1 -
 .../database/create/CreateDatabaseAnalyzer.java|  8 +++
 .../ddl/database/desc/DescDatabaseFormatter.java   |  4 ++--
 .../apache/hadoop/hive/metastore/HMSHandler.java   |  3 ++-
 .../apache/hadoop/hive/metastore/ObjectStore.java  |  8 +++
 .../org/apache/hadoop/hive/metastore/RawStore.java |  4 ++--
 .../hadoop/hive/metastore/cache/CachedStore.java   |  4 ++--
 .../metastore/DummyRawStoreControlledCommit.java   | 22 ++---
 .../metastore/DummyRawStoreForJdoConnection.java   |  2 +-
 .../upgrade-3.1.3000-to-4.0.0.postgres.sql | 28 ++
 11 files changed, 48 insertions(+), 40 deletions(-)

diff --git 
a/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java
 
b/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java
index 2d7ad24..7d7360b 100644
--- 
a/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java
+++ 
b/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java
@@ -246,8 +246,8 @@ public class DummyRawStoreFailEvent implements RawStore, 
Configurable {
   }
 
   @Override
-  public List<String> getAllDataConnectors() throws MetaException {
-return objectStore.getAllDataConnectors();
+  public List<String> getAllDataConnectorNames() throws MetaException {
+return objectStore.getAllDataConnectorNames();
   }
 
   @Override
diff --git a/parser/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g 
b/parser/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
index 1375e4a..2163ec9 100644
--- a/parser/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
+++ b/parser/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
@@ -364,7 +364,6 @@ TOK_ALTERDATABASE_PROPERTIES;
 TOK_ALTERDATABASE_OWNER;
 TOK_ALTERDATABASE_LOCATION;
 TOK_ALTERDATABASE_MANAGEDLOCATION;
-TOK_DATACONNECTORPROPERTIES;
 TOK_ALTERDATACONNECTOR_PROPERTIES;
 TOK_ALTERDATACONNECTOR_OWNER;
 TOK_ALTERDATACONNECTOR_URL;
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseAnalyzer.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseAnalyzer.java
index d342db0..14ab9bf 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseAnalyzer.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseAnalyzer.java
@@ -49,7 +49,7 @@ public class CreateDatabaseAnalyzer extends 
BaseSemanticAnalyzer {
 String comment = null;
 String locationUri = null;
 String managedLocationUri = null;
-String type = "NATIVE";
+String type = DatabaseType.NATIVE.name();
 String connectorName = null;
 Map<String, String> props = null;
 
@@ -74,12 +74,10 @@ public class CreateDatabaseAnalyzer extends 
BaseSemanticAnalyzer {
 outputs.add(toWriteEntity(managedLocationUri));
 break;
   case HiveParser.TOK_DATACONNECTOR:
-type = "REMOTE";
-// locationUri = "REMOTE_DATABASE"; // TODO
+type = DatabaseType.REMOTE.name();
 ASTNode nextNode = (ASTNode) root.getChild(i);
 connectorName = ((ASTNode)nextNode).getChild(0).getText();
 outputs.add(toWriteEntity(connectorName));
-// outputs.remove(toWriteEntity(locationUri));
 if (managedLocationUri != null) {
   outputs.remove(toWriteEntity(managedLocationUri));
   managedLocationUri = null;
@@ -92,7 +90,7 @@ public class CreateDatabaseAnalyzer extends 
BaseSemanticAnalyzer {
 
 CreateDatabaseDesc desc = null;
 Database database = new Database(databaseName, comment, locationUri, 
props);
-if (type.equalsIgnoreCase("NATIVE")) {
+if (type.equalsIgnoreCase(DatabaseType.NATIVE.name())) {
   desc = new CreateDatabaseDesc(databaseName, comment, locationUri, 
managedLocationUri, ifNotExists, props);
   database.setType(DatabaseType.NATIVE);
   // database = new Database(databaseName, comment, locationUri, props);
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseFormatter.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseFormatter.java
index 5e10c3b..327aa5c 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseFormatter.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseFormatter.java
@@ -71,10 +71,10 @@ abstract class 

[hive] 29/38: HIVE-24396: Fix for NPE in get_database_core with null catalog name

2021-04-06 Thread ngangam

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 506621c86e9d0c9751c256ebdac78db75e0720ce
Author: Naveen Gangam 
AuthorDate: Tue Dec 1 22:26:42 2020 -0500

HIVE-24396: Fix for NPE in get_database_core with null catalog name
---
 .../main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java  | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
index 638b426..d66d928 100644
--- 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
+++ 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
@@ -2376,6 +2376,10 @@ public class HiveMetaStore extends ThriftHiveMetastore {
 + " is not a valid object name");
   }
 
+  if (!tbl.isSetCatName()) {
+tbl.setCatName(getDefaultCatalog(conf));
+  }
+
   Database db = get_database_core(tbl.getCatName(), tbl.getDbName());
   if (db != null && db.getType().equals(DatabaseType.REMOTE)) {
 
DataConnectorProviderFactory.getDataConnectorProvider(db).createTable(tbl);
@@ -2427,9 +2431,6 @@ public class HiveMetaStore extends ThriftHiveMetastore {
   boolean success = false, madeDir = false;
   boolean isReplicated = false;
   try {
-if (!tbl.isSetCatName()) {
-  tbl.setCatName(getDefaultCatalog(conf));
-}
 firePreEvent(new PreCreateTableEvent(tbl, this));
 
 ms.openTransaction();
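The fix hoists the catalog-name defaulting ahead of the `get_database_core` call, so a null catalog can no longer reach the lookup and trigger the NPE. A minimal standalone sketch of that ordering, with an illustrative class and constant standing in for the actual `HiveMetaStore` members:

```java
class CatalogDefaulting {
    static final String DEFAULT_CATALOG = "hive"; // stand-in for getDefaultCatalog(conf)

    // Default the catalog name before any lookup that dereferences it,
    // mirroring the hoisted `if (!tbl.isSetCatName())` block.
    static String resolveCatalog(String catName) {
        return (catName == null || catName.isEmpty()) ? DEFAULT_CATALOG : catName;
    }
}
```

The key point is ordering, not the defaulting itself: the same check previously ran inside the transaction body, after the database had already been looked up with a possibly-null catalog.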


[hive] 26/38: HIVE-24396: get_table_core() to return null instead of exception

2021-04-06 Thread ngangam

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 5c98e3073e52431bcece39cdfe7e655abf1917b0
Author: Naveen Gangam 
AuthorDate: Tue Dec 1 15:50:12 2020 -0500

HIVE-24396: get_table_core() to return null instead of exception
---
 .../org/apache/hadoop/hive/metastore/HiveMetaStore.java   | 15 +++
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
index 26552d1..638b426 100644
--- 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
+++ 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
@@ -3739,12 +3739,19 @@ public class HiveMetaStore extends ThriftHiveMetastore {
   Table t = null;
   try {
 db = get_database_core(catName, dbname);
-if (db != null) {
-  if (db.getType().equals(DatabaseType.REMOTE)) {
-return 
DataConnectorProviderFactory.getDataConnectorProvider(db).getTable(name);
+  } catch (Exception e) { /* appears exception is not thrown currently if 
db doesnt exist */ }
+
+  if (db != null) {
+if (db.getType().equals(DatabaseType.REMOTE)) {
+  t = 
DataConnectorProviderFactory.getDataConnectorProvider(db).getTable(name);
+  if (t == null) {
+throw new NoSuchObjectException(TableName.getQualified(catName, 
dbname, name) +
+  " table not found");
   }
+  t.setDbName(dbname);
+  return t;
 }
-  } catch (Exception e) { /* appears exception is not thrown currently if 
db doesnt exist */ }
+  }
 
   try {
 t = getMS().getTable(catName, dbname, name, writeIdList);
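The restructured flow asks the remote connector first and converts its null result into `NoSuchObjectException` before falling back to the local metastore. A hedged standalone sketch of that conversion, using plain JDK types in place of the metastore classes:

```java
import java.util.NoSuchElementException;
import java.util.function.UnaryOperator;

class RemoteFirstLookup {
    // remote.apply(name) returns null when the remote side has no such table;
    // this helper turns that null into an exception, as the commit does with
    // NoSuchObjectException (NoSuchElementException is a stand-in here).
    static String lookupRemote(String name, UnaryOperator<String> remote) {
        String t = remote.apply(name);
        if (t == null) {
            throw new NoSuchElementException(name + " table not found");
        }
        return t;
    }
}
```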


[hive] 27/38: HIVE-24396: Fix in connector provider to return null instead of blank Table

2021-04-06 Thread ngangam

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 4319d29a2e2c3cd19c2c4249b32675229c0f4c9c
Author: Naveen Gangam 
AuthorDate: Tue Dec 1 15:57:11 2020 -0500

HIVE-24396: Fix in connector provider to return null instead of blank Table
---
 .../metastore/dataconnector/jdbc/AbstractJDBCConnectorProvider.java | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/AbstractJDBCConnectorProvider.java
 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/AbstractJDBCConnectorProvider.java
index 12ce799..4027555 100644
--- 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/AbstractJDBCConnectorProvider.java
+++ 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/AbstractJDBCConnectorProvider.java
@@ -173,7 +173,6 @@ public abstract class AbstractJDBCConnectorProvider extends 
AbstractDataConnecto
   // rs = fetchTableMetadata(tableName);
   rs = fetchTableViaDBMetaData(tableName);
   List<FieldSchema> cols = new ArrayList<>();
-  // TODO throw exception is RS is empty
   while (rs.next()) {
 FieldSchema fs = new FieldSchema();
 fs.setName(rs.getString("COLUMN_NAME"));
@@ -182,6 +181,11 @@ public abstract class AbstractJDBCConnectorProvider 
extends AbstractDataConnecto
 cols.add(fs);
   }
 
+  if (cols.size() == 0) {
+// table does not exists or could not be fetched
+return null;
+  }
+
   table = buildTableFromColsList(tableName, cols);
   //Setting the table properties.
   table.getParameters().put(JDBC_DATABASE_TYPE, this.type);
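The guard added above treats an empty column list from the metadata query as "table does not exist" and returns null rather than building a blank `Table`. The predicate reduces to a one-liner; this hypothetical helper (not the provider's real API) captures it:

```java
import java.util.List;

class EmptyResultGuard {
    // Returns null when metadata produced no columns, signalling a missing
    // (or unfetchable) table to the caller.
    static List<String> tableColsOrNull(List<String> cols) {
        return (cols == null || cols.isEmpty()) ? null : cols;
    }
}
```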


[hive] 24/38: HIVE-24396: Unhandled longvarchar and integer types for derby

2021-04-06 Thread ngangam

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 3e41782d7c23e9e76a4e2f5f0876b4fd6e6b9a8d
Author: Naveen Gangam 
AuthorDate: Tue Dec 1 01:02:29 2020 -0500

HIVE-24396: Unhandled longvarchar and integer types for derby
---
 .../apache/hadoop/hive/metastore/HiveMetaStore.java   | 17 -
 .../dataconnector/jdbc/DerbySQLConnectorProvider.java | 19 ++-
 .../dataconnector/jdbc/MySQLConnectorProvider.java|  1 +
 .../jdbc/PostgreSQLConnectorProvider.java |  3 +++
 4 files changed, 30 insertions(+), 10 deletions(-)

diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
index 88261e2..7288ca3 100644
--- 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
+++ 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
@@ -2366,15 +2366,9 @@ public class HiveMetaStore extends ThriftHiveMetastore {
   List<String> processorCapabilities = req.getProcessorCapabilities();
   String processorId = req.getProcessorIdentifier();
 
-  Database db = null;
-  try {
-db = ms.getDatabase(tbl.getCatName(), tbl.getDbName());
-  } catch (Exception e) {
-LOG.info("Database {} does exist, exception: {}", tbl.getDbName(), 
e.getMessage());
-return;
-  }
+  Database db = get_database_core(tbl.getCatName(), tbl.getDbName());
   if (db != null && db.getType().equals(DatabaseType.REMOTE)) {
-boolean success = 
DataConnectorProviderFactory.getDataConnectorProvider(db).createTable(tbl);
+
DataConnectorProviderFactory.getDataConnectorProvider(db).createTable(tbl);
 return;
   }
 
@@ -4533,7 +4527,12 @@ public class HiveMetaStore extends ThriftHiveMetastore {
   Database db = null;
   try {
 ms.openTransaction();
-db = ms.getDatabase(catName, dbName);
+try {
+  db = ms.getDatabase(catName, dbName);
+} catch (NoSuchObjectException notExists) {
+  throw new InvalidObjectException("Unable to add partitions because "
+  + "database or table " + dbName + "." + tblName + " does not 
exist");
+}
 if (db.getType() == DatabaseType.REMOTE)
   throw new MetaException("Operation add_partitions_pspec not 
supported on tables in REMOTE database");
 tbl = ms.getTable(catName, dbName, tblName, null);
diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/DerbySQLConnectorProvider.java
 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/DerbySQLConnectorProvider.java
index 1cf90bc..c212098 100644
--- 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/DerbySQLConnectorProvider.java
+++ 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/DerbySQLConnectorProvider.java
@@ -1,5 +1,6 @@
 package org.apache.hadoop.hive.metastore.dataconnector.jdbc;
 
+import org.apache.hadoop.hive.metastore.ColumnType;
 import org.apache.hadoop.hive.metastore.api.DataConnector;
 import org.apache.hadoop.hive.metastore.api.MetaException;
 import org.apache.hadoop.hive.metastore.api.Table;
@@ -63,7 +64,23 @@ public class DerbySQLConnectorProvider extends 
AbstractJDBCConnectorProvider {
 
   protected String getDataType(String dbDataType, int size) {
 String mappedType = super.getDataType(dbDataType, size);
-// map any db specific types here. or return
+if (!mappedType.equalsIgnoreCase(ColumnType.VOID_TYPE_NAME)) {
+  return mappedType;
+}
+
+// map any db specific types here.
+switch (dbDataType.toLowerCase())
+{
+case "integer":
+  mappedType = ColumnType.INT_TYPE_NAME;
+  break;
+case "long varchar":
+  mappedType = ColumnType.STRING_TYPE_NAME;
+  break;
+default:
+  mappedType = ColumnType.VOID_TYPE_NAME;
+  break;
+}
 return mappedType;
   }
 }
diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/MySQLConnectorProvider.java
 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/MySQLConnectorProvider.java
index cb80c4f..17d5a8b 100644
--- 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/MySQLConnectorProvider.java
+++ 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/MySQLConnectorProvider.java
@@ -80,6 +80,7 @@ public class 
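The Derby provider's `getDataType` override shown above falls back to its own mapping only when the generic JDBC mapping resolves to VOID. A standalone sketch of that two-step resolution, with string literals standing in for the `ColumnType` constants:

```java
class DerbyTypeMapping {
    // genericMapping is what the superclass getDataType returned; only an
    // unresolved ("void") result consults the Derby-specific table.
    static String mapDerbyType(String dbDataType, String genericMapping) {
        if (!"void".equalsIgnoreCase(genericMapping)) {
            return genericMapping; // already resolved by the generic mapping
        }
        switch (dbDataType.toLowerCase()) {
        case "integer":
            return "int";    // Derby INTEGER
        case "long varchar":
            return "string"; // Derby LONG VARCHAR
        default:
            return "void";   // still unmapped
        }
    }
}
```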

[hive] 11/38: HIVE-24396: Added schema changes for Oracle Made DBS.TYPE NOT NULL in all scripts Added Type support to DatabaseBuilder Added Unit test for DataConnector Added Unit test REMOTE Database

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 7d91a9a297cf9409383a712dcb20bf55c0886b3d
Author: Naveen Gangam 
AuthorDate: Fri Nov 20 11:34:53 2020 -0500

HIVE-24396: Added schema changes for Oracle
Made DBS.TYPE NOT NULL in all scripts
Added Type support to DatabaseBuilder
Added Unit test for DataConnector
Added Unit test REMOTE Database
Fixed test failures in TestSchemaToolForMetaStore
---
 .../src/main/thrift/hive_metastore.thrift  |  12 +-
 .../metastore/client/builder/DatabaseBuilder.java  |  25 +++-
 .../src/main/sql/derby/hive-schema-4.0.0.derby.sql |   2 +-
 .../sql/derby/upgrade-3.2.0-to-4.0.0.derby.sql |   3 +-
 .../src/main/sql/mysql/hive-schema-4.0.0.mysql.sql |   2 +-
 .../sql/mysql/upgrade-3.2.0-to-4.0.0.mysql.sql |   4 +-
 .../main/sql/oracle/hive-schema-4.0.0.oracle.sql   |  25 +++-
 .../sql/oracle/upgrade-3.2.0-to-4.0.0.oracle.sql   |  25 
 .../sql/postgres/hive-schema-4.0.0.postgres.sql|   2 +-
 .../postgres/upgrade-3.2.0-to-4.0.0.postgres.sql   |   4 +-
 .../hadoop/hive/metastore/TestHiveMetaStore.java   | 136 +
 .../schematool/TestSchemaToolForMetastore.java |  18 +--
 12 files changed, 233 insertions(+), 25 deletions(-)

diff --git 
a/standalone-metastore/metastore-common/src/main/thrift/hive_metastore.thrift 
b/standalone-metastore/metastore-common/src/main/thrift/hive_metastore.thrift
index a91a140..9e43101 100644
--- 
a/standalone-metastore/metastore-common/src/main/thrift/hive_metastore.thrift
+++ 
b/standalone-metastore/metastore-common/src/main/thrift/hive_metastore.thrift
@@ -919,12 +919,12 @@ struct GetPartitionsByNamesResult {
 }
 
 struct DataConnector {
-  1: string name
-  2: string type
-  3: string url
-  4: optional string description
-  5: optional map<string,string> parameters
-  6: optional string ownerName
+  1: string name,
+  2: string type,
+  3: string url,
+  4: optional string description,
+  5: optional map<string,string> parameters,
+  6: optional string ownerName,
   7: optional PrincipalType ownerType,
   8: optional i32 createTime
 }
diff --git 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/client/builder/DatabaseBuilder.java
 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/client/builder/DatabaseBuilder.java
index 806bf0f..8cd4b85 100644
--- 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/client/builder/DatabaseBuilder.java
+++ 
b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/client/builder/DatabaseBuilder.java
@@ -44,6 +44,7 @@ public class DatabaseBuilder {
   private PrincipalType ownerType;
   private int createTime;
   private DatabaseType type;
+  private String connectorName, remoteDBName;
 
   public DatabaseBuilder() {
   }
@@ -109,6 +110,16 @@ public class DatabaseBuilder {
 return this;
   }
 
+  public DatabaseBuilder setConnectorName(String connectorName) {
+this.connectorName = connectorName;
+return this;
+  }
+
+  public DatabaseBuilder setRemoteDBName(String remoteDBName) {
+this.remoteDBName = remoteDBName;
+return this;
+  }
+
   public Database build(Configuration conf) throws MetaException {
 if (name == null) throw new MetaException("You must name the database");
 if (catalogName == null) catalogName = 
MetaStoreUtils.getDefaultCatalog(conf);
@@ -122,7 +133,19 @@ public class DatabaseBuilder {
   db.setOwnerName(ownerName);
   if (ownerType == null) ownerType = PrincipalType.USER;
   db.setOwnerType(ownerType);
-  if (type == null) type = DatabaseType.NATIVE;
+  if (type == null) {
+type = DatabaseType.NATIVE;
+if (connectorName != null || remoteDBName != null) {
+  throw new MetaException("connector name or remoteDBName cannot be set for database of type NATIVE");
+}
+  } else if (type == DatabaseType.REMOTE) {
+if (connectorName == null)
+  throw new MetaException("connector name cannot be null for database of type REMOTE");
+db.setConnector_name(connectorName);
+if (remoteDBName != null) {
+  db.setRemote_dbname(remoteDBName);
+}
+  }
   db.setType(type);
   return db;
 } catch (IOException e) {
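The validation the commit adds to `DatabaseBuilder.build()` has a simple rule: a NATIVE database must not carry connector fields, while a REMOTE database must name a connector. A minimal, self-contained sketch of that rule — `DatabaseType` and `validate()` below are simplified stand-ins, not the real metastore classes:

```java
// Illustrative sketch only: DatabaseType and validate() are simplified
// stand-ins for the metastore classes, not Hive's API.
enum DatabaseType { NATIVE, REMOTE }

public class DbTypeValidation {
    // Returns an error message, or null when the combination is legal.
    static String validate(DatabaseType type, String connectorName, String remoteDbName) {
        if (type == null) {
            type = DatabaseType.NATIVE; // default, as in DatabaseBuilder.build()
        }
        if (type == DatabaseType.NATIVE) {
            if (connectorName != null || remoteDbName != null) {
                return "connector name or remoteDBName cannot be set for database of type NATIVE";
            }
        } else if (connectorName == null) { // REMOTE
            return "connector name cannot be null for database of type REMOTE";
        }
        return null;
    }

    public static void main(String[] args) {
        if (validate(null, null, null) != null) throw new AssertionError();
        if (validate(null, "mysql_test", null) == null) throw new AssertionError();
        if (validate(DatabaseType.REMOTE, null, "db1") == null) throw new AssertionError();
        if (validate(DatabaseType.REMOTE, "mysql_test", "db1") != null) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Note that `remoteDBName` stays optional even for REMOTE databases, matching the patch, which only sets it on the `Database` object when present.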
diff --git a/standalone-metastore/metastore-server/src/main/sql/derby/hive-schema-4.0.0.derby.sql b/standalone-metastore/metastore-server/src/main/sql/derby/hive-schema-4.0.0.derby.sql
index f51b712..9e9e7ea 100644
--- a/standalone-metastore/metastore-server/src/main/sql/derby/hive-schema-4.0.0.derby.sql
+++ b/standalone-metastore/metastore-server/src/main/sql/derby/hive-schema-4.0.0.derby.sql
@@ -25,7 +25,7 @@ CREATE TABLE "APP"."DBS" (
   "CTLG_NAME" VARCHAR(256) NOT NULL DEFAULT 'hive',
   

[hive] 20/38: Retain case on table names during query processing

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 0b60db92c316c6a08e5808755ae99cb38ee238e4
Author: Naveen Gangam 
AuthorDate: Wed Nov 25 17:28:14 2020 -0500

Retain case on table names during query processing
---
 ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java b/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
index 13a1bae..9bb6689 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
@@ -1117,7 +1117,7 @@ public class SemanticAnalyzer extends BaseSemanticAnalyzer {
 
 ASTNode tableTree = (ASTNode) (tabref.getChild(0));
 
-String tabIdName = getUnescapedName(tableTree).toLowerCase();
+String tabIdName = getUnescapedName(tableTree);
 
 String alias = findSimpleTableName(tabref, aliasIndex);
 


[hive] 13/38: HIVE-24396: Fix for drop database for remote databases

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 750449185cc8cff874d9ba70897b9a702c30726e
Author: Naveen Gangam 
AuthorDate: Fri Nov 20 20:32:53 2020 -0500

HIVE-24396: Fix for drop database for remote databases
---
 .../org/apache/hadoop/hive/metastore/HiveMetaStore.java | 17 ++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
index 696b89d..2eeb60a 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
@@ -1713,6 +1713,10 @@ public class HiveMetaStore extends ThriftHiveMetastore {
   try {
 ms.openTransaction();
 db = ms.getDatabase(catName, name);
+if (db.getType() == DatabaseType.REMOTE) {
+  success = drop_remote_database_core(ms, db);
+  return;
+}
 isReplicated = isDbReplicationTarget(db);
 
 if (!isInTest && ReplChangeManager.isSourceOfReplication(db)) {
@@ -1899,6 +1903,16 @@ public class HiveMetaStore extends ThriftHiveMetastore {
   }
 }
 
+private boolean drop_remote_database_core(RawStore ms, final Database db) throws MetaException, NoSuchObjectException {
+  boolean success = false;
+  firePreEvent(new PreDropDatabaseEvent(db, this));
+
+  if (ms.dropDatabase(db.getCatalogName(), db.getName())) {
+success = ms.commitTransaction();
+  }
+  return success;
+}
+
 @Override
 public void drop_database(final String dbName, final boolean deleteData, final boolean cascade)
 throws NoSuchObjectException, InvalidOperationException, MetaException {
@@ -1983,14 +1997,12 @@ public class HiveMetaStore extends ThriftHiveMetastore {
   }
 }
 
-// Assumes that the catalog has already been set.
 private void create_dataconnector_core(RawStore ms, final DataConnector connector)
 throws AlreadyExistsException, InvalidObjectException, MetaException {
   if (!MetaStoreUtils.validateName(connector.getName(), conf)) {
 throw new InvalidObjectException(connector.getName() + " is not a valid dataconnector name");
   }
 
-  // connector.setLocationUri(dbPath.toString());
   if (connector.getOwnerName() == null){
 try {
   connector.setOwnerName(SecurityUtils.getUGI().getShortUserName());
@@ -2001,7 +2013,6 @@ public class HiveMetaStore extends ThriftHiveMetastore {
   long time = System.currentTimeMillis()/1000;
   connector.setCreateTime((int) time);
   boolean success = false;
-  boolean madeDir = false;
   Map<String, String> transactionalListenersResponses = Collections.emptyMap();
   try {
 firePreEvent(new PreCreateDataConnectorEvent(connector, this));
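The fix in this commit branches early in the drop path: a REMOTE database owns no warehouse directories, so only its metadata is dropped and the filesystem/replication machinery is skipped entirely. A reduced sketch of that control flow — `DbType` and the returned action strings are illustrative stand-ins, not Hive classes:

```java
// Sketch of the REMOTE-vs-NATIVE branch added to drop_database_core.
// DbType and the action strings are illustrative stand-ins.
public class DropDatabaseSketch {
    enum DbType { NATIVE, REMOTE }

    static String dropDatabaseCore(DbType type) {
        if (type == DbType.REMOTE) {
            // Remote databases own no warehouse directories:
            // drop the metadata and return before any filesystem work.
            return "metadata-dropped";
        }
        // Native path continues on to delete the database location as well.
        return "metadata-dropped,location-deleted";
    }

    public static void main(String[] args) {
        if (!dropDatabaseCore(DbType.REMOTE).equals("metadata-dropped")) throw new AssertionError();
        if (!dropDatabaseCore(DbType.NATIVE).contains("location-deleted")) throw new AssertionError();
        System.out.println("ok");
    }
}
```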


[hive] 18/38: Build issue with EventMessage

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 23aafb738b8486cf0a14157355f5ab45c906e2de
Author: Naveen Gangam 
AuthorDate: Mon Nov 23 19:03:44 2020 -0500

Build issue with EventMessage
---
 .../java/org/apache/hadoop/hive/metastore/messaging/EventMessage.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/messaging/EventMessage.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/messaging/EventMessage.java
index c86466e..1b2fa57 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/messaging/EventMessage.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/messaging/EventMessage.java
@@ -68,7 +68,7 @@ public abstract class EventMessage {
 DELETE_TABLE_COLUMN_STAT(MessageBuilder.DELETE_TBL_COL_STAT_EVENT),
 UPDATE_PARTITION_COLUMN_STAT(MessageBuilder.UPDATE_PART_COL_STAT_EVENT),
 DELETE_PARTITION_COLUMN_STAT(MessageBuilder.DELETE_PART_COL_STAT_EVENT),
-COMMIT_COMPACTION(MessageBuilder.COMMIT_COMPACTION_EVENT);
+COMMIT_COMPACTION(MessageBuilder.COMMIT_COMPACTION_EVENT),
 CREATE_DATACONNECTOR(MessageBuilder.CREATE_DATACONNECTOR_EVENT),
 DROP_DATACONNECTOR(MessageBuilder.DROP_DATACONNECTOR_EVENT),
 ALTER_DATACONNECTOR(MessageBuilder.ALTER_DATACONNECTOR_EVENT);


[hive] 17/38: HIVE-24396: refactored code to Abstract class and providers share common code

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 3de6032b45890d46269d935f1e02d7bdab3f4cd1
Author: Naveen Gangam 
AuthorDate: Mon Nov 23 17:59:30 2020 -0500

HIVE-24396: refactored code to Abstract class and providers share common code
---
 .../JDBCConnectorProviderFactory.java  |  9 ++-
 .../jdbc/AbstractJDBCConnectorProvider.java| 13 ++--
 .../jdbc/DerbySQLConnectorProvider.java| 69 
 .../dataconnector/jdbc/MySQLConnectorProvider.java | 74 --
 .../jdbc/PostgreSQLConnectorProvider.java  | 71 -
 5 files changed, 102 insertions(+), 134 deletions(-)

diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/JDBCConnectorProviderFactory.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/JDBCConnectorProviderFactory.java
index 537fd2c..b1ebfe0 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/JDBCConnectorProviderFactory.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/JDBCConnectorProviderFactory.java
@@ -1,12 +1,11 @@
 package org.apache.hadoop.hive.metastore.dataconnector;
 
 import org.apache.hadoop.hive.metastore.api.DataConnector;
+import org.apache.hadoop.hive.metastore.dataconnector.jdbc.DerbySQLConnectorProvider;
 import org.apache.hadoop.hive.metastore.dataconnector.jdbc.MySQLConnectorProvider;
 import org.apache.hadoop.hive.metastore.dataconnector.jdbc.PostgreSQLConnectorProvider;
 
-import static org.apache.hadoop.hive.metastore.dataconnector.IDataConnectorProvider.MYSQL_TYPE;
-import static org.apache.hadoop.hive.metastore.dataconnector.IDataConnectorProvider.POSTGRES_TYPE;
-
+import static org.apache.hadoop.hive.metastore.dataconnector.IDataConnectorProvider.*;
 
 public class JDBCConnectorProviderFactory {
 
@@ -33,6 +32,10 @@ public class JDBCConnectorProviderFactory {
   provider = new PostgreSQLConnectorProvider(dbName, connector);
   break;
 
+case DERBY_TYPE:
+  provider = new DerbySQLConnectorProvider(dbName, connector);
+  break;
+
 default:
   throw new RuntimeException("Unsupported JDBC type");
 }
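The factory change above slots a `DERBY_TYPE` case into the existing switch: one provider subclass per connector type, selected by the connector's type string, with unknown types rejected. A simplified, self-contained sketch of that dispatch pattern — `Provider` and the type strings below are stand-ins, not Hive's interfaces:

```java
// Simplified sketch of JDBCConnectorProviderFactory's type dispatch,
// including the newly added derby case. Provider is an illustrative stand-in.
public class ProviderFactory {
    interface Provider { String describe(); }

    static Provider get(String dbName, String connectorType) {
        switch (connectorType) {
            case "mysql":    return () -> "mysql:" + dbName;
            case "postgres": return () -> "postgres:" + dbName;
            case "derby":    return () -> "derby:" + dbName; // newly supported type
            default:
                throw new RuntimeException("Unsupported JDBC type");
        }
    }

    public static void main(String[] args) {
        if (!get("db1", "derby").describe().equals("derby:db1")) throw new AssertionError();
        boolean threw = false;
        try { get("db1", "oracle"); } catch (RuntimeException e) { threw = true; }
        if (!threw) throw new AssertionError();
        System.out.println("ok");
    }
}
```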
diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/AbstractJDBCConnectorProvider.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/AbstractJDBCConnectorProvider.java
index 62a7786..12ce799 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/AbstractJDBCConnectorProvider.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/AbstractJDBCConnectorProvider.java
@@ -1,5 +1,6 @@
 package org.apache.hadoop.hive.metastore.dataconnector.jdbc;
 
+import org.apache.hadoop.hive.metastore.ColumnType;
 import org.apache.hadoop.hive.metastore.Warehouse;
 import org.apache.hadoop.hive.metastore.api.DataConnector;
 import org.apache.hadoop.hive.metastore.api.FieldSchema;
@@ -219,16 +220,13 @@ public abstract class AbstractJDBCConnectorProvider extends AbstractDataConnecto
 return rs;
   }
 
-  private String wrapSize(int size) {
+  protected String wrapSize(int size) {
 return "(" + size + ")";
   }
 
-  protected abstract String getDataType(String dbType, int size);
-
-  /*
-  private String getDataType(String mySqlType, int size) {
-//TODO: Geomentric, network, bit, array data types of postgresql needs to be supported.
-switch(mySqlType)
+  // subclasses call this first, anything that is not mappable by this code is mapped in the subclass
+  protected String getDataType(String mySqlType, int size) {
+switch(mySqlType.toLowerCase())
 {
 case "char":
   return ColumnType.CHAR_TYPE_NAME + wrapSize(size);
@@ -288,7 +286,6 @@ public abstract class AbstractJDBCConnectorProvider extends AbstractDataConnecto
   return ColumnType.VOID_TYPE_NAME;
 }
   }
- */
 
   @Override protected String getInputClass() {
 return JDBC_INPUTFORMAT_CLASS;
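The refactor above hoists a shared `getDataType` mapping into the abstract class (lower-casing the incoming type name first) and lets each subclass handle only the vendor-specific types the shared code cannot map. A self-contained sketch of that two-level mapping — the type names and the `"void"` fallback below are illustrative, not Hive's actual `ColumnType` constants:

```java
// Sketch of the shared-then-subclass type mapping in AbstractJDBCConnectorProvider.
// Type names and fallbacks are illustrative stand-ins.
public class TypeMapping {
    static String wrapSize(int size) {
        return "(" + size + ")";
    }

    // Base mapping shared by all JDBC providers; returns null when unmapped.
    static String baseDataType(String dbType, int size) {
        switch (dbType.toLowerCase()) { // case-insensitive, as in the patch
            case "char":    return "char" + wrapSize(size);
            case "varchar": return "varchar" + wrapSize(size);
            case "integer": return "int";
            default:        return null; // not mappable here
        }
    }

    // A subclass calls the base mapping first, then covers vendor-specific types.
    static String derbyDataType(String dbType, int size) {
        String mapped = baseDataType(dbType, size);
        if (mapped != null) return mapped;
        if (dbType.equalsIgnoreCase("clob")) return "string"; // vendor-specific case
        return "void"; // nothing matched
    }

    public static void main(String[] args) {
        if (!derbyDataType("VARCHAR", 20).equals("varchar(20)")) throw new AssertionError();
        if (!derbyDataType("clob", 0).equals("string")) throw new AssertionError();
        System.out.println("ok");
    }
}
```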
diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/DerbySQLConnectorProvider.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/DerbySQLConnectorProvider.java
new file mode 100644
index 000..1cf90bc
--- /dev/null
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/DerbySQLConnectorProvider.java
@@ -0,0 +1,69 @@
+package org.apache.hadoop.hive.metastore.dataconnector.jdbc;
+
+import 

[hive] 19/38: fix for 2 additional test failures

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit d66d5fc3480815fb39fad33f484a6a03119e2bf3
Author: Naveen Gangam 
AuthorDate: Tue Nov 24 00:16:49 2020 -0500

fix for 2 additional test failures
---
 .../plugin/sqlstd/Operation2Privilege.java |   8 +
 .../results/clientpositive/dataconnector.q.out | 205 -
 2 files changed, 8 insertions(+), 205 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java b/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java
index 3a10a06..2b9c7ab 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java
@@ -442,6 +442,14 @@ public class Operation2Privilege {
 op2Priv.put(HiveOperationType.CREATE_MAPPING, PrivRequirement.newIOPrivRequirement(null, null));
 op2Priv.put(HiveOperationType.ALTER_MAPPING, PrivRequirement.newIOPrivRequirement(null, null));
 op2Priv.put(HiveOperationType.DROP_MAPPING, PrivRequirement.newIOPrivRequirement(null, null));
+
+op2Priv.put(HiveOperationType.CREATEDATACONNECTOR, PrivRequirement.newIOPrivRequirement(null, ADMIN_PRIV_AR));
+op2Priv.put(HiveOperationType.DROPDATACONNECTOR, PrivRequirement.newIOPrivRequirement(null, ADMIN_PRIV_AR));
+op2Priv.put(HiveOperationType.ALTERDATACONNECTOR, PrivRequirement.newIOPrivRequirement(null, ADMIN_PRIV_AR));
+op2Priv.put(HiveOperationType.ALTERDATACONNECTOR_OWNER, PrivRequirement.newIOPrivRequirement(null, ADMIN_PRIV_AR));
+op2Priv.put(HiveOperationType.ALTERDATACONNECTOR_URL, PrivRequirement.newIOPrivRequirement(null, ADMIN_PRIV_AR));
+op2Priv.put(HiveOperationType.DESCDATACONNECTOR, PrivRequirement.newIOPrivRequirement(null, null));
+op2Priv.put(HiveOperationType.SHOWDATACONNECTORS, PrivRequirement.newIOPrivRequirement(null, null));
   }
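The additions above register the new data-connector operations in the SQL-standard authorization table: mutating operations (create/drop/alter) require the ADMIN privilege on their output, while read-only operations (describe/show) require none. A reduced sketch of that operation-to-privilege lookup — the plain string map below stands in for `Operation2Privilege` and its `PrivRequirement` entries:

```java
import java.util.HashMap;
import java.util.Map;

// Reduced sketch of Operation2Privilege's lookup for data-connector operations.
// The string map and "ADMIN" marker are illustrative stand-ins.
public class ConnectorPrivs {
    static final String ADMIN = "ADMIN"; // stand-in for ADMIN_PRIV_AR

    static Map<String, String> buildTable() {
        Map<String, String> op2Priv = new HashMap<>();
        // Mutating operations require the admin output privilege.
        op2Priv.put("CREATEDATACONNECTOR", ADMIN);
        op2Priv.put("DROPDATACONNECTOR", ADMIN);
        op2Priv.put("ALTERDATACONNECTOR", ADMIN);
        // Read-only operations require no privilege (null requirement).
        op2Priv.put("DESCDATACONNECTOR", null);
        op2Priv.put("SHOWDATACONNECTORS", null);
        return op2Priv;
    }

    public static void main(String[] args) {
        Map<String, String> t = buildTable();
        if (!ADMIN.equals(t.get("DROPDATACONNECTOR"))) throw new AssertionError();
        if (t.get("SHOWDATACONNECTORS") != null) throw new AssertionError();
        System.out.println("ok");
    }
}
```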
 
   /**
diff --git a/ql/src/test/results/clientpositive/dataconnector.q.out b/ql/src/test/results/clientpositive/dataconnector.q.out
deleted file mode 100644
index 8b678c2..000
--- a/ql/src/test/results/clientpositive/dataconnector.q.out
+++ /dev/null
@@ -1,205 +0,0 @@
-PREHOOK: query: SHOW CONNECTORS
-PREHOOK: type: SHOWDATACONNECTORS
-POSTHOOK: query: SHOW CONNECTORS
-POSTHOOK: type: SHOWDATACONNECTORS
-PREHOOK: query: CREATE CONNECTOR mysql_test
-TYPE 'mysql'
-URL 'jdbc:mysql://nightly1.apache.org:3306/hive1'
-COMMENT 'test connector'
-WITH DCPROPERTIES (
-"hive.sql.dbcp.username"="hive1",
-"hive.sql.dbcp.password"="hive1")
-PREHOOK: type: CREATEDATACONNECTOR
-PREHOOK: Output: connector:mysql_test
-POSTHOOK: query: CREATE CONNECTOR mysql_test
-TYPE 'mysql'
-URL 'jdbc:mysql://nightly1.apache.org:3306/hive1'
-COMMENT 'test connector'
-WITH DCPROPERTIES (
-"hive.sql.dbcp.username"="hive1",
-"hive.sql.dbcp.password"="hive1")
-POSTHOOK: type: CREATEDATACONNECTOR
-POSTHOOK: Output: connector:mysql_test
-PREHOOK: query: SHOW CONNECTORS
-PREHOOK: type: SHOWDATACONNECTORS
-POSTHOOK: query: SHOW CONNECTORS
-POSTHOOK: type: SHOWDATACONNECTORS
-mysql_test
-PREHOOK: query: CREATE CONNECTOR IF NOT EXISTS mysql_test
-TYPE 'mysql'
-URL 'jdbc:mysql://nightly1.apache.org:3306/hive1'
-COMMENT 'test connector'
-WITH DCPROPERTIES (
-"hive.sql.dbcp.username"="hive1",
-"hive.sql.dbcp.password"="hive1")
-PREHOOK: type: CREATEDATACONNECTOR
-PREHOOK: Output: connector:mysql_test
-POSTHOOK: query: CREATE CONNECTOR IF NOT EXISTS mysql_test
-TYPE 'mysql'
-URL 'jdbc:mysql://nightly1.apache.org:3306/hive1'
-COMMENT 'test connector'
-WITH DCPROPERTIES (
-"hive.sql.dbcp.username"="hive1",
-"hive.sql.dbcp.password"="hive1")
-POSTHOOK: type: CREATEDATACONNECTOR
-POSTHOOK: Output: connector:mysql_test
-PREHOOK: query: SHOW CONNECTORS
-PREHOOK: type: SHOWDATACONNECTORS
-POSTHOOK: query: SHOW CONNECTORS
-POSTHOOK: type: SHOWDATACONNECTORS
-mysql_test
-PREHOOK: query: CREATE CONNECTOR IF NOT EXISTS derby_test
-TYPE 'derby'
- A masked pattern was here 
-COMMENT 'test derby connector'
-WITH DCPROPERTIES (
-"hive.sql.dbcp.username"="APP",
-"hive.sql.dbcp.password"="mine")
-PREHOOK: type: CREATEDATACONNECTOR
-PREHOOK: Output: connector:derby_test
-POSTHOOK: query: CREATE CONNECTOR IF NOT EXISTS derby_test
-TYPE 'derby'
- A masked pattern was here 
-COMMENT 'test derby connector'
-WITH DCPROPERTIES (
-"hive.sql.dbcp.username"="APP",
-"hive.sql.dbcp.password"="mine")
-POSTHOOK: type: CREATEDATACONNECTOR
-POSTHOOK: Output: connector:derby_test
-PREHOOK: query: DROP CONNECTOR mysql_test
-PREHOOK: type: DROPDATACONNECTOR
-PREHOOK: Input: connector:mysql_test
-PREHOOK: Output: connector:mysql_test
-POSTHOOK: query: DROP CONNECTOR mysql_test
-POSTHOOK: type: DROPDATACONNECTOR
-POSTHOOK: Input: connector:mysql_test

[hive] 12/38: Missed change from the rebase

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 6369daa9396d567d896ee2c6057a706a690a0cd9
Author: Naveen Gangam 
AuthorDate: Fri Nov 20 12:30:09 2020 -0500

Missed change from the rebase
---
 .../org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java b/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
index 39aae11..b7d826c 100644
--- a/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
+++ b/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
@@ -53,7 +53,7 @@ import org.apache.hadoop.hive.metastore.api.FieldSchema;
 import org.apache.hadoop.hive.metastore.api.FileMetadataExprType;
 import org.apache.hadoop.hive.metastore.api.Function;
 import org.apache.hadoop.hive.metastore.api.GetPartitionsFilterSpec;
-import org.apache.hadoop.hive.metastore.api.GetPartitionsProjectionSpec;
+import org.apache.hadoop.hive.metastore.api.GetProjectionsSpec;
 import org.apache.hadoop.hive.metastore.api.HiveObjectPrivilege;
 import org.apache.hadoop.hive.metastore.api.HiveObjectRef;
 import org.apache.hadoop.hive.metastore.api.ISchema;


[hive] 15/38: HIVE-24396: Follow up test failure fixes

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit df8cb1146163acf3166382739e18522c3df2448a
Author: Naveen Gangam 
AuthorDate: Mon Nov 23 15:54:56 2020 -0500

HIVE-24396: Follow up test failure fixes
---
 .../hadoop/hive/ql/parse/IdentifiersParser.g   |   2 +-
 .../authorization/plugin/HiveOperationType.java|   7 +
 .../llap/alter_change_db_location.q.out|   3 +-
 .../clientpositive/llap/database_location.q.out|  47 ++---
 .../clientpositive/llap/dataconnector.q.out| 205 +
 .../clientpositive/llap/db_ddl_explain.q.out   |   2 +-
 .../clientpositive/llap/unicode_comments.q.out |   2 +-
 7 files changed, 236 insertions(+), 32 deletions(-)

diff --git a/parser/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g b/parser/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
index d836b80..cbd45d3 100644
--- a/parser/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
+++ b/parser/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
@@ -927,7 +927,7 @@ nonReserved
 | KW_SHOW | KW_SHOW_DATABASE | KW_SKEWED | KW_SORT | KW_SORTED | KW_SSL | KW_STATISTICS | KW_STORED | KW_AST
 | KW_STREAMTABLE | KW_STRING | KW_STRUCT | KW_TABLES | KW_TBLPROPERTIES | KW_TEMPORARY | KW_TERMINATED
 | KW_TINYINT | KW_TOUCH | KW_TRANSACTIONAL | KW_TRANSACTIONS | KW_TYPE | KW_UNARCHIVE | KW_UNDO | KW_UNIONTYPE | KW_UNLOCK | KW_UNSET
-| KW_UNSIGNED | KW_URI | KW_USE | KW_UTC | KW_UTCTIMESTAMP | KW_VALUE_TYPE | KW_VIEW | KW_WEEK | KW_WHILE | KW_YEAR
+| KW_UNSIGNED | KW_URI | KW_URL | KW_USE | KW_UTC | KW_UTCTIMESTAMP | KW_VALUE_TYPE | KW_VIEW | KW_WEEK | KW_WHILE | KW_YEAR
 | KW_WORK
 | KW_TRANSACTION
 | KW_WRITE
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveOperationType.java b/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveOperationType.java
index 2511e0b..0d88199 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveOperationType.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveOperationType.java
@@ -34,7 +34,9 @@ public enum HiveOperationType {
   REPLLOAD,
   REPLSTATUS,
   CREATEDATABASE,
+  CREATEDATACONNECTOR,
   DROPDATABASE,
+  DROPDATACONNECTOR,
   SWITCHDATABASE,
   LOCKDB,
   UNLOCKDB,
@@ -71,6 +73,7 @@ public enum HiveOperationType {
   ALTERTABLE_UPDATETABLESTATS,
   ALTERTABLE_UPDATEPARTSTATS,
   SHOWDATABASES,
+  SHOWDATACONNECTORS,
   SHOWTABLES,
   SHOWCOLUMNS,
   SHOW_TABLESTATUS,
@@ -118,7 +121,11 @@ public enum HiveOperationType {
   ALTERDATABASE,
   ALTERDATABASE_OWNER,
   ALTERDATABASE_LOCATION,
+  ALTERDATACONNECTOR,
+  ALTERDATACONNECTOR_OWNER,
+  ALTERDATACONNECTOR_URL,
   DESCDATABASE,
+  DESCDATACONNECTOR,
   ALTERTABLE_MERGEFILES,
   ALTERPARTITION_MERGEFILES,
   ALTERTABLE_SKEWED,
diff --git a/ql/src/test/results/clientpositive/llap/alter_change_db_location.q.out b/ql/src/test/results/clientpositive/llap/alter_change_db_location.q.out
index 2469cea..d07eaa6 100644
--- a/ql/src/test/results/clientpositive/llap/alter_change_db_location.q.out
+++ b/ql/src/test/results/clientpositive/llap/alter_change_db_location.q.out
@@ -1,11 +1,10 @@
  A masked pattern was here 
 PREHOOK: type: CREATEDATABASE
 PREHOOK: Output: database:newDB
-PREHOOK: Output: hdfs://### HDFS PATH ###
  A masked pattern was here 
 POSTHOOK: type: CREATEDATABASE
 POSTHOOK: Output: database:newDB
-POSTHOOK: Output: hdfs://### HDFS PATH ###
+ A masked pattern was here 
 PREHOOK: query: describe database extended newDB
 PREHOOK: type: DESCDATABASE
 PREHOOK: Input: database:newdb
diff --git a/ql/src/test/results/clientpositive/llap/database_location.q.out b/ql/src/test/results/clientpositive/llap/database_location.q.out
index b998e7e..1cb5127 100644
--- a/ql/src/test/results/clientpositive/llap/database_location.q.out
+++ b/ql/src/test/results/clientpositive/llap/database_location.q.out
@@ -72,13 +72,13 @@ COMMENT 'database 2'
  A masked pattern was here 
 PREHOOK: type: CREATEDATABASE
 PREHOOK: Output: database:db2
-PREHOOK: Output: hdfs://### HDFS PATH ###
+ A masked pattern was here 
 POSTHOOK: query: CREATE DATABASE db2
 COMMENT 'database 2'
  A masked pattern was here 
 POSTHOOK: type: CREATEDATABASE
 POSTHOOK: Output: database:db2
-POSTHOOK: Output: hdfs://### HDFS PATH ###
+ A masked pattern was here 
 PREHOOK: query: DESCRIBE DATABASE EXTENDED db2
 PREHOOK: type: DESCDATABASE
 PREHOOK: Input: database:db2
@@ -147,15 +147,13 @@ CREATE DATABASE db3
  A masked pattern was here 
 PREHOOK: type: CREATEDATABASE
 PREHOOK: Output: database:db3
-PREHOOK: Output: hdfs://### HDFS PATH ###
-PREHOOK: Output: hdfs://### HDFS PATH ###
+ A masked pattern was here 
 POSTHOOK: query: EXPLAIN
 

[hive] 16/38: Test failures with tez driver and duplicate error codes

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit a2a592f40ea58c6136396e9cdbbb38875bc17353
Author: Naveen Gangam 
AuthorDate: Mon Nov 23 17:43:26 2020 -0500

Test failures with tez driver and duplicate error codes
---
 ql/src/test/results/clientpositive/tez/explainanalyze_3.q.out | 2 +-
 ql/src/test/results/clientpositive/tez/explainuser_3.q.out| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/ql/src/test/results/clientpositive/tez/explainanalyze_3.q.out b/ql/src/test/results/clientpositive/tez/explainanalyze_3.q.out
index 51ab48b..9add485 100644
--- a/ql/src/test/results/clientpositive/tez/explainanalyze_3.q.out
+++ b/ql/src/test/results/clientpositive/tez/explainanalyze_3.q.out
@@ -127,7 +127,7 @@ PREHOOK: Input: database:newdb
 POSTHOOK: query: describe database extended newDB
 POSTHOOK: type: DESCDATABASE
 POSTHOOK: Input: database:newdb
-newdb  location/in/testhive_test_user  USER
+newdb  location/in/testhive_test_user  USER

 PREHOOK: query: use newDB
 PREHOOK: type: SWITCHDATABASE
 PREHOOK: Input: database:newdb
diff --git a/ql/src/test/results/clientpositive/tez/explainuser_3.q.out b/ql/src/test/results/clientpositive/tez/explainuser_3.q.out
index 4234d2d..990d1bd 100644
--- a/ql/src/test/results/clientpositive/tez/explainuser_3.q.out
+++ b/ql/src/test/results/clientpositive/tez/explainuser_3.q.out
@@ -145,7 +145,7 @@ PREHOOK: Input: database:newdb
 POSTHOOK: query: describe database extended newDB
 POSTHOOK: type: DESCDATABASE
 POSTHOOK: Input: database:newdb
-newdb  location/in/testhive_test_user  USER
+newdb  location/in/testhive_test_user  USER

 PREHOOK: query: explain use newDB
 PREHOOK: type: SWITCHDATABASE
 PREHOOK: Input: database:newdb


[hive] 07/38: NullPointerException in CreateDatabaseOperation due to last change

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit c6ed378164f352ed620b5e9647ca6ad2e28a0d7d
Author: Naveen Gangam 
AuthorDate: Tue Nov 17 13:16:46 2020 -0500

NullPointerException in CreateDatabaseOperation due to last change
---
 .../hadoop/hive/ql/ddl/database/create/CreateDatabaseOperation.java| 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseOperation.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseOperation.java
index def742d..d02b039 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseOperation.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseOperation.java
@@ -54,14 +54,15 @@ public class CreateDatabaseOperation extends DDLOperation {
 if (desc.getManagedLocationUri() != null) {
   database.setManagedLocationUri(desc.getManagedLocationUri());
 }
+makeLocationQualified(database);
 if (database.getLocationUri().equalsIgnoreCase(database.getManagedLocationUri())) {
   throw new HiveException("Managed and external locations for database cannot be the same");
 }
   } else {
+makeLocationQualified(database);
 database.setConnector_name(desc.getConnectorName());
 database.setRemote_dbname(desc.getRemoteDbName());
   }
-  makeLocationQualified(database);
   context.getDb().createDatabase(database, desc.getIfNotExists());
 } catch (AlreadyExistsException ex) {
   //it would be better if AlreadyExistsException had an errorCode field
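The fix above moves `makeLocationQualified` ahead of the `equalsIgnoreCase` comparison, so `getLocationUri()` is already populated when it is dereferenced; with the old ordering an unset location produced the NullPointerException in the commit title. The shape of the bug and the fixed ordering, sketched with a bare stand-in class (`Db` and the paths below are illustrative, not Hive classes):

```java
// Sketch of the ordering fix in CreateDatabaseOperation: qualify the
// location before comparing it, so getLocationUri() is non-null when read.
// Db and the example paths are illustrative stand-ins.
public class LocationOrdering {
    static class Db {
        String locationUri;                              // null until qualified
        String managedLocationUri = "/warehouse/managed/db1";
    }

    static void makeLocationQualified(Db db) {
        if (db.locationUri == null) db.locationUri = "/warehouse/external/db1";
    }

    static String create(Db db) {
        makeLocationQualified(db);                       // fixed order: qualify first...
        if (db.locationUri.equalsIgnoreCase(db.managedLocationUri)) { // ...then compare safely
            throw new IllegalStateException("Managed and external locations for database cannot be the same");
        }
        return db.locationUri;
    }

    public static void main(String[] args) {
        // With the fixed ordering this completes without a NullPointerException.
        if (!create(new Db()).equals("/warehouse/external/db1")) throw new AssertionError();
        System.out.println("ok");
    }
}
```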


[hive] 09/38: Adding schema changes for mysql and postgres as well

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit b30173d4c27d9c6b44913c3c8f1f766b36990189
Author: Naveen Gangam 
AuthorDate: Wed Nov 18 13:37:31 2020 -0500

Adding schema changes for mysql and postgres as well
---
 .../src/main/sql/derby/hive-schema-4.0.0.derby.sql |  3 ++-
 .../sql/derby/upgrade-3.2.0-to-4.0.0.derby.sql |  2 ++
 .../src/main/sql/mysql/hive-schema-4.0.0.mysql.sql | 24 
 .../sql/mysql/upgrade-3.2.0-to-4.0.0.mysql.sql | 26 ++
 .../sql/postgres/hive-schema-4.0.0.postgres.sql| 25 -
 .../postgres/upgrade-3.2.0-to-4.0.0.postgres.sql   | 25 +
 6 files changed, 103 insertions(+), 2 deletions(-)

diff --git a/standalone-metastore/metastore-server/src/main/sql/derby/hive-schema-4.0.0.derby.sql b/standalone-metastore/metastore-server/src/main/sql/derby/hive-schema-4.0.0.derby.sql
index 7829f0f..f51b712 100644
--- a/standalone-metastore/metastore-server/src/main/sql/derby/hive-schema-4.0.0.derby.sql
+++ b/standalone-metastore/metastore-server/src/main/sql/derby/hive-schema-4.0.0.derby.sql
@@ -818,7 +818,8 @@ CREATE TABLE "APP"."STORED_PROCS" (
 
 CREATE UNIQUE INDEX "UNIQUESTOREDPROC" ON "STORED_PROCS" ("NAME", "DB_ID");
 ALTER TABLE "STORED_PROCS" ADD CONSTRAINT "STOREDPROC_FK1" FOREIGN KEY ("DB_ID") REFERENCES "DBS" ("DB_ID");
-CREATE TABLE "APP"."DATACONNECTORS" ("NAME" VARCHAR(128) NOT NULL, "TYPE" VARCHAR(32) NOT NULL, "URL" VARCHAR(4000) NOT NULL, "COMMENT" VARCHAR(256), "OWNER_NAME" VARCHAR(256), "OWNER_TYPE" VARCHAR(10), "CREATE_TIME" INTEGER);
+
+CREATE TABLE "APP"."DATACONNECTORS" ("NAME" VARCHAR(128) NOT NULL, "TYPE" VARCHAR(32) NOT NULL, "URL" VARCHAR(4000) NOT NULL, "COMMENT" VARCHAR(256), "OWNER_NAME" VARCHAR(256), "OWNER_TYPE" VARCHAR(10), "CREATE_TIME" INTEGER NOT NULL);
 CREATE TABLE "APP"."DATACONNECTOR_PARAMS" ("NAME" VARCHAR(128) NOT NULL, "PARAM_KEY" VARCHAR(180) NOT NULL, "PARAM_VALUE" VARCHAR(4000));
 ALTER TABLE "APP"."DATACONNECTORS" ADD CONSTRAINT "DATACONNECTORS_KEY_PK" PRIMARY KEY ("NAME");
 ALTER TABLE "APP"."DATACONNECTOR_PARAMS" ADD CONSTRAINT "DATACONNECTOR_PARAMS_KEY_PK" PRIMARY KEY ("NAME", "PARAM_KEY");
diff --git a/standalone-metastore/metastore-server/src/main/sql/derby/upgrade-3.2.0-to-4.0.0.derby.sql b/standalone-metastore/metastore-server/src/main/sql/derby/upgrade-3.2.0-to-4.0.0.derby.sql
index 0bc9003..ed37748 100644
--- a/standalone-metastore/metastore-server/src/main/sql/derby/upgrade-3.2.0-to-4.0.0.derby.sql
+++ b/standalone-metastore/metastore-server/src/main/sql/derby/upgrade-3.2.0-to-4.0.0.derby.sql
@@ -149,9 +149,11 @@ ALTER TABLE COMPLETED_COMPACTIONS ADD CC_WORKER_VERSION varchar(128);
 
 -- HIVE-24396
 CREATE TABLE "APP"."DATACONNECTORS" ("DC_NAME" VARCHAR(128) NOT NULL, "TYPE" VARCHAR(128) NOT NULL, "COMMENT" VARCHAR(256), "OWNER_NAME" VARCHAR(256), "OWNER_TYPE" VARCHAR(10), "CREATE_TIME" INTEGER);
+CREATE TABLE "APP"."DATACONNECTORS" ("DC_NAME" VARCHAR(128) NOT NULL, "TYPE" VARCHAR(128) NOT NULL, "COMMENT" VARCHAR(256), "OWNER_NAME" VARCHAR(256), "OWNER_TYPE" VARCHAR(10), "CREATE_TIME" INTEGER NOT NULL);
 CREATE TABLE "APP"."DATACONNECTOR_PARAMS" ("DC_NAME" VARCHAR(128) NOT NULL, "PARAM_KEY" VARCHAR(180) NOT NULL, "PARAM_VALUE" VARCHAR(4000), "COMMENT" VARCHAR(256));
 ALTER TABLE "APP"."DBS" ADD COLUMN "TYPE" VARCHAR(32) DEFAULT 'NATIVE';
 ALTER TABLE "APP"."DBS" ADD COLUMN "DATACONNECTOR_NAME" VARCHAR(128);
+UPDATE "APP"."DBS" SET TYPE='NATIVE' WHERE TYPE IS NULL;
 ALTER TABLE "APP"."DATACONNECTORS" ADD CONSTRAINT "DATACONNECTORS_KEY_PK" PRIMARY KEY ("DC_NAME");
 ALTER TABLE "APP"."DATACONNECTOR_PARAMS" ADD CONSTRAINT "DATACONNECTOR_PARAMS_KEY_PK" PRIMARY KEY ("DC_NAME", "PARAM_KEY");
 ALTER TABLE "APP"."DATACONNECTOR_PARAMS" ADD CONSTRAINT "DC_NAME_FK1" FOREIGN KEY ("DC_NAME") REFERENCES "APP"."DATACONNECTORS" ("DC_NAME") ON DELETE NO ACTION ON UPDATE NO ACTION;
diff --git 
a/standalone-metastore/metastore-server/src/main/sql/mysql/hive-schema-4.0.0.mysql.sql
 
b/standalone-metastore/metastore-server/src/main/sql/mysql/hive-schema-4.0.0.mysql.sql
index 8744319..48e6f7d 100644
--- 
a/standalone-metastore/metastore-server/src/main/sql/mysql/hive-schema-4.0.0.mysql.sql
+++ 
b/standalone-metastore/metastore-server/src/main/sql/mysql/hive-schema-4.0.0.mysql.sql
@@ -105,6 +105,9 @@ CREATE TABLE IF NOT EXISTS `DBS` (
   `CTLG_NAME` varchar(256) NOT NULL DEFAULT 'hive',
   `CREATE_TIME` INT(11),
   `DB_MANAGED_LOCATION_URI` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
+  `TYPE` VARCHAR(32) DEFAULT 'NATIVE',
+  `DATACONNECTOR_NAME` VARCHAR(128),
+  `REMOTE_DBNAME` VARCHAR(128),
   PRIMARY KEY (`DB_ID`),
   UNIQUE KEY `UNIQUE_DATABASE` (`NAME`, `CTLG_NAME`),
   CONSTRAINT `CTLG_FK1` FOREIGN KEY (`CTLG_NAME`) REFERENCES `CTLGS` (`NAME`)
@@ -1304,6 +1307,27 @@ CREATE TABLE PACKAGES (
 CREATE 

[hive] 10/38: HIVE-24396: getTable/getTables API not expected to throw NoSuchObjectException

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 1523cd47d0c81d5e2bb9f0b88d96efd847a67751
Author: Naveen Gangam 
AuthorDate: Thu Nov 19 00:00:02 2020 -0500

HIVE-24396: getTable/getTables API not expected to throw NoSuchObjectException
---
 .../apache/hadoop/hive/metastore/HiveMetaStoreClient.java   | 10 +-
 .../org/apache/hadoop/hive/metastore/HiveMetaStore.java | 13 ++---
 .../hive/metastore/client/builder/DatabaseBuilder.java  | 10 ++
 .../apache/hadoop/hive/metastore/cache/TestCachedStore.java |  1 +
 4 files changed, 22 insertions(+), 12 deletions(-)

diff --git a/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java b/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
index 8700bed..9dbaacc 100644
--- a/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
+++ b/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
@@ -1097,7 +1097,7 @@ public class HiveMetaStoreClient implements IMetaStoreClient, AutoCloseable {
   /**
* Create a new Database
*
-   * @param connector
+   * @param db
* @throws AlreadyExistsException
* @throws InvalidObjectException
* @throws MetaException
@@ -1105,12 +1105,12 @@ public class HiveMetaStoreClient implements IMetaStoreClient, AutoCloseable {
* @see org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore.Iface#create_database(Database)
*/
   @Override
-  public void createDatabase(Database connector)
+  public void createDatabase(Database db)
   throws AlreadyExistsException, InvalidObjectException, MetaException, TException {
-if (!connector.isSetCatalogName()) {
-  connector.setCatalogName(getDefaultCatalog(conf));
+if (!db.isSetCatalogName()) {
+  db.setCatalogName(getDefaultCatalog(conf));
 }
-client.create_database(connector);
+client.create_database(db);
   }
 
   /**
diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
index ec0fa00..696b89d 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
@@ -3733,9 +3733,7 @@ public class HiveMetaStore extends ThriftHiveMetastore {
 return DataConnectorProviderFactory.getDataConnectorProvider(db).getTable(name);
   }
 }
-  } catch (NoSuchObjectException notExists) {
-throw new MetaException("Database could not be found:" + dbname);
-  }
+  } catch (Exception e) { /* appears exception is not thrown currently if db doesnt exist */ }
 
   try {
 t = getMS().getTable(catName, dbname, name, writeIdList);
@@ -6003,9 +6001,7 @@ public class HiveMetaStore extends ThriftHiveMetastore {
 return DataConnectorProviderFactory.getDataConnectorProvider(db).getTableNames();
   }
 }
-  } catch (NoSuchObjectException notExists) {
-throw new MetaException("Database could not be found:" + dbname);
-  }
+  } catch (Exception e) { /* appears we return empty set instead of throwing an exception */ }
 
   try {
 ret = getMS().getTables(parsedDbName[CAT_NAME], parsedDbName[DB_NAME], pattern);
@@ -6065,10 +6061,13 @@ public class HiveMetaStore extends ThriftHiveMetastore {
   try {
 db = get_database_core(catName, dbname);
 if (db != null) {
-  if(db.getType().equals(DatabaseType.REMOTE)) {
+  if (db.getType().equals(DatabaseType.REMOTE)) {
 return DataConnectorProviderFactory.getDataConnectorProvider(db).getTableNames();
   }
 }
+  } catch (Exception e) { /* ignore */ }
+
+  try {
 ret = getMS().getTables(catName, dbname, pattern, TableType.valueOf(tableType), -1);
   } catch (MetaException e) {
 ex = e;
diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/client/builder/DatabaseBuilder.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/client/builder/DatabaseBuilder.java
index 21e3a9f..806bf0f 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/client/builder/DatabaseBuilder.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/client/builder/DatabaseBuilder.java
@@ -22,6 +22,7 @@ import org.apache.hadoop.hive.metastore.IMetaStoreClient;
 import 
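The HIVE-24396 hunks above change the metastore's lookup contract: a failed remote-database probe is swallowed and the call falls through to the local store, returning an empty result rather than surfacing `NoSuchObjectException`. A minimal, self-contained sketch of that behavior (plain maps standing in for the connector provider and the local `RawStore`; all names here are illustrative, not Hive's):

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class TableLookup {
    // Hypothetical stand-ins: remote connector-backed databases and local ones.
    private final Map<String, List<String>> remoteDbs;
    private final Map<String, List<String>> localDbs;

    public TableLookup(Map<String, List<String>> remoteDbs,
                       Map<String, List<String>> localDbs) {
        this.remoteDbs = remoteDbs;
        this.localDbs = localDbs;
    }

    public List<String> getTables(String dbName) {
        try {
            List<String> remote = remoteDbs.get(dbName);
            if (remote != null) {
                return remote; // REMOTE database: delegate to the connector
            }
        } catch (RuntimeException e) {
            // ignore, as the patched HiveMetaStore does: fall through to local lookup
        }
        List<String> local = localDbs.get(dbName);
        // empty list instead of an exception when nothing matches
        return local != null ? local : Collections.emptyList();
    }

    public static void main(String[] args) {
        TableLookup lookup = new TableLookup(
            Map.of("remote_db", List.of("r1", "r2")),
            Map.of("local_db", List.of("t1")));
        System.out.println(lookup.getTables("remote_db"));
        System.out.println(lookup.getTables("local_db"));
        System.out.println(lookup.getTables("missing"));
    }
}
```

The design choice mirrored here is that remote-lookup failures are treated as "not a remote database" rather than as errors, so existing callers that expect a list (possibly empty) keep working.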

[hive] 08/38: Adding a qtest and fixing type for default db

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 013a693649c82ee995f750910a360d8e6de4c0f7
Author: Naveen Gangam 
AuthorDate: Tue Nov 17 23:46:50 2020 -0500

Adding a qtest and fixing type for default db
---
 ql/src/test/queries/clientpositive/dataconnector.q |  71 +++
 .../results/clientpositive/dataconnector.q.out | 205 +
 .../hadoop/hive/metastore/HiveMetaStore.java   |   2 +-
 .../apache/hadoop/hive/metastore/ObjectStore.java  |   4 +-
 .../DataConnectorProviderFactory.java  |   7 +-
 5 files changed, 283 insertions(+), 6 deletions(-)

diff --git a/ql/src/test/queries/clientpositive/dataconnector.q b/ql/src/test/queries/clientpositive/dataconnector.q
new file mode 100644
index 000..c21524a
--- /dev/null
+++ b/ql/src/test/queries/clientpositive/dataconnector.q
@@ -0,0 +1,71 @@
+-- SORT_QUERY_RESULTS
+SHOW CONNECTORS;
+
+-- CREATE with comment
+CREATE CONNECTOR mysql_test
+TYPE 'mysql'
+URL 'jdbc:mysql://nightly1.apache.org:3306/hive1'
+COMMENT 'test connector'
+WITH DCPROPERTIES (
+"hive.sql.dbcp.username"="hive1",
+"hive.sql.dbcp.password"="hive1");
+SHOW CONNECTORS;
+
+-- CREATE INE already exists
+CREATE CONNECTOR IF NOT EXISTS mysql_test
+TYPE 'mysql'
+URL 'jdbc:mysql://nightly1.apache.org:3306/hive1'
+COMMENT 'test connector'
+WITH DCPROPERTIES (
+"hive.sql.dbcp.username"="hive1",
+"hive.sql.dbcp.password"="hive1");
+SHOW CONNECTORS;
+
+-- CREATE INE already exists
+CREATE CONNECTOR IF NOT EXISTS derby_test
+TYPE 'derby'
+URL 'jdbc:derby:./target/tmp/junit_metastore_db;create=true'
+COMMENT 'test derby connector'
+WITH DCPROPERTIES (
+"hive.sql.dbcp.username"="APP",
+"hive.sql.dbcp.password"="mine");
+
+-- DROP
+DROP CONNECTOR mysql_test;
+SHOW CONNECTORS;
+
+-- DROP IE exists
+DROP CONNECTOR IF EXISTS mysql_test;
+SHOW CONNECTORS;
+
+-- recreate with same name
+CREATE CONNECTOR mysql_test
+TYPE 'mysql'
+URL 'jdbc:mysql://nightly1.apache.org:3306/hive1'
+COMMENT 'test connector'
+WITH DCPROPERTIES (
+"hive.sql.dbcp.username"="hive1",
+"hive.sql.dbcp.password"="hive1");
+SHOW CONNECTORS;
+
+CREATE REMOTE DATABASE db_derby USING derby_test with DBPROPERTIES("connector.remoteDbName"="APP");
+SHOW DATABASES;
+USE db_derby;
+SHOW TABLES;
+
+-- alter connector set URL
+alter connector mysql_test set URL 'jdbc:mysql://nightly1.apache.org:3306/hive2';
+DESCRIBE CONNECTOR extended mysql_test;
+
+-- alter connector set DCPROPERTIES
+alter connector mysql_test set DCPROPERTIES("hive.sql.dbcp.username"="hive2","hive.sql.dbcp.password"="hive2");
+DESCRIBE CONNECTOR extended mysql_test;
+
+-- alter connector set owner
+alter connector mysql_test set OWNER USER newuser;
+DESCRIBE CONNECTOR extended mysql_test;
+
+DROP DATABASE db_derby;
+SHOW DATABASES;
+DROP CONNECTOR mysql_test;
+SHOW CONNECTORS;
diff --git a/ql/src/test/results/clientpositive/dataconnector.q.out b/ql/src/test/results/clientpositive/dataconnector.q.out
new file mode 100644
index 000..8b678c2
--- /dev/null
+++ b/ql/src/test/results/clientpositive/dataconnector.q.out
@@ -0,0 +1,205 @@
+PREHOOK: query: SHOW CONNECTORS
+PREHOOK: type: SHOWDATACONNECTORS
+POSTHOOK: query: SHOW CONNECTORS
+POSTHOOK: type: SHOWDATACONNECTORS
+PREHOOK: query: CREATE CONNECTOR mysql_test
+TYPE 'mysql'
+URL 'jdbc:mysql://nightly1.apache.org:3306/hive1'
+COMMENT 'test connector'
+WITH DCPROPERTIES (
+"hive.sql.dbcp.username"="hive1",
+"hive.sql.dbcp.password"="hive1")
+PREHOOK: type: CREATEDATACONNECTOR
+PREHOOK: Output: connector:mysql_test
+POSTHOOK: query: CREATE CONNECTOR mysql_test
+TYPE 'mysql'
+URL 'jdbc:mysql://nightly1.apache.org:3306/hive1'
+COMMENT 'test connector'
+WITH DCPROPERTIES (
+"hive.sql.dbcp.username"="hive1",
+"hive.sql.dbcp.password"="hive1")
+POSTHOOK: type: CREATEDATACONNECTOR
+POSTHOOK: Output: connector:mysql_test
+PREHOOK: query: SHOW CONNECTORS
+PREHOOK: type: SHOWDATACONNECTORS
+POSTHOOK: query: SHOW CONNECTORS
+POSTHOOK: type: SHOWDATACONNECTORS
+mysql_test
+PREHOOK: query: CREATE CONNECTOR IF NOT EXISTS mysql_test
+TYPE 'mysql'
+URL 'jdbc:mysql://nightly1.apache.org:3306/hive1'
+COMMENT 'test connector'
+WITH DCPROPERTIES (
+"hive.sql.dbcp.username"="hive1",
+"hive.sql.dbcp.password"="hive1")
+PREHOOK: type: CREATEDATACONNECTOR
+PREHOOK: Output: connector:mysql_test
+POSTHOOK: query: CREATE CONNECTOR IF NOT EXISTS mysql_test
+TYPE 'mysql'
+URL 'jdbc:mysql://nightly1.apache.org:3306/hive1'
+COMMENT 'test connector'
+WITH DCPROPERTIES (
+"hive.sql.dbcp.username"="hive1",
+"hive.sql.dbcp.password"="hive1")
+POSTHOOK: type: CREATEDATACONNECTOR
+POSTHOOK: Output: connector:mysql_test
+PREHOOK: query: SHOW CONNECTORS
+PREHOOK: type: SHOWDATACONNECTORS
+POSTHOOK: query: SHOW CONNECTORS
+POSTHOOK: type: SHOWDATACONNECTORS
+mysql_test
+PREHOOK: query: CREATE CONNECTOR IF NOT EXISTS derby_test
+TYPE 'derby'
+#### A masked pattern was here ####
+COMMENT 'test derby 
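The qtest above exercises the connector DDL lifecycle: `CREATE CONNECTOR` fails on a duplicate name unless `IF NOT EXISTS` is given, `DROP CONNECTOR` fails on a missing name unless `IF EXISTS` is given, and `SHOW CONNECTORS` lists names in sorted order (the file is tagged `SORT_QUERY_RESULTS`). A toy in-memory registry (not Hive code) sketching those semantics:

```java
import java.util.Map;
import java.util.TreeMap;

public class ConnectorRegistry {
    // TreeMap keeps SHOW CONNECTORS output sorted by name.
    private final Map<String, String> connectors = new TreeMap<>(); // name -> URL

    public void create(String name, String url, boolean ifNotExists) {
        if (connectors.containsKey(name)) {
            if (ifNotExists) return; // CREATE ... IF NOT EXISTS: silent no-op
            throw new IllegalStateException("Connector " + name + " already exists");
        }
        connectors.put(name, url);
    }

    public void drop(String name, boolean ifExists) {
        if (!connectors.containsKey(name)) {
            if (ifExists) return;    // DROP ... IF EXISTS: silent no-op
            throw new IllegalStateException("Connector " + name + " does not exist");
        }
        connectors.remove(name);
    }

    public String show() {           // SHOW CONNECTORS, one name per line
        return String.join("\n", connectors.keySet());
    }

    public static void main(String[] args) {
        ConnectorRegistry r = new ConnectorRegistry();
        r.create("mysql_test", "jdbc:mysql://nightly1.apache.org:3306/hive1", false);
        r.create("mysql_test", "jdbc:mysql://nightly1.apache.org:3306/hive1", true); // no-op
        r.create("derby_test", "jdbc:derby:./target/tmp/junit_metastore_db;create=true", true);
        System.out.println(r.show());
        r.drop("mysql_test", false);
        r.drop("mysql_test", true);  // no-op
    }
}
```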

[hive] 03/38: Implemented getTable and getTableNames for MYSQL (working)

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit d65307d795ad6415de1e8714859fa0ce4bd07a87
Author: Naveen Gangam 
AuthorDate: Thu Nov 12 00:12:55 2020 -0500

Implemented getTable and getTableNames for MYSQL (working)
---
 .../database/create/CreateDatabaseAnalyzer.java|  10 +-
 .../url/AlterDataConnectorSetUrlAnalyzer.java  |   4 +-
 .../create/CreateDataConnectorAnalyzer.java|   2 +-
 .../apache/hadoop/hive/metastore/IHMSHandler.java  |  12 ++
 .../AbstractDataConnectorProvider.java |   7 +-
 .../DataConnectorProviderFactory.java  |  39 --
 .../dataconnector/IDataConnectorProvider.java  |   6 +
 .../dataconnector/jdbc/JDBCConnectorProvider.java  | 150 -
 8 files changed, 198 insertions(+), 32 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseAnalyzer.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseAnalyzer.java
index 42d7c79..d342db0 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseAnalyzer.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseAnalyzer.java
@@ -75,12 +75,11 @@ public class CreateDatabaseAnalyzer extends BaseSemanticAnalyzer {
 break;
   case HiveParser.TOK_DATACONNECTOR:
 type = "REMOTE";
-locationUri = "REMOTE_DATABASE"; // TODO
-// i++;
+// locationUri = "REMOTE_DATABASE"; // TODO
 ASTNode nextNode = (ASTNode) root.getChild(i);
 connectorName = ((ASTNode)nextNode).getChild(0).getText();
 outputs.add(toWriteEntity(connectorName));
-outputs.remove(toWriteEntity(locationUri));
+// outputs.remove(toWriteEntity(locationUri));
 if (managedLocationUri != null) {
   outputs.remove(toWriteEntity(managedLocationUri));
   managedLocationUri = null;
@@ -91,7 +90,6 @@ public class CreateDatabaseAnalyzer extends BaseSemanticAnalyzer {
   }
 }
 
-// String remoteDbName = props.get("connector.remoteDbName");
 CreateDatabaseDesc desc = null;
 Database database = new Database(databaseName, comment, locationUri, props);
 if (type.equalsIgnoreCase("NATIVE")) {
@@ -103,7 +101,7 @@ public class CreateDatabaseAnalyzer extends BaseSemanticAnalyzer {
   }
 } else {
   String remoteDbName = databaseName;
-  if (props != null && props.get("connector.remoteDbName") != null) // TODO
+  if (props != null && props.get("connector.remoteDbName") != null) // TODO finalize the property name
 remoteDbName = props.get("connector.remoteDbName");
   desc = new CreateDatabaseDesc(databaseName, comment, locationUri, null, ifNotExists, props, type, connectorName, remoteDbName);
@@ -112,8 +110,6 @@ public class CreateDatabaseAnalyzer extends BaseSemanticAnalyzer {
   database.setRemote_dbname(remoteDbName);
 }
 rootTasks.add(TaskFactory.get(new DDLWork(getInputs(), getOutputs(), desc)));
-
-// database = new Database(databaseName, comment, locationUri, props);
 outputs.add(new WriteEntity(database, WriteEntity.WriteType.DDL_NO_LOCK));
   }
 }
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/url/AlterDataConnectorSetUrlAnalyzer.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/url/AlterDataConnectorSetUrlAnalyzer.java
index 217f702..c7b0af8 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/url/AlterDataConnectorSetUrlAnalyzer.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/url/AlterDataConnectorSetUrlAnalyzer.java
@@ -38,9 +38,7 @@ public class AlterDataConnectorSetUrlAnalyzer extends AbstractAlterDataConnector
   public void analyzeInternal(ASTNode root) throws SemanticException {
 String connectorName = getUnescapedName((ASTNode) root.getChild(0));
 String newURL = unescapeSQLString(root.getChild(1).getText());
-
-outputs.add(toWriteEntity(newURL));
-
+// TODO add some validation for the URL?
 AlterDataConnectorSetUrlDesc desc = new AlterDataConnectorSetUrlDesc(connectorName, newURL);
 addAlterDataConnectorDesc(desc);
   }
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/create/CreateDataConnectorAnalyzer.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/create/CreateDataConnectorAnalyzer.java
index 5867108..a277baf 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/create/CreateDataConnectorAnalyzer.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/create/CreateDataConnectorAnalyzer.java
@@ -63,7 +63,7 @@ public class CreateDataConnectorAnalyzer extends BaseSemanticAnalyzer {
 break;
   case HiveParser.TOK_DATACONNECTORURL:
 url = 
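The `CreateDatabaseAnalyzer` hunk above resolves the remote database name for a `CREATE REMOTE DATABASE`: it defaults to the Hive-side name unless the (still tentative, per the `// TODO finalize the property name` comment in the diff) `connector.remoteDbName` property overrides it. A minimal sketch of just that resolution, with a hypothetical helper class name:

```java
import java.util.Map;

public class RemoteDbName {
    // Mirrors the analyzer's logic: property wins, otherwise fall back to the
    // database name given in the DDL statement.
    public static String resolve(String databaseName, Map<String, String> props) {
        if (props != null && props.get("connector.remoteDbName") != null) {
            return props.get("connector.remoteDbName");
        }
        return databaseName;
    }

    public static void main(String[] args) {
        // Matches the qtest: CREATE REMOTE DATABASE db_derby ... ("connector.remoteDbName"="APP")
        System.out.println(resolve("db_derby", Map.of("connector.remoteDbName", "APP")));
        System.out.println(resolve("db_derby", Map.of()));
        System.out.println(resolve("db_derby", null));
    }
}
```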

[hive] 04/38: Added provider for postgres, refactored bunch of classes

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 91f1ccd9d0be977fd3222a53b3e6758f83d88132
Author: Naveen Gangam 
AuthorDate: Mon Nov 16 13:24:52 2020 -0500

Added provider for postgres, refactored bunch of classes
---
 .../hadoop/hive/metastore/conf/MetastoreConf.java  |  10 ++
 .../AbstractDataConnectorProvider.java |  45 +
 .../DataConnectorProviderFactory.java  |  13 +-
 .../dataconnector/IDataConnectorProvider.java  |   2 -
 .../JDBCConnectorProviderFactory.java  |  42 +
 ...der.java => AbstractJDBCConnectorProvider.java} | 192 ++---
 .../dataconnector/jdbc/MySQLConfigGenerator.java   |   4 -
 ...orProvider.java => MySQLConnectorProvider.java} | 137 +++
 .../jdbc/PostgreSQLConnectorProvider.java  | 116 +
 9 files changed, 373 insertions(+), 188 deletions(-)

diff --git a/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java b/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java
index 2e03223..3489ccc 100644
--- a/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java
+++ b/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java
@@ -2257,4 +2257,14 @@ public class MetastoreConf {
 buf.append("Finished MetastoreConf object.\n");
 return buf.toString();
   }
+
+  public static char[] getValueFromKeystore(String keystorePath, String key) throws IOException {
+char[] valueCharArray = null;
+if (keystorePath != null && key != null) {
+  Configuration conf = new Configuration();
+  conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH, keystorePath);
+  valueCharArray = conf.getPassword(key);
+}
+return valueCharArray;
+  }
 }
diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/AbstractDataConnectorProvider.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/AbstractDataConnectorProvider.java
index 2a73236..74ddbdd 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/AbstractDataConnectorProvider.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/AbstractDataConnectorProvider.java
@@ -1,11 +1,19 @@
 package org.apache.hadoop.hive.metastore.dataconnector;
 
+import org.apache.hadoop.hive.metastore.TableType;
 import org.apache.hadoop.hive.metastore.api.DataConnector;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
 import org.apache.hadoop.hive.metastore.api.MetaException;
+import org.apache.hadoop.hive.metastore.api.Order;
+import org.apache.hadoop.hive.metastore.api.SerDeInfo;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
 import org.apache.hadoop.hive.metastore.api.Table;
 
 import java.net.ConnectException;
+import java.util.ArrayList;
+import java.util.HashMap;
 import java.util.List;
+import java.util.Map;
 
 public abstract class AbstractDataConnectorProvider implements IDataConnectorProvider {
   protected String scoped_db = null;
@@ -68,4 +76,41 @@ public abstract class AbstractDataConnectorProvider implements IDataConnectorPro
   @Override public Table getTable(String tableName) throws MetaException {
 return null;
   }
+
+  protected Table buildTableFromColsList(String tableName, List<FieldSchema> cols) {
+//Setting the storage descriptor.
+StorageDescriptor sd = new StorageDescriptor();
+sd.setCols(cols);
+SerDeInfo serdeInfo = new SerDeInfo();
+serdeInfo.setName(tableName);
+serdeInfo.setSerializationLib("org.apache.hive.storage.jdbc.JdbcSerDe");
+Map<String, String> serdeParams = new HashMap<String, String>();
+serdeParams.put("serialization.format", "1");
+serdeInfo.setParameters(serdeParams);
+
+sd.setSerdeInfo(serdeInfo);
+sd.setInputFormat("org.apache.hive.storage.jdbc.JdbcInputFormat"); // TODO
+sd.setOutputFormat("org.apache.hive.storage.jdbc.JdbcOutputFormat"); // 
TODO
+sd.setLocation("/tmp/some_dummy_path"); // TODO
+sd.setBucketCols(new ArrayList<String>());
+sd.setSortCols(new ArrayList<Order>());
+
+//Setting the required table information
+Table table = new Table();
+table.setTableName(tableName);
+table.setTableType(TableType.EXTERNAL_TABLE.toString());
+table.setDbName(scoped_db);
+table.setSd(sd);
+// set table properties that subclasses can fill-in
+table.setParameters(new HashMap<String, String>());
+// set partition keys to empty
+table.setPartitionKeys(new ArrayList<FieldSchema>());
+
+return table;
+  }
+
+  abstract protected String getInputClass();
+
+  abstract protected String getOutputClass();
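The hunk above introduces a template-method shape: the abstract base builds the common table/storage-descriptor skeleton while subclasses supply only the connector-specific input/output format class names. A compact, self-contained sketch of that pattern (stringified descriptor instead of Hive's `StorageDescriptor`; class names here are illustrative stand-ins):

```java
import java.util.ArrayList;
import java.util.List;

// Base class owns the assembly steps; subclasses fill in the variable parts.
abstract class ConnectorProviderSketch {
    public String buildStorageDescriptor(String tableName) {
        List<String> parts = new ArrayList<>();
        parts.add("serde=org.apache.hive.storage.jdbc.JdbcSerDe");
        parts.add("input=" + getInputClass());   // supplied by the subclass
        parts.add("output=" + getOutputClass()); // supplied by the subclass
        return tableName + "{" + String.join(",", parts) + "}";
    }

    abstract protected String getInputClass();
    abstract protected String getOutputClass();
}

public class JdbcProviderSketch extends ConnectorProviderSketch {
    @Override protected String getInputClass()  { return "org.apache.hive.storage.jdbc.JdbcInputFormat"; }
    @Override protected String getOutputClass() { return "org.apache.hive.storage.jdbc.JdbcOutputFormat"; }

    public static void main(String[] args) {
        System.out.println(new JdbcProviderSketch().buildStorageDescriptor("emp"));
    }
}
```

The follow-up commit in this thread ("Deleted commented out code and fixed location and IO classes") completes exactly this refactoring by replacing the hard-coded `// TODO` format strings with the abstract getters.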
 

[hive] 06/38: HIVE-24396: Build failure in itests due to unimplemented interface methods

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 0b9a4f4304d4b7c7a6ab77fad1b1bae26cb7eede
Author: Naveen Gangam 
AuthorDate: Tue Nov 17 00:42:47 2020 -0500

HIVE-24396: Build failure in itests due to unimplemented interface methods
---
 .../hcatalog/listener/DummyRawStoreFailEvent.java  | 27 ++
 .../ql/ddl/database/create/CreateDatabaseDesc.java |  3 +--
 .../database/create/CreateDatabaseOperation.java   |  3 +--
 3 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java b/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java
index f93a3c7..2d7ad24 100644
--- a/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java
+++ b/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java
@@ -51,6 +51,7 @@ import org.apache.hadoop.hive.metastore.api.AlreadyExistsException;
 import org.apache.hadoop.hive.metastore.api.ColumnStatistics;
 import org.apache.hadoop.hive.metastore.api.CreationMetadata;
 import org.apache.hadoop.hive.metastore.api.CurrentNotificationEventId;
+import org.apache.hadoop.hive.metastore.api.DataConnector;
 import org.apache.hadoop.hive.metastore.api.Database;
 import org.apache.hadoop.hive.metastore.api.FieldSchema;
 import org.apache.hadoop.hive.metastore.api.FileMetadataExprType;
@@ -245,6 +246,32 @@ public class DummyRawStoreFailEvent implements RawStore, Configurable {
   }
 
   @Override
+  public List getAllDataConnectors() throws MetaException {
+return objectStore.getAllDataConnectors();
+  }
+
+  @Override
+  public DataConnector getDataConnector(String connectorName) throws NoSuchObjectException {
+return objectStore.getDataConnector(connectorName);
+  }
+
+  @Override
+  public boolean alterDataConnector(String connectorName, DataConnector connector)
+  throws MetaException, NoSuchObjectException {
+return objectStore.alterDataConnector(connectorName, connector);
+  }
+
+  @Override
+  public boolean dropDataConnector(String connector) throws MetaException, NoSuchObjectException {
+return objectStore.dropDataConnector(connector);
+  }
+
+  @Override
+  public void createDataConnector(DataConnector connector) throws MetaException, InvalidObjectException {
+objectStore.createDataConnector(connector);
+  }
+
+  @Override
   public boolean createType(Type type) {
 return objectStore.createType(type);
   }
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseDesc.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseDesc.java
index 45df31c..1590133 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseDesc.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseDesc.java
@@ -34,7 +34,6 @@ import javax.xml.crypto.Data;
 @Explain(displayName = "Create Database", explainLevels = { Level.USER, Level.DEFAULT, Level.EXTENDED })
 public class CreateDatabaseDesc implements DDLDesc, Serializable {
   private static final long serialVersionUID = 1L;
-  public static final String REMOTEDB_LOCATION = "REMOTE_LOCATION".intern();
 
   private final String databaseName;
   private final String comment;
@@ -57,9 +56,9 @@ public class CreateDatabaseDesc implements DDLDesc, Serializable {
 this.comment = comment;
 if (dbtype != null && dbtype.equalsIgnoreCase("REMOTE")) {
   this.dbType = DatabaseType.REMOTE;
-  this.locationUri = REMOTEDB_LOCATION; // this is non-null in the HMSDB
   this.connectorName = connectorName;
   this.remoteDbName = remoteDbName;
+  this.locationUri = null;
   this.managedLocationUri = null;
 } else {
   this.dbType = DatabaseType.NATIVE;
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseOperation.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseOperation.java
index 97e318d..def742d 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseOperation.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/create/CreateDatabaseOperation.java
@@ -54,15 +54,14 @@ public class CreateDatabaseOperation extends DDLOperation {
 if (desc.getManagedLocationUri() != null) {
   database.setManagedLocationUri(desc.getManagedLocationUri());
 }
-makeLocationQualified(database);
 if (database.getLocationUri().equalsIgnoreCase(database.getManagedLocationUri())) {
   throw new HiveException("Managed and external locations for database cannot be the same");
 }
   } else {
-database.setLocationUri(CreateDatabaseDesc.REMOTEDB_LOCATION);
 

[hive] 05/38: Deleted commented out code and fixed location and IO classes

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 80013b401ce13311d923896eff7e5fff96d5ec5e
Author: Naveen Gangam 
AuthorDate: Mon Nov 16 14:13:38 2020 -0500

Deleted commented out code and fixed location and IO classes
---
 .../apache/hadoop/hive/metastore/Warehouse.java|  6 +--
 .../AbstractDataConnectorProvider.java |  8 +--
 .../jdbc/AbstractJDBCConnectorProvider.java| 60 ++
 .../dataconnector/jdbc/MySQLConnectorProvider.java | 57 
 4 files changed, 22 insertions(+), 109 deletions(-)

diff --git a/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/Warehouse.java b/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/Warehouse.java
index ef52ed9..608c56f 100755
--- a/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/Warehouse.java
+++ b/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/Warehouse.java
@@ -227,7 +227,7 @@ public class Warehouse {
*/
   public Path determineDatabasePath(Catalog cat, Database db) throws MetaException {
 if (db.getType() == DatabaseType.REMOTE) {
-  return getRemoteDatabasePath();
+  return getDefaultDatabasePath(db.getName(), true);
 }
 if (db.isSetLocationUri()) {
   return getDnsPath(new Path(db.getLocationUri()));
@@ -334,10 +334,6 @@ public class Warehouse {
 }
   }
 
-  public Path getRemoteDatabasePath() throws MetaException {
-return new Path(getWhRootExternal(), "dummy_path_for_remote_database.db");
-  }
-
   private boolean hasExternalWarehouseRoot() {
 return !StringUtils.isBlank(whRootExternalString);
   }
diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/AbstractDataConnectorProvider.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/AbstractDataConnectorProvider.java
index 74ddbdd..04a5842 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/AbstractDataConnectorProvider.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/AbstractDataConnectorProvider.java
@@ -89,9 +89,9 @@ public abstract class AbstractDataConnectorProvider implements IDataConnectorPro
 serdeInfo.setParameters(serdeParams);
 
 sd.setSerdeInfo(serdeInfo);
-sd.setInputFormat("org.apache.hive.storage.jdbc.JdbcInputFormat"); // TODO
-sd.setOutputFormat("org.apache.hive.storage.jdbc.JdbcOutputFormat"); // TODO
-sd.setLocation("/tmp/some_dummy_path"); // TODO
+sd.setInputFormat(getInputClass());
+sd.setOutputFormat(getOutputClass());
+sd.setLocation(getTableLocation(tableName));
 sd.setBucketCols(new ArrayList());
 sd.setSortCols(new ArrayList());
 
@@ -113,4 +113,6 @@ public abstract class AbstractDataConnectorProvider implements IDataConnectorPro
   abstract protected String getInputClass();
 
   abstract protected String getOutputClass();
+
+  abstract protected String getTableLocation(String tblName);
 }
diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/AbstractJDBCConnectorProvider.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/AbstractJDBCConnectorProvider.java
index d436123..62a7786 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/AbstractJDBCConnectorProvider.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/AbstractJDBCConnectorProvider.java
@@ -1,5 +1,6 @@
 package org.apache.hadoop.hive.metastore.dataconnector.jdbc;
 
+import org.apache.hadoop.hive.metastore.Warehouse;
 import org.apache.hadoop.hive.metastore.api.DataConnector;
 import org.apache.hadoop.hive.metastore.api.FieldSchema;
 import org.apache.hadoop.hive.metastore.api.MetaException;
@@ -22,6 +23,7 @@ import java.util.Map;
 
 public abstract class AbstractJDBCConnectorProvider extends AbstractDataConnectorProvider {
   private static Logger LOG = LoggerFactory.getLogger(AbstractJDBCConnectorProvider.class);
+  protected static Warehouse warehouse = null;
 
   // duplicate constants from Constants.java to avoid a dependency on hive-common
   public static final String JDBC_HIVE_STORAGE_HANDLER_ID =
@@ -72,6 +74,10 @@ public abstract class AbstractJDBCConnectorProvider extends AbstractDataConnecto
 LOG.warn("Could not read key value from keystore");
   }
 }
+
+try {
+  warehouse = new Warehouse(MetastoreConf.newMetastoreConf());
+} catch (MetaException e) { /* ignore */ }
   }
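The `getValueFromKeystore` helper added to `MetastoreConf` earlier in this thread reads a credential from a Hadoop credential provider; the contract visible in the diff is that a null keystore path or null key yields null rather than an error. A hedged stand-in (a plain map plays the role of the credential provider, and no `IOException` is modeled; this is not Hive's actual implementation):

```java
import java.util.Map;

public class KeystoreSketch {
    private final Map<String, char[]> store; // stand-in for the JCEKS-backed provider

    public KeystoreSketch(Map<String, char[]> store) {
        this.store = store;
    }

    public char[] getValueFromKeystore(String keystorePath, String key) {
        char[] value = null;
        if (keystorePath != null && key != null) {
            // real code configures CREDENTIAL_PROVIDER_PATH and calls conf.getPassword(key)
            value = store.get(key);
        }
        return value; // null when path or key is missing, as in the diff
    }

    public static void main(String[] args) {
        KeystoreSketch ks = new KeystoreSketch(
            Map.of("hive.sql.dbcp.password", "hive1".toCharArray()));
        System.out.println(ks.getValueFromKeystore(null, "hive.sql.dbcp.password") == null);
        System.out.println(new String(
            ks.getValueFromKeystore("jceks://file/tmp/creds.jceks", "hive.sql.dbcp.password")));
    }
}
```

Returning `char[]` rather than `String` follows the usual credential-API convention of letting callers zero the buffer after use.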
 
   

[hive] 02/38: Adding DDL support for connectors (create/drop/show/desc/alter)

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 87c74ec2d5f86e911d7b832f4d58b5e5e8f4c161
Author: Naveen Gangam 
AuthorDate: Tue Nov 10 12:27:03 2020 -0500

Adding DDL support for connectors (create/drop/show/desc/alter)
---
 .../hadoop/hive/ql/parse/AlterClauseParser.g   |  4 +-
 .../apache/hadoop/hive/ql/parse/CreateDDLParser.g  |  9 ++-
 .../hadoop/hive/ql/parse/IdentifiersParser.g   |  2 +-
 .../ql/ddl/database/create/CreateDatabaseDesc.java |  9 ---
 .../alter/AbstractAlterDataConnectorAnalyzer.java  | 42 +++
 .../alter/AbstractAlterDataConnectorDesc.java  | 54 +
 .../alter/AbstractAlterDataConnectorOperation.java | 59 +++
 .../owner/AlterDataConnectorSetOwnerAnalyzer.java  | 54 +
 .../owner/AlterDataConnectorSetOwnerDesc.java  | 45 +++
 .../owner/AlterDataConnectorSetOwnerOperation.java | 41 ++
 .../AlterDataConnectorSetPropertiesAnalyzer.java   | 58 ++
 .../AlterDataConnectorSetPropertiesDesc.java   | 47 
 .../AlterDataConnectorSetPropertiesOperation.java  | 49 
 .../url/AlterDataConnectorSetUrlAnalyzer.java  | 47 
 .../alter/url/AlterDataConnectorSetUrlDesc.java| 43 +++
 .../url/AlterDataConnectorSetUrlOperation.java | 65 
 .../create/CreateDataConnectorAnalyzer.java| 88 ++
 .../create/CreateDataConnectorDesc.java| 80 
 .../create/CreateDataConnectorOperation.java   | 71 +
 .../desc/DescDataConnectorAnalyzer.java| 61 +++
 .../dataconnector/desc/DescDataConnectorDesc.java  | 71 +
 .../desc/DescDataConnectorOperation.java   | 62 +++
 .../drop/DropDataConnectorAnalyzer.java| 59 +++
 .../dataconnector/drop/DropDataConnectorDesc.java  | 62 +++
 .../drop/DropDataConnectorOperation.java   | 65 
 .../show/ShowDataConnectorsAnalyzer.java   | 57 ++
 .../dataconnector/show/ShowDataConnectorsDesc.java | 54 +
 .../show/ShowDataConnectorsOperation.java  | 66 
 28 files changed, 1411 insertions(+), 13 deletions(-)

diff --git a/parser/src/java/org/apache/hadoop/hive/ql/parse/AlterClauseParser.g b/parser/src/java/org/apache/hadoop/hive/ql/parse/AlterClauseParser.g
index c5074be..4625618 100644
--- a/parser/src/java/org/apache/hadoop/hive/ql/parse/AlterClauseParser.g
+++ b/parser/src/java/org/apache/hadoop/hive/ql/parse/AlterClauseParser.g
@@ -463,8 +463,8 @@ alterDataConnectorStatementSuffix
 alterDataConnectorSuffixProperties
 @init { gParent.pushMsg("alter connector set properties statement", state); }
 @after { gParent.popMsg(state); }
-: name=identifier KW_SET KW_DBPROPERTIES dbProperties
--> ^(TOK_ALTERDATACONNECTOR_PROPERTIES $name dbProperties)
+: name=identifier KW_SET KW_DCPROPERTIES dcProperties
+-> ^(TOK_ALTERDATACONNECTOR_PROPERTIES $name dcProperties)
 ;
 
 alterDataConnectorSuffixSetOwner
diff --git a/parser/src/java/org/apache/hadoop/hive/ql/parse/CreateDDLParser.g b/parser/src/java/org/apache/hadoop/hive/ql/parse/CreateDDLParser.g
index 434ae57..69da7c7 100644
--- a/parser/src/java/org/apache/hadoop/hive/ql/parse/CreateDDLParser.g
+++ b/parser/src/java/org/apache/hadoop/hive/ql/parse/CreateDDLParser.g
@@ -112,7 +112,7 @@ createTableStatement
 createDataConnectorStatement
 @init { gParent.pushMsg("create connector statement", state); }
 @after { gParent.popMsg(state); }
-: KW_CREATE KW_DATACONNECTOR ifNotExists? name=identifier dataConnectorType dataConnectorUrl dataConnectorComment? ( KW_WITH KW_PROPERTIES dcprops=dbProperties)?
+: KW_CREATE KW_DATACONNECTOR ifNotExists? name=identifier dataConnectorType dataConnectorUrl dataConnectorComment? ( KW_WITH KW_DCPROPERTIES dcprops=dcProperties)?
 -> ^(TOK_CREATEDATACONNECTOR $name ifNotExists? dataConnectorType dataConnectorUrl dataConnectorComment? $dcprops?)
 ;
 
@@ -137,6 +137,13 @@ dataConnectorType
 -> ^(TOK_DATACONNECTORTYPE $dcType)
 ;
 
+dcProperties
+@init { gParent.pushMsg("dcproperties", state); }
+@after { gParent.popMsg(state); }
+:
  LPAREN dbPropertiesList RPAREN -> ^(TOK_DATACONNECTORPROPERTIES dbPropertiesList)
+;
+
 dropDataConnectorStatement
 @init { gParent.pushMsg("drop connector statement", state); }
 @after { gParent.popMsg(state); }
diff --git a/parser/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g b/parser/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
index 9402471..d836b80 100644
--- a/parser/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
+++ b/parser/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
@@ -913,7 +913,7 @@ nonReserved
 KW_ABORT | KW_ADD | KW_ADMIN 
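
The grammar hunks above (truncated in this digest) replace the reuse of DBPROPERTIES with a dedicated DCPROPERTIES clause for data connectors. As a rough sketch only, the DDL shapes these rules appear to accept could look like the following; keyword spellings are inferred from the lexer tokens (KW_DATACONNECTOR surfacing as CONNECTOR), and the connector name, URL, and property keys are purely illustrative, not taken from the patch:

```sql
-- Illustrative only: connector name, JDBC URL, and property keys are made up.
CREATE CONNECTOR IF NOT EXISTS mysql_conn
TYPE 'mysql'
URL 'jdbc:mysql://localhost:3306'
COMMENT 'example remote datasource'
WITH DCPROPERTIES ('hive.sql.dbcp.username'='hive',
                   'hive.sql.dbcp.password'='hive');

-- Matches alterDataConnectorSuffixProperties after the KW_DCPROPERTIES change.
ALTER CONNECTOR mysql_conn SET DCPROPERTIES ('hive.sql.dbcp.username'='admin');

DROP CONNECTOR IF EXISTS mysql_conn;
```

The design point of the diff is that connector properties now parse through their own `dcProperties` rule into TOK_DATACONNECTORPROPERTIES rather than piggybacking on database DBPROPERTIES, even though the parenthesized key/value list (`dbPropertiesList`) is shared.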

[hive] branch master updated (46ddd5a -> 2eb0e00)

2021-04-06 Thread ngangam
This is an automated email from the ASF dual-hosted git repository.

ngangam pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 46ddd5a  HIVE-24895. Add a DataCopyEnd stage in ReplStateLogTask for 
external table replication. (#2083)(Ayush Saxena, reviewed by Pravin Kumar 
Sinha, Aasha Medhi)
 new 69e5417  External metastore: clean after rebase
 new 87c74ec  Adding DDL support for connectors 
(create/drop/show/desc/alter)
 new d65307d  Implemented getTable and getTableNames for MYSQL (working)
 new 91f1ccd  Added provider for postgres, refactored bunch of classes
 new 80013b4  Deleted commented out code and fixed location and IO classes
 new 0b9a4f4  HIVE-24396: Build failure in itests due to unimplemented 
interface methods
 new c6ed378  NullPointerException in CreateDatabaseOperation due to last 
change
 new 013a693  Adding a qtest and fixing type for default db
 new b30173d  Adding schema changes for mysql and postgres as well
 new 1523cd4  HIVE-24396: getTable/getTables API not expected to throw 
NoSuchObjectException
 new 7d91a9a  HIVE-24396: Added schema changes for Oracle; Made DBS.TYPE NOT NULL in all scripts; Added Type support to DatabaseBuilder; Added Unit test for DataConnector; Added Unit test for REMOTE Database; Fixed test failures in TestSchemaToolForMetaStore
 new 6369daa  Missed change from the rebase
 new 7504491  HIVE-24396: Fix for drop database for remote databases
 new 683d0ae  HIVE-24396: qtest failures, regenerate them because of new 
columns
 new df8cb11  HIVE-24396: Follow up test failure fixes
 new a2a592f  Test failures with tez driver and duplicate error codes
 new 3de6032  HIVE-24396: refactored code to Abstract class and providers 
share common code
 new 23aafb7  Build issue with EventMessage
 new d66d5fc  fix for 2 additional test failures
 new 0b60db9  Retain case on table names during query processing
 new 9937963  HIVE-24396 Moving create/drop/alter APIs to the interface. 
Reverting fix for case sensitivity
 new 5a2236e  HIVE-24396: Build failure due to duplicate db definitions
 new 60ae013  HIVE-24396: Addressing test failures
 new 3e41782  HIVE-24396: Unhandled longvarchar and integer types for derby
 new 4779e86  HIVE-24396: Fix to CachedStore to make DBs NATIVE and fix to 
create_table_core on null DBs
 new 5c98e30  HIVE-24396: get_table_core() to return null instead of 
exception
 new 4319d29  HIVE-24396: Fix in connector provider to return null instead 
of blank Table
 new 2b4fa4e  HIVE-24396: Database name for remote table should be set to 
hive dbname not the scoped dbname
 new 506621c  HIVE-24396: Fix for NPE in get_database_core with null 
catalog name
 new d7a8eb7  HIVE-24396: Some changes with formatters after the rebase 
(Naveen Gangam)
 new 5f5ec66  HIVE-24396: Duplicate SQL statements in derby upgrade script 
(Naveen Gangam)
 new b9150e1  HIVE-24396: Incorporating feedback from the initial review 
(Naveen Gangam)
 new 14283d3  HIVE-24396: Cleanup and one test failure (Naveen Gangam)
 new 07ebf02  HIVE-24396: qtest failure (Naveen Gangam)
 new 53edee9  HIVE-24396: Changes from additional feedback from code review 
(Naveen Gangam)
 new cd90398  HIVE-24396: Conflict from rebase to master
 new 34720cf  HIVE-24396: Remaining comments from the feedback (Naveen 
Gangam)
 new 2eb0e00  HIVE-24396: Additional feedback incorporated (Naveen Gangam): Removed ReplicationSpec for connectors; Notification event for alter connector; removed some code.

The 38 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../java/org/apache/hadoop/hive/ql/ErrorMsg.java   | 3 +
 .../hcatalog/listener/DummyRawStoreFailEvent.java  |27 +
 .../hadoop/hive/ql/parse/AlterClauseParser.g   |31 +
 .../apache/hadoop/hive/ql/parse/CreateDDLParser.g  |42 +
 .../apache/hadoop/hive/ql/parse/HiveLexerParent.g  | 6 +
 .../org/apache/hadoop/hive/ql/parse/HiveParser.g   |35 +-
 .../hadoop/hive/ql/parse/IdentifiersParser.g   | 8 +-
 pom.xml| 3 +
 .../database/create/CreateDatabaseAnalyzer.java|37 +-
 .../ql/ddl/database/create/CreateDatabaseDesc.java |41 +-
 .../database/create/CreateDatabaseOperation.java   |24 +-
 .../ql/ddl/database/desc/DescDatabaseDesc.java | 6 +-
 .../ddl/database/desc/DescDatabaseFormatter.java   |23 +-
 .../ddl/database/desc/DescDatabaseOperation.java   |27 +-
 .../alter/AbstractAlterDataConnectorAnalyzer.java  |42 +
 .../alter/AbstractAlterDataConnectorDesc.java  |30 +-
 

[hive] branch master updated (b2537d1 -> 46ddd5a)

2021-04-06 Thread aasha
This is an automated email from the ASF dual-hosted git repository.

aasha pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from b2537d1  HIVE-24953: Set repl.source.for property in the db if db is 
under replication in incremental dump. (#2133) (Haymant Mangla, reviewed by 
Aasha Medhi)
 add 46ddd5a  HIVE-24895. Add a DataCopyEnd stage in ReplStateLogTask for 
external table replication. (#2083)(Ayush Saxena, reviewed by Pravin Kumar 
Sinha, Aasha Medhi)

No new revisions were added by this update.

Summary of changes:
 .../TestReplicationScenariosExternalTables.java| 74 ++
 .../hadoop/hive/ql/exec/repl/DirCopyTask.java  | 17 -
 .../hadoop/hive/ql/exec/repl/ReplDumpTask.java | 28 
 .../hadoop/hive/ql/exec/repl/ReplDumpWork.java | 14 
 .../hadoop/hive/ql/exec/repl/ReplLoadTask.java | 17 +++--
 .../hadoop/hive/ql/exec/repl/ReplStateLogTask.java |  2 -
 .../hadoop/hive/ql/exec/repl/ReplStateLogWork.java | 14 +++-
 .../events/filesystem/BootstrapEventsIterator.java | 10 ++-
 .../incremental/IncrementalLoadTasksBuilder.java   |  5 ++
 .../hadoop/hive/ql/exec/repl/util/ReplUtils.java   | 31 +
 .../hadoop/hive/ql/parse/repl/ReplLogger.java  |  8 +++
 .../hadoop/hive/ql/parse/repl/ReplState.java   |  3 +-
 .../parse/repl/load/log/BootstrapLoadLogger.java   |  8 +++
 .../ql/parse/repl/load/log/state/DataCopyEnd.java  | 17 +++--
 .../apache/hadoop/hive/shims/Hadoop23Shims.java| 13 +++-
 15 files changed, 228 insertions(+), 33 deletions(-)
 copy itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/TestJdbcWithMiniKdcSQLAuthHttp.java => ql/src/java/org/apache/hadoop/hive/ql/parse/repl/load/log/state/DataCopyEnd.java (68%)


[hive] branch master updated (15634af -> b2537d1)

2021-04-06 Thread aasha
This is an automated email from the ASF dual-hosted git repository.

aasha pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 15634af  HIVE-24840: Materialized View incremental rebuild produces 
wrong result set after compaction (Krisztian Kasa, reviewed by Jesus Camacho 
Rodriguez, Peter Vary) ADDENDUM - fix thrift
 add b2537d1  HIVE-24953: Set repl.source.for property in the db if db is 
under replication in incremental dump. (#2133) (Haymant Mangla, reviewed by 
Aasha Medhi)

No new revisions were added by this update.

Summary of changes:
 .../hive/ql/parse/TestReplicationScenarios.java| 18 +
 .../parse/TestScheduledReplicationScenarios.java   | 14 +++
 .../hadoop/hive/ql/exec/repl/ReplDumpTask.java | 47 +-
 3 files changed, 59 insertions(+), 20 deletions(-)


[hive] branch master updated (71d0838 -> 15634af)

2021-04-06 Thread krisztiankasa
This is an automated email from the ASF dual-hosted git repository.

krisztiankasa pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 71d0838  HIVE-24955: New metrics about aborted transactions (Karen 
Coppage, reviewed by Denys Kuzmenko)
 add 15634af  HIVE-24840: Materialized View incremental rebuild produces 
wrong result set after compaction (Krisztian Kasa, reviewed by Jesus Camacho 
Rodriguez, Peter Vary) ADDENDUM - fix thrift

No new revisions were added by this update.

Summary of changes:
 .../src/gen/thrift/gen-cpp/hive_metastore_types.cpp  | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)


[hive] branch master updated (ea7589d -> 71d0838)

2021-04-06 Thread klcopp
This is an automated email from the ASF dual-hosted git repository.

klcopp pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from ea7589d  HIVE-24937 : Fix sync bottleneck in SyslogParser
 add 71d0838  HIVE-24955: New metrics about aborted transactions (Karen 
Coppage, reviewed by Denys Kuzmenko)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/hive/ql/lockmgr/TestDbTxnManager.java   | 11 +++-
 .../ql/txn/compactor/TestCompactionMetrics.java| 59 +-
 .../hive/metastore/metrics/AcidMetricService.java  |  6 +++
 .../hive/metastore/metrics/MetricsConstants.java   |  9 +++-
 .../hive/metastore/txn/CompactionTxnHandler.java   |  4 ++
 .../hadoop/hive/metastore/txn/MetricsInfo.java | 27 ++
 .../hadoop/hive/metastore/txn/TxnHandler.java  | 18 +--
 7 files changed, 114 insertions(+), 20 deletions(-)