[jira] [Comment Edited] (HIVE-26555) Read-only mode for Hive database

2023-01-02 Thread Teddy Choi (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-26555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17653808#comment-17653808
 ] 

Teddy Choi edited comment on HIVE-26555 at 1/3/23 7:03 AM:
---

[~abstractdog], sorry for the late reply.

It assumes an [active-passive HA 
configuration|https://en.wikipedia.org/wiki/High-availability_cluster#Node_configurations]
 with reads on the passive instance. The active instance should be the single source of 
truth, while the passive instance should follow it. However, the current 
Hive replication design allows the passive instance to diverge from the active 
instance. Data divergence between the active and passive instances is hard to 
detect and resolve. This read-only mode prevents the passive instance from 
changing, avoiding any unintended divergence.

References
 * Microsoft SQL Server: [Configure read-only access to a secondary replica of 
an Always On availability 
group|https://learn.microsoft.com/en-us/sql/database-engine/availability-groups/windows/configure-read-only-access-on-an-availability-replica-sql-server?view=sql-server-ver16]
 * Oracle Database: [High Availability Overview and Best Practices - Features 
for Maximizing 
Availability|https://docs.oracle.com/en/database/oracle/oracle-database/21/haovw/ha-features.html#GUID-314F15CE-BD8F-45B0-911E-B7FCC2B8006A]
 * IBM DB2: [Enabling reads on 
standby|https://www.ibm.com/docs/en/db2/11.5?topic=feature-enabling-reads-standby]

 


was (Author: teddy.choi):
[~abstractdog], sorry for the late reply.

It assumes an [active-passive HA 
configuration|https://en.wikipedia.org/wiki/High-availability_cluster#Node_configurations]
 with reads on the passive instance. The active instance should be the single source of 
truth, while the passive instance should follow it. However, the current 
Hive replication design allows the passive instance to diverge from the active 
instance. Data divergence between the active and passive instances is hard to 
detect and resolve. This read-only mode prevents the passive instance from 
changing, avoiding any unintended divergence.

References
 * Microsoft SQL Server: [Configure read-only access to a secondary replica of 
an Always On availability 
group|https://learn.microsoft.com/en-us/sql/database-engine/availability-groups/windows/configure-read-only-access-on-an-availability-replica-sql-server?view=sql-server-ver16]
 * Oracle Database: [High Availability Overview and Best Practices - Features 
for Maximizing 
Availability|https://docs.oracle.com/en/database/oracle/oracle-database/21/haovw/ha-features.html#GUID-314F15CE-BD8F-45B0-911E-B7FCC2B8006A]
 * IBM DB2: [Enabling reads on 
standby|https://www.ibm.com/docs/en/db2/11.5?topic=feature-enabling-reads-standby]

 

> Read-only mode for Hive database
> 
>
> Key: HIVE-26555
> URL: https://issues.apache.org/jira/browse/HIVE-26555
> Project: Hive
>  Issue Type: New Feature
>Reporter: Teddy Choi
>Assignee: Teddy Choi
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> h1. Purpose
> In failover/fail-back scenarios, a Hive database needs to be read-only while 
> the other one stays writable, to keep a single source of truth.
> h1. User-Facing Changes
> Yes. The EnforceReadOnlyDatabaseHook class implements the ExecuteWithHookContext 
> interface. hive.exec.pre.hooks needs to include the class name so that an 
> instance is created. The "readonly" database property can be configured to 
> turn the mode on and off.
> h2. Allowed read operations
> All read operations without any data/metadata change are allowed.
>  * EXPLAIN
>  * USE(or SWITCHDATABASE)
>  * REPLDUMP
>  * REPLSTATUS
>  * EXPORT
>  * KILL_QUERY
>  * DESC prefix
>  * SHOW prefix
>  * QUERY with SELECT or EXPLAIN. INSERT, DELETE, UPDATE are disallowed.
> h2. Allowed write operations
> Most write operations that change data/metadata are disallowed, with a few 
> allowed exceptions. The first is ALTER DATABASE, to make a database writable 
> again. The second is replication load, to load a dumped database.
>  * ALTER DATABASE db_name SET DBPROPERTIES without "readonly"="true".
>  * REPLLOAD
> h1. Tests
>  * read_only_hook.q: USE, SHOW, DESC, DESCRIBE, EXPLAIN, SELECT
>  * read_only_delete.q
>  * read_only_insert.q
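
A minimal usage sketch of the mode described above. This is an illustration only: the fully qualified hook class name shown here and the database name are assumptions, and the exact behavior is defined by the patch, not this sketch.

```sql
-- Register the pre-execution hook (the package shown is hypothetical;
-- check the PR for the actual fully qualified class name)
SET hive.exec.pre.hooks=org.apache.hadoop.hive.ql.hooks.EnforceReadOnlyDatabaseHook;

-- Turn the read-only mode on for a database
ALTER DATABASE repl_target SET DBPROPERTIES ('readonly'='true');

-- Read operations are still allowed
USE repl_target;
SHOW TABLES;
SELECT * FROM t;

-- Write operations are rejected while the property is set
INSERT INTO t VALUES (1);   -- expected to fail under the hook

-- Allowed exception: making the database writable again
ALTER DATABASE repl_target SET DBPROPERTIES ('readonly'='false');
```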



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-26901) Add metrics on transactions in replication metrics table

2023-01-02 Thread Amit Saonerkar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Saonerkar reassigned HIVE-26901:
-


> Add metrics on transactions in replication metrics table 
> -
>
> Key: HIVE-26901
> URL: https://issues.apache.org/jira/browse/HIVE-26901
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Amit Saonerkar
>Assignee: Amit Saonerkar
>Priority: Major
>
> This is related to the corresponding 
> [https://jira.cloudera.com/browse/CDPD-17985?filter=-1]
> We need to enhance the replication metrics table by adding information 
> related to transactions during REPL DUMP/LOAD operations. 
> The basic idea is to give the user a picture of how transactions are 
> progressing during dump and load operations.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-26893) Extend batch partition APIs to ignore partition schemas

2023-01-02 Thread Sai Hemanth Gantasala (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-26893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17653809#comment-17653809
 ] 

Sai Hemanth Gantasala commented on HIVE-26893:
--

cc [~ngangam] 

> Extend batch partition APIs to ignore partition schemas
> ---
>
> Key: HIVE-26893
> URL: https://issues.apache.org/jira/browse/HIVE-26893
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore
>Reporter: Quanlong Huang
>Priority: Major
>
> There are several HMS APIs that return a list of partitions, e.g. 
> get_partitions_ps(), get_partitions_by_names(), add_partitions_req() with 
> needResult=true, etc. Each partition instance will have a unique list of 
> FieldSchemas as the partition schema:
> {code:java}
> org.apache.hadoop.hive.metastore.api.Partition
> -> org.apache.hadoop.hive.metastore.api.StorageDescriptor
>  -> cols: list<FieldSchema> {code}
> This can mean a large memory footprint for wide tables (e.g. with 2k 
> cols). See the heap histogram in IMPALA-11812 as an example.
> Some engines, like Impala, don't actually use or respect the partition-level 
> schema, so transmitting it is a waste of network/serde resources. It'd be nice 
> if these APIs provided an optional boolean flag for ignoring partition 
> schemas, so HMS clients (e.g. Impala) don't need to clear them later (to save 
> memory).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-26555) Read-only mode for Hive database

2023-01-02 Thread Teddy Choi (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-26555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17653808#comment-17653808
 ] 

Teddy Choi commented on HIVE-26555:
---

[~abstractdog], sorry for the late reply.

It assumes an [active-passive HA 
configuration|https://en.wikipedia.org/wiki/High-availability_cluster#Node_configurations]
 with reads on the passive instance. The active instance should be the single source of 
truth, while the passive instance should follow it. However, the current 
Hive replication design allows the passive instance to diverge from the active 
instance. Data divergence between the active and passive instances is hard to 
detect and resolve. This read-only mode prevents the passive instance from 
changing, avoiding any unintended divergence.

References
 * Microsoft SQL Server: [Configure read-only access to a secondary replica of 
an Always On availability 
group|https://learn.microsoft.com/en-us/sql/database-engine/availability-groups/windows/configure-read-only-access-on-an-availability-replica-sql-server?view=sql-server-ver16]
 * Oracle Database: [High Availability Overview and Best Practices - Features 
for Maximizing 
Availability|https://docs.oracle.com/en/database/oracle/oracle-database/21/haovw/ha-features.html#GUID-314F15CE-BD8F-45B0-911E-B7FCC2B8006A]
 * IBM DB2: [Enabling reads on 
standby|https://www.ibm.com/docs/en/db2/11.5?topic=feature-enabling-reads-standby]

 

> Read-only mode for Hive database
> 
>
> Key: HIVE-26555
> URL: https://issues.apache.org/jira/browse/HIVE-26555
> Project: Hive
>  Issue Type: New Feature
>Reporter: Teddy Choi
>Assignee: Teddy Choi
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> h1. Purpose
> In failover/fail-back scenarios, a Hive database needs to be read-only while 
> the other one stays writable, to keep a single source of truth.
> h1. User-Facing Changes
> Yes. The EnforceReadOnlyDatabaseHook class implements the ExecuteWithHookContext 
> interface. hive.exec.pre.hooks needs to include the class name so that an 
> instance is created. The "readonly" database property can be configured to 
> turn the mode on and off.
> h2. Allowed read operations
> All read operations without any data/metadata change are allowed.
>  * EXPLAIN
>  * USE(or SWITCHDATABASE)
>  * REPLDUMP
>  * REPLSTATUS
>  * EXPORT
>  * KILL_QUERY
>  * DESC prefix
>  * SHOW prefix
>  * QUERY with SELECT or EXPLAIN. INSERT, DELETE, UPDATE are disallowed.
> h2. Allowed write operations
> Most write operations that change data/metadata are disallowed, with a few 
> allowed exceptions. The first is ALTER DATABASE, to make a database writable 
> again. The second is replication load, to load a dumped database.
>  * ALTER DATABASE db_name SET DBPROPERTIES without "readonly"="true".
>  * REPLLOAD
> h1. Tests
>  * read_only_hook.q: USE, SHOW, DESC, DESCRIBE, EXPLAIN, SELECT
>  * read_only_delete.q
>  * read_only_insert.q



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-26900) Error message not representing the correct line number with a syntax error in a HQL File

2023-01-02 Thread Vikram Ahuja (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Ahuja updated HIVE-26900:

Affects Version/s: 4.0.0-alpha-2
   4.0.0-alpha-1
   3.1.2

> Error message not representing the correct line number with a syntax error in 
> a HQL File
> 
>
> Key: HIVE-26900
> URL: https://issues.apache.org/jira/browse/HIVE-26900
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.2, 4.0.0-alpha-1, 4.0.0-alpha-2
>Reporter: Vikram Ahuja
>Priority: Minor
>
> When an HQL file contains a syntax error, the error thrown by Beeline while 
> running the file reports the wrong line number. Both the line number and the 
> position are incorrect. The parser seems to ignore spaces and newlines and 
> always reports the error on line 1, irrespective of which line the error is 
> on in the HQL file.
>  
> For instance, consider the following test.hql file:
>  # --comment
>  # --comment
>  # SET hive.server2.logging.operation.enabled=true;
>  # SET hive.server2.logging.operation.level=VERBOSE;
>  # show tables;
>  #  
>  #  
>  #       CREATE TABLEE DUMMY;
>  
> When we call !run test.hql in Beeline, or run ./beeline -u 
> jdbc:hive2://localhost:1 -f test.hql, the error reported by Beeline is
> >>> CREATE TABLEE DUMMY;
> Error: Error while compiling statement: FAILED: ParseException line 1:7 
> cannot recongize input near 'CREATE' 'TABLEE' 'DUMMY' in ddl statement 
> (state=42000,code=4)
> The parser seems to count everything as starting from line 1 and to ignore 
> leading spaces in the line.
> The error line in the parse exception is shown as 1:7 but it should have been 
> 8:13.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-26900) Error message not representing the correct line number with a syntax error in a HQL File

2023-01-02 Thread Vikram Ahuja (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Ahuja updated HIVE-26900:

Description: 
When an HQL file contains a syntax error, the error thrown by Beeline while 
running the file reports the wrong line number. Both the line number and the 
position are incorrect. The parser seems to ignore spaces and newlines and 
always reports the error on line 1, irrespective of which line the error is 
on in the HQL file.

 

For instance, consider the following test.hql file:
 # --comment
 # --comment
 # SET hive.server2.logging.operation.enabled=true;
 # SET hive.server2.logging.operation.level=VERBOSE;
 # show tables;
 #  
 #  
 #       CREATE TABLEE DUMMY;

 

When we call !run test.hql in Beeline, or run ./beeline -u 
jdbc:hive2://localhost:1 -f test.hql, the error reported by Beeline is

>>> CREATE TABLEE DUMMY;

Error: Error while compiling statement: FAILED: ParseException line 1:7 cannot 
recongize input near 'CREATE' 'TABLEE' 'DUMMY' in ddl statement 
(state=42000,code=4)

The parser seems to count everything as starting from line 1 and to ignore 
leading spaces in the line.

The error line in the parse exception is shown as 1:7 but it should have been 
8:13.

  was:When an HQL file contains a syntax error, the error thrown by Beeline 
while running the file reports the wrong line number. Both the line number 
and the position are incorrect. The parser seems to ignore spaces and 
newlines and always reports the error on line 1, irrespective of which line 
the error is on in the HQL file.


> Error message not representing the correct line number with a syntax error in 
> a HQL File
> 
>
> Key: HIVE-26900
> URL: https://issues.apache.org/jira/browse/HIVE-26900
> Project: Hive
>  Issue Type: Bug
>Reporter: Vikram Ahuja
>Priority: Minor
>
> When an HQL file contains a syntax error, the error thrown by Beeline while 
> running the file reports the wrong line number. Both the line number and the 
> position are incorrect. The parser seems to ignore spaces and newlines and 
> always reports the error on line 1, irrespective of which line the error is 
> on in the HQL file.
>  
> For instance, consider the following test.hql file:
>  # --comment
>  # --comment
>  # SET hive.server2.logging.operation.enabled=true;
>  # SET hive.server2.logging.operation.level=VERBOSE;
>  # show tables;
>  #  
>  #  
>  #       CREATE TABLEE DUMMY;
>  
> When we call !run test.hql in Beeline, or run ./beeline -u 
> jdbc:hive2://localhost:1 -f test.hql, the error reported by Beeline is
> >>> CREATE TABLEE DUMMY;
> Error: Error while compiling statement: FAILED: ParseException line 1:7 
> cannot recongize input near 'CREATE' 'TABLEE' 'DUMMY' in ddl statement 
> (state=42000,code=4)
> The parser seems to count everything as starting from line 1 and to ignore 
> leading spaces in the line.
> The error line in the parse exception is shown as 1:7 but it should have been 
> 8:13.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26837) CTLT with hive.create.as.external.legacy as true creates managed table instead of external table

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26837?focusedWorklogId=836500&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836500
 ]

ASF GitHub Bot logged work on HIVE-26837:
-

Author: ASF GitHub Bot
Created on: 03/Jan/23 05:27
Start Date: 03/Jan/23 05:27
Worklog Time Spent: 10m 
  Work Description: saihemanth-cloudera commented on code in PR #3854:
URL: https://github.com/apache/hive/pull/3854#discussion_r1060286475


##
ql/src/test/results/clientpositive/llap/ctlt_translate_external.q.out:
##
@@ -0,0 +1,108 @@
+PREHOOK: query: create table test_mm(empno int, name string) partitioned 
by(dept string) stored as orc tblproperties('transactional'='true', 
'transactional_properties'='default')
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@test_mm
+POSTHOOK: query: create table test_mm(empno int, name string) partitioned 
by(dept string) stored as orc tblproperties('transactional'='true', 
'transactional_properties'='default')
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@test_mm
+PREHOOK: query: create table test_external like test_mm
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@test_external
+POSTHOOK: query: create table test_external like test_mm
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@test_external
+PREHOOK: query: desc formatted test_external
+PREHOOK: type: DESCTABLE
+PREHOOK: Input: default@test_external
+POSTHOOK: query: desc formatted test_external
+POSTHOOK: type: DESCTABLE
+POSTHOOK: Input: default@test_external
+# col_name data_type   comment 
+empno  int 
+name   string  
+
+# Partition Information 
+# col_name data_type   comment 
+dept   string  
+
+# Detailed Table Information
+Database:  default  
+ A masked pattern was here 
+Retention: 0
+ A masked pattern was here 
+Table Type:MANAGED_TABLE
+Table Parameters:   
+   COLUMN_STATS_ACCURATE   {\"BASIC_STATS\":\"true\"}
+   bucketing_version   2   
+   numFiles0   
+   numPartitions   0   
+   numRows 0   
+   rawDataSize 0   
+   totalSize   0   
+   transactional   true
+   transactional_propertiesdefault 
+ A masked pattern was here 

Review Comment:
   @ramesh0201 - There are a couple of other classes that distinguish managed 
and external locations, but those are irrelevant to the current functionality, 
so I would suggest adding a new method to this class. The new method would use 
an HMS client built from the configs you mentioned in the above .q file, use 
that client to create a table so that the created table is translated to an 
external table based on the configs, and then assert on the table locations.





Issue Time Tracking
---

Worklog Id: (was: 836500)
Time Spent: 1.5h  (was: 1h 20m)

> CTLT with hive.create.as.external.legacy as true creates managed table 
> instead of external table
> 
>
> Key: HIVE-26837
> URL: https://issues.apache.org/jira/browse/HIVE-26837
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When CTLT is used with the config hive.create.as.external.legacy=true, it 
> still creates a managed table by default. Use the steps below to reproduce:
> create external table test_ext(empno int, name string) partitioned by(dept 
> string) stored as orc;
> desc formatted test_ext;
> set hive.create.as.external.legacy=true;
> create table test_external like test_ext;
> desc formatted test_external;
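
For clarity, the repro above reduces to the following. The expected table type is a sketch based on the legacy-config semantics described in this issue, not output from a fixed build:

```sql
-- With the legacy flag, CREATE TABLE (and CTLT) should translate to an
-- external table
SET hive.create.as.external.legacy=true;

-- CTLT from an external table
CREATE TABLE test_external LIKE test_ext;

DESC FORMATTED test_external;
-- expected:       Table Type: EXTERNAL_TABLE
-- observed (bug): Table Type: MANAGED_TABLE
```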



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-22628) Add locks and transactions tables from sys db to information_schema

2023-01-02 Thread Akshat Mathur (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akshat Mathur reassigned HIVE-22628:


Assignee: Akshat Mathur

> Add locks and transactions tables from sys db to information_schema
> ---
>
> Key: HIVE-22628
> URL: https://issues.apache.org/jira/browse/HIVE-22628
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>Reporter: Zoltan Chovan
>Assignee: Akshat Mathur
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26899) Backport HIVE-20751 HIVE-23987 Upgrade arrow to 0.11.0 in branch-3

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26899?focusedWorklogId=836495&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836495
 ]

ASF GitHub Bot logged work on HIVE-26899:
-

Author: ASF GitHub Bot
Created on: 03/Jan/23 04:42
Start Date: 03/Jan/23 04:42
Worklog Time Spent: 10m 
  Work Description: amanraj2520 commented on PR #3902:
URL: https://github.com/apache/hive/pull/3902#issuecomment-1369390515

   @abstractdog Can you please approve and merge this PR




Issue Time Tracking
---

Worklog Id: (was: 836495)
Time Spent: 0.5h  (was: 20m)

> Backport HIVE-20751 HIVE-23987 Upgrade arrow to 0.11.0 in branch-3
> --
>
> Key: HIVE-26899
> URL: https://issues.apache.org/jira/browse/HIVE-26899
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26794) Explore retiring TxnHandler#connPoolMutex idle connections

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26794?focusedWorklogId=836491&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836491
 ]

ASF GitHub Bot logged work on HIVE-26794:
-

Author: ASF GitHub Bot
Created on: 03/Jan/23 04:10
Start Date: 03/Jan/23 04:10
Worklog Time Spent: 10m 
  Work Description: dengzhhu653 commented on code in PR #3817:
URL: https://github.com/apache/hive/pull/3817#discussion_r1060266864


##
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java:
##
@@ -6188,6 +6188,12 @@ public Connection getConnection(String username, String 
password) throws SQLExce
 connectionProps.setProperty("user", username);
 connectionProps.setProperty("password", password);
 Connection conn = driver.connect(connString, connectionProps);
+String prepareStmt = dbProduct != null ? dbProduct.getPrepareTxnStmt() 
: null;

Review Comment:
   This is used to make MySQL treat `"` as an identifier quote character: `SET 
@@session.sql_mode=ANSI_QUOTES`.
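
For context, that session statement changes how MySQL parses double quotes; a small illustration against the metastore backend (the table and column names are from the standard metastore schema, shown here only as an example):

```sql
-- Without ANSI_QUOTES, MySQL treats "TXNS" as a string literal rather
-- than an identifier; with it, double quotes delimit identifiers, as in
-- standard SQL, so metastore queries written with "-quoted names work.
SET @@session.sql_mode=ANSI_QUOTES;

SELECT "TXN_ID" FROM "TXNS";  -- "TXNS" is now parsed as a table identifier
```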





Issue Time Tracking
---

Worklog Id: (was: 836491)
Time Spent: 4h  (was: 3h 50m)

> Explore retiring TxnHandler#connPoolMutex idle connections
> --
>
> Key: HIVE-26794
> URL: https://issues.apache.org/jira/browse/HIVE-26794
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Instead of creating a fixed-size connection pool for TxnHandler#MutexAPI, the 
> pool can be given a more dynamic size, because: 
>  * TxnHandler#MutexAPI is primarily designed to provide coarse-grained mutex 
> support to maintenance tasks running inside the Metastore, and these tasks are 
> not user-facing;
>  * A fixed-size connection pool, the same size as the pool used in ObjectStore, 
> is wasteful for the non-leader instances in the warehouse; 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26794) Explore retiring TxnHandler#connPoolMutex idle connections

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26794?focusedWorklogId=836490&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836490
 ]

ASF GitHub Bot logged work on HIVE-26794:
-

Author: ASF GitHub Bot
Created on: 03/Jan/23 04:10
Start Date: 03/Jan/23 04:10
Worklog Time Spent: 10m 
  Work Description: dengzhhu653 commented on code in PR #3817:
URL: https://github.com/apache/hive/pull/3817#discussion_r1060267969


##
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/datasource/HikariCPDataSourceProvider.java:
##
@@ -72,6 +72,17 @@ public DataSource create(Configuration hdpConfig, int 
maxPoolSize) throws SQLExc
   config.setPoolName(poolName);
 }
 
+// It's kind of a waste to create a fixed size connection pool as same as 
the TxnHandler#connPool,
+// TxnHandler#connPoolMutex is mostly used for MutexAPI that is primarily 
designed to
+// provide coarse-grained mutex support to maintenance tasks running 
inside the Metastore,
+// add minimumIdle=2 and idleTimeout=5min to the pool, so that the 
connection pool can retire
+// the idle connection aggressively, this will make Metastore more 
scalable especially if
+// there is a leader in the warehouse.
+if ("mutex".equals(poolName)) {

Review Comment:
   This literal is used only in creating data source, so in my point of view we 
can use it directly.





Issue Time Tracking
---

Worklog Id: (was: 836490)
Time Spent: 3h 50m  (was: 3h 40m)

> Explore retiring TxnHandler#connPoolMutex idle connections
> --
>
> Key: HIVE-26794
> URL: https://issues.apache.org/jira/browse/HIVE-26794
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Instead of creating a fixed-size connection pool for TxnHandler#MutexAPI, the 
> pool can be given a more dynamic size, because: 
>  * TxnHandler#MutexAPI is primarily designed to provide coarse-grained mutex 
> support to maintenance tasks running inside the Metastore, and these tasks are 
> not user-facing;
>  * A fixed-size connection pool, the same size as the pool used in ObjectStore, 
> is wasteful for the non-leader instances in the warehouse; 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26794) Explore retiring TxnHandler#connPoolMutex idle connections

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26794?focusedWorklogId=836487&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836487
 ]

ASF GitHub Bot logged work on HIVE-26794:
-

Author: ASF GitHub Bot
Created on: 03/Jan/23 04:05
Start Date: 03/Jan/23 04:05
Worklog Time Spent: 10m 
  Work Description: dengzhhu653 commented on code in PR #3817:
URL: https://github.com/apache/hive/pull/3817#discussion_r1060267118


##
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/datasource/HikariCPDataSourceProvider.java:
##
@@ -72,6 +72,17 @@ public DataSource create(Configuration hdpConfig, int 
maxPoolSize) throws SQLExc
   config.setPoolName(poolName);
 }
 
+// It's kind of a waste to create a fixed size connection pool as same as 
the TxnHandler#connPool,
+// TxnHandler#connPoolMutex is mostly used for MutexAPI that is primarily 
designed to
+// provide coarse-grained mutex support to maintenance tasks running 
inside the Metastore,
+// add minimumIdle=2 and idleTimeout=5min to the pool, so that the 
connection pool can retire
+// the idle connection aggressively, this will make Metastore more 
scalable especially if
+// there is a leader in the warehouse.
+if ("mutex".equals(poolName)) {
+  config.setMinimumIdle(Math.min(maxPoolSize, 2));
+  config.setIdleTimeout(300 * 1000);

Review Comment:
   Done, this can be configured by `hikaricp.idleTimeout` and default is 10min





Issue Time Tracking
---

Worklog Id: (was: 836487)
Time Spent: 3h 40m  (was: 3.5h)

> Explore retiring TxnHandler#connPoolMutex idle connections
> --
>
> Key: HIVE-26794
> URL: https://issues.apache.org/jira/browse/HIVE-26794
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Instead of creating a fixed-size connection pool for TxnHandler#MutexAPI, the 
> pool can be given a more dynamic size, because: 
>  * TxnHandler#MutexAPI is primarily designed to provide coarse-grained mutex 
> support to maintenance tasks running inside the Metastore, and these tasks are 
> not user-facing;
>  * A fixed-size connection pool, the same size as the pool used in ObjectStore, 
> is wasteful for the non-leader instances in the warehouse; 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26794) Explore retiring TxnHandler#connPoolMutex idle connections

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26794?focusedWorklogId=836486&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836486
 ]

ASF GitHub Bot logged work on HIVE-26794:
-

Author: ASF GitHub Bot
Created on: 03/Jan/23 04:04
Start Date: 03/Jan/23 04:04
Worklog Time Spent: 10m 
  Work Description: dengzhhu653 commented on code in PR #3817:
URL: https://github.com/apache/hive/pull/3817#discussion_r1060266864


##
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java:
##
@@ -6188,6 +6188,12 @@ public Connection getConnection(String username, String 
password) throws SQLExce
 connectionProps.setProperty("user", username);
 connectionProps.setProperty("password", password);
 Connection conn = driver.connect(connString, connectionProps);
+String prepareStmt = dbProduct != null ? dbProduct.getPrepareTxnStmt() 
: null;

Review Comment:
   This is used to make MySQL treat `"` as an identifier quote character: `SET 
@@session.sql_mode=ANSI_QUOTES`.





Issue Time Tracking
---

Worklog Id: (was: 836486)
Time Spent: 3.5h  (was: 3h 20m)

> Explore retiring TxnHandler#connPoolMutex idle connections
> --
>
> Key: HIVE-26794
> URL: https://issues.apache.org/jira/browse/HIVE-26794
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Instead of creating a fixed-size connection pool for TxnHandler#MutexAPI, the 
> pool can be given a more dynamic size, because: 
>  * TxnHandler#MutexAPI is primarily designed to provide coarse-grained mutex 
> support to maintenance tasks running inside the Metastore, and these tasks are 
> not user-facing;
>  * A fixed-size connection pool, the same size as the pool used in ObjectStore, 
> is wasteful for the non-leader instances in the warehouse; 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26794) Explore retiring TxnHandler#connPoolMutex idle connections

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26794?focusedWorklogId=836485&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836485
 ]

ASF GitHub Bot logged work on HIVE-26794:
-

Author: ASF GitHub Bot
Created on: 03/Jan/23 04:01
Start Date: 03/Jan/23 04:01
Worklog Time Spent: 10m 
  Work Description: dengzhhu653 commented on code in PR #3817:
URL: https://github.com/apache/hive/pull/3817#discussion_r1060266144


##
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/datasource/HikariCPDataSourceProvider.java:
##
@@ -72,6 +72,17 @@ public DataSource create(Configuration hdpConfig, int 
maxPoolSize) throws SQLExc
   config.setPoolName(poolName);
 }
 
+// It's kind of a waste to create a fixed size connection pool as same as 
the TxnHandler#connPool,
+// TxnHandler#connPoolMutex is mostly used for MutexAPI that is primarily 
designed to
+// provide coarse-grained mutex support to maintenance tasks running 
inside the Metastore,
+// add minimumIdle=2 and idleTimeout=5min to the pool, so that the 
connection pool can retire
+// the idle connection aggressively, this will make Metastore more 
scalable especially if
+// there is a leader in the warehouse.
+if ("mutex".equals(poolName)) {
+  config.setMinimumIdle(Math.min(maxPoolSize, 2));

Review Comment:
   Done, and added a test: 
https://github.com/apache/hive/pull/3817/files#diff-21013298e701863201669c0a08dce28bff302bca654dda0287b76cb344ae2c7a





Issue Time Tracking
---

Worklog Id: (was: 836485)
Time Spent: 3h 20m  (was: 3h 10m)

> Explore retiring TxnHandler#connPoolMutex idle connections
> --
>
> Key: HIVE-26794
> URL: https://issues.apache.org/jira/browse/HIVE-26794
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Instead of creating a fixed-size connection pool for TxnHandler#MutexAPI, 
> the pool can be given a more dynamic size because: 
>  * TxnHandler#MutexAPI is primarily designed to provide coarse-grained mutex 
> support to maintenance tasks running inside the Metastore; these tasks are 
> not user-facing;
>  * A fixed-size connection pool matching the pool used in ObjectStore is a 
> waste for the non-leader instances in the warehouse; 
>  





[jira] [Work logged] (HIVE-26794) Explore retiring TxnHandler#connPoolMutex idle connections

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26794?focusedWorklogId=836479&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836479
 ]

ASF GitHub Bot logged work on HIVE-26794:
-

Author: ASF GitHub Bot
Created on: 03/Jan/23 02:41
Start Date: 03/Jan/23 02:41
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #3817:
URL: https://github.com/apache/hive/pull/3817#issuecomment-1369339656

   Kudos, SonarCloud Quality Gate passed! 
([dashboard](https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3817))
   
   [2 Bugs (rating B)](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3817&resolved=false&types=BUG)
   [0 Vulnerabilities (rating A)](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3817&resolved=false&types=VULNERABILITY)
   [0 Security Hotspots (rating A)](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3817&resolved=false&types=SECURITY_HOTSPOT)
   [1 Code Smell (rating A)](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3817&resolved=false&types=CODE_SMELL)
   No Coverage information
   No Duplication information




Issue Time Tracking
---

Worklog Id: (was: 836479)
Time Spent: 3h 10m  (was: 3h)

> Explore retiring TxnHandler#connPoolMutex idle connections
> --
>
> Key: HIVE-26794
> URL: https://issues.apache.org/jira/browse/HIVE-26794
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Instead of creating a fixed-size connection pool for TxnHandler#MutexAPI, 
> the pool can be given a more dynamic size because: 
>  * TxnHandler#MutexAPI is primarily designed to provide coarse-grained mutex 
> support to maintenance tasks running inside the Metastore; these tasks are 
> not user-facing;
>  * A fixed-size connection pool matching the pool used in ObjectStore is a 
> waste for the non-leader instances in the warehouse; 
>  





[jira] [Work logged] (HIVE-25790) Make managed table copies handle updates (FileUtils)

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25790?focusedWorklogId=836476&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836476
 ]

ASF GitHub Bot logged work on HIVE-25790:
-

Author: ASF GitHub Bot
Created on: 03/Jan/23 00:19
Start Date: 03/Jan/23 00:19
Worklog Time Spent: 10m 
  Work Description: github-actions[bot] commented on PR #3582:
URL: https://github.com/apache/hive/pull/3582#issuecomment-1369287261

   This pull request has been automatically marked as stale because it has not 
had recent activity. It will be closed if no further activity occurs.
   Feel free to reach out on the d...@hive.apache.org list if the patch is in 
need of reviews.




Issue Time Tracking
---

Worklog Id: (was: 836476)
Time Spent: 1.5h  (was: 1h 20m)

> Make managed table copies handle updates (FileUtils)
> 
>
> Key: HIVE-25790
> URL: https://issues.apache.org/jira/browse/HIVE-25790
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Haymant Mangla
>Assignee: Teddy Choi
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HIVE-26693) HS2 can not read/write hive_catalog iceberg table created by other engines

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26693?focusedWorklogId=836474&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836474
 ]

ASF GitHub Bot logged work on HIVE-26693:
-

Author: ASF GitHub Bot
Created on: 03/Jan/23 00:19
Start Date: 03/Jan/23 00:19
Worklog Time Spent: 10m 
  Work Description: github-actions[bot] commented on PR #3726:
URL: https://github.com/apache/hive/pull/3726#issuecomment-1369287186

   This pull request has been automatically marked as stale because it has not 
had recent activity. It will be closed if no further activity occurs.
   Feel free to reach out on the d...@hive.apache.org list if the patch is in 
need of reviews.




Issue Time Tracking
---

Worklog Id: (was: 836474)
Time Spent: 50m  (was: 40m)

> HS2 can not read/write hive_catalog iceberg table created by other engines
> --
>
> Key: HIVE-26693
> URL: https://issues.apache.org/jira/browse/HIVE-26693
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, StorageHandler
>Affects Versions: 4.0.0-alpha-2
>Reporter: zhangbutao
>Assignee: zhangbutao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Steps to reproduce:
>  # Create a hive_catalog Iceberg table with Trino/Presto/Flink (or Spark with 
> _iceberg.engine.hive.enabled_ disabled)
>  # Show the table info with Hive beeline:
> {code:java}
> ++
> |                   createtab_stmt                   |
> ++
> | CREATE EXTERNAL TABLE `iceberg_hive`.`testtrinoice`( |
> |   `id` int)                                        |
> | ROW FORMAT SERDE                                   |
> |   'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'  |
> | STORED AS INPUTFORMAT                              |
> |   'org.apache.hadoop.mapred.FileInputFormat'       |
> | OUTPUTFORMAT                                       |
> |   'org.apache.hadoop.mapred.FileOutputFormat'      |
> | LOCATION                                           |
> |   
> 'hdfs://localhost:8020/iceberg_hive.db/testtrinoice-08642c05e622415ab3e2da4b4c35224d'
>  |
> | TBLPROPERTIES (                                    |
> |   
> 'metadata_location'='hdfs://localhost:8020/iceberg_hive.db/testtrinoice-08642c05e622415ab3e2da4b4c35224d/metadata/0-3303dd99-e4d1-4cb0-9d12-9744cbe0a1c9.metadata.json',
>   |
> |   'table_type'='iceberg',                          |
> |   'transient_lastDdlTime'='1667292082')            |
> ++
> {code}
> You can see that the Iceberg table created by Trino has no Iceberg 
> input/output format, which HS2 needs to read/write Iceberg data.
>  # Query this Iceberg table with HS2:
> {code:java}
> select * from iceberg_hive.testtrinoice; {code}
>  
> {code:java}
> ERROR : Failed with exception java.io.IOException:java.io.IOException: Cannot 
> create an instance of InputFormat class 
> org.apache.hadoop.mapred.FileInputFormat as specified in mapredWork!
> java.io.IOException: java.io.IOException: Cannot create an instance of 
> InputFormat class org.apache.hadoop.mapred.FileInputFormat as specified in 
> mapredWork!
>         at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:624)
>         at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:531)
>         at 
> org.apache.hadoop.hive.ql.exec.FetchTask.executeInner(FetchTask.java:197)
>         at org.apache.hadoop.hive.ql.exec.FetchTask.execute(FetchTask.java:98)
>         at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:212)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:154)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:149)
>         at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:185)
>         at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:234)
>         at 
> org.apache.hive.service.cli.operation.SQLOperation.access$500(SQLOperation.java:88)
>         at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:337)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
>         at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:357)
>         at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run
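The stack trace above (truncated in the digest) boils down to a reflection problem: the table metadata names `org.apache.hadoop.mapred.FileInputFormat`, which is an abstract class, so Hive's reflective instantiation can never succeed. A minimal stdlib-only sketch of that mechanism, using `java.io.InputStream` as a stand-in abstract class so the example is self-contained:

```java
// Demonstrates why naming an abstract class (like
// org.apache.hadoop.mapred.FileInputFormat) as a table's InputFormat fails:
// reflectively instantiating an abstract class always throws.
// java.io.InputStream stands in for FileInputFormat here; the Hive-side
// class names come from the stack trace above, not from this sketch.
public class AbstractInstantiation {
    public static String tryInstantiate(String className) {
        try {
            Class.forName(className).getDeclaredConstructor().newInstance();
            return "ok";
        } catch (ReflectiveOperationException e) {
            // Abstract classes raise InstantiationException here.
            return e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        // Abstract class: instantiation fails, mirroring the HS2 error.
        System.out.println(tryInstantiate("java.io.InputStream"));
        // Concrete class: instantiation succeeds.
        System.out.println(tryInstantiate("java.lang.Object"));
    }
}
```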

[jira] [Work logged] (HIVE-26625) Upgrade jackson-databind to 2.13.3 due to critical CVEs

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26625?focusedWorklogId=836475&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836475
 ]

ASF GitHub Bot logged work on HIVE-26625:
-

Author: ASF GitHub Bot
Created on: 03/Jan/23 00:19
Start Date: 03/Jan/23 00:19
Worklog Time Spent: 10m 
  Work Description: github-actions[bot] commented on PR #3671:
URL: https://github.com/apache/hive/pull/3671#issuecomment-1369287243

   This pull request has been automatically marked as stale because it has not 
had recent activity. It will be closed if no further activity occurs.
   Feel free to reach out on the d...@hive.apache.org list if the patch is in 
need of reviews.




Issue Time Tracking
---

Worklog Id: (was: 836475)
Time Spent: 1h 10m  (was: 1h)

> Upgrade jackson-databind to 2.13.3 due to critical CVEs
> ---
>
> Key: HIVE-26625
> URL: https://issues.apache.org/jira/browse/HIVE-26625
> Project: Hive
>  Issue Type: Task
>Reporter: Devaspati Krishnatri
>Assignee: Devaspati Krishnatri
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HIVE-26840) Backport of HIVE-23073 and HIVE-24138

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26840?focusedWorklogId=836450&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836450
 ]

ASF GitHub Bot logged work on HIVE-26840:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 20:09
Start Date: 02/Jan/23 20:09
Worklog Time Spent: 10m 
  Work Description: amanraj2520 commented on PR #3859:
URL: https://github.com/apache/hive/pull/3859#issuecomment-1369179860

   Raised this PR to track the arrow upgrade 
https://github.com/apache/hive/pull/3902
   
   cc @abstractdog @cnauroth 




Issue Time Tracking
---

Worklog Id: (was: 836450)
Time Spent: 4h 40m  (was: 4.5h)

> Backport of HIVE-23073 and HIVE-24138
> -
>
> Key: HIVE-26840
> URL: https://issues.apache.org/jira/browse/HIVE-26840
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HIVE-26899) Backport HIVE-20751 HIVE-23987 Upgrade arrow to 0.11.0 in branch-3

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26899?focusedWorklogId=836449&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836449
 ]

ASF GitHub Bot logged work on HIVE-26899:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 20:08
Start Date: 02/Jan/23 20:08
Worklog Time Spent: 10m 
  Work Description: amanraj2520 commented on PR #3902:
URL: https://github.com/apache/hive/pull/3902#issuecomment-1369179062

   @abstractdog raised this PR for the arrow upgrade as discussed in 
https://github.com/apache/hive/pull/3859. We will first merge this to branch-3, 
then go ahead with the netty upgrade, and then 
[HIVE-26892](https://issues.apache.org/jira/browse/HIVE-26892)
   
   cc @cnauroth 




Issue Time Tracking
---

Worklog Id: (was: 836449)
Time Spent: 20m  (was: 10m)

> Backport HIVE-20751 HIVE-23987 Upgrade arrow to 0.11.0 in branch-3
> --
>
> Key: HIVE-26899
> URL: https://issues.apache.org/jira/browse/HIVE-26899
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HIVE-26899) Backport HIVE-20751 HIVE-23987 Upgrade arrow to 0.11.0 in branch-3

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26899?focusedWorklogId=836448&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836448
 ]

ASF GitHub Bot logged work on HIVE-26899:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 20:07
Start Date: 02/Jan/23 20:07
Worklog Time Spent: 10m 
  Work Description: amanraj2520 opened a new pull request, #3902:
URL: https://github.com/apache/hive/pull/3902

   Please refer to this JIRA: https://issues.apache.org/jira/browse/HIVE-26899
   
   
   ### What changes were proposed in this pull request?
   
   
   
   ### Why are the changes needed?
   
   
   
   ### Does this PR introduce _any_ user-facing change?
   
   
   
   ### How was this patch tested?
   
   




Issue Time Tracking
---

Worklog Id: (was: 836448)
Remaining Estimate: 0h
Time Spent: 10m

> Backport HIVE-20751 HIVE-23987 Upgrade arrow to 0.11.0 in branch-3
> --
>
> Key: HIVE-26899
> URL: https://issues.apache.org/jira/browse/HIVE-26899
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Critical
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Updated] (HIVE-26899) Backport HIVE-20751 HIVE-23987 Upgrade arrow to 0.11.0 in branch-3

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-26899:
--
Labels: pull-request-available  (was: )

> Backport HIVE-20751 HIVE-23987 Upgrade arrow to 0.11.0 in branch-3
> --
>
> Key: HIVE-26899
> URL: https://issues.apache.org/jira/browse/HIVE-26899
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Updated] (HIVE-26899) Backport HIVE-20751 HIVE-23987 Upgrade arrow to 0.11.0 in branch-3

2023-01-02 Thread Aman Raj (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Raj updated HIVE-26899:

Summary: Backport HIVE-20751 HIVE-23987 Upgrade arrow to 0.11.0 in branch-3 
 (was: Upgrade arrow to 0.11.0 in branch-3)

> Backport HIVE-20751 HIVE-23987 Upgrade arrow to 0.11.0 in branch-3
> --
>
> Key: HIVE-26899
> URL: https://issues.apache.org/jira/browse/HIVE-26899
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Critical
>






[jira] [Assigned] (HIVE-26899) Upgrade arrow to 0.11.0 in branch-3

2023-01-02 Thread Aman Raj (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Raj reassigned HIVE-26899:
---


> Upgrade arrow to 0.11.0 in branch-3
> ---
>
> Key: HIVE-26899
> URL: https://issues.apache.org/jira/browse/HIVE-26899
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Critical
>






[jira] [Work logged] (HIVE-26840) Backport of HIVE-23073 and HIVE-24138

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26840?focusedWorklogId=836444&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836444
 ]

ASF GitHub Bot logged work on HIVE-26840:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 19:59
Start Date: 02/Jan/23 19:59
Worklog Time Spent: 10m 
  Work Description: amanraj2520 commented on PR #3859:
URL: https://github.com/apache/hive/pull/3859#issuecomment-1369173277

   @abstractdog Yes, @cnauroth and I have verified it on our local machines by 
merging this and the 
[HIVE-26892](https://issues.apache.org/jira/browse/HIVE-26892) branch; the 
tests pass. I agree that we can upgrade arrow first on branch-3 and then 
revisit this PR only for netty. I had raised both in the same branch so that we 
could establish that both arrow and netty have to go in for a green branch-3. 
Now that you are aware of the test failures, it will be easier to explain. I 
will raise the arrow upgrade PR shortly and let you know.




Issue Time Tracking
---

Worklog Id: (was: 836444)
Time Spent: 4.5h  (was: 4h 20m)

> Backport of HIVE-23073 and HIVE-24138
> -
>
> Key: HIVE-26840
> URL: https://issues.apache.org/jira/browse/HIVE-26840
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HIVE-26840) Backport of HIVE-23073 and HIVE-24138

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26840?focusedWorklogId=836443&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836443
 ]

ASF GitHub Bot logged work on HIVE-26840:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 19:53
Start Date: 02/Jan/23 19:53
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on PR #3859:
URL: https://github.com/apache/hive/pull/3859#issuecomment-1369171290

   Is this whole thing working by adding 
[HIVE-26892](https://issues.apache.org/jira/browse/HIVE-26892) plus Netty 
4.1.69.Final and arrow 0.11.0?
   If so, is there a chance to backport the arrow upgrade separately first, if 
it was done separately on master?




Issue Time Tracking
---

Worklog Id: (was: 836443)
Time Spent: 4h 20m  (was: 4h 10m)

> Backport of HIVE-23073 and HIVE-24138
> -
>
> Key: HIVE-26840
> URL: https://issues.apache.org/jira/browse/HIVE-26840
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HIVE-26890) Disable TestSSL#testConnectionWrongCertCN (Done as part of HIVE-22621 in master)

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26890?focusedWorklogId=836442&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836442
 ]

ASF GitHub Bot logged work on HIVE-26890:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 19:49
Start Date: 02/Jan/23 19:49
Worklog Time Spent: 10m 
  Work Description: amanraj2520 commented on PR #3895:
URL: https://github.com/apache/hive/pull/3895#issuecomment-1369170405

   @abstractdog I agree with you on the same. Updated the description.




Issue Time Tracking
---

Worklog Id: (was: 836442)
Time Spent: 50m  (was: 40m)

> Disable TestSSL#testConnectionWrongCertCN (Done as part of HIVE-22621 in 
> master)
> 
>
> Key: HIVE-26890
> URL: https://issues.apache.org/jira/browse/HIVE-26890
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> TestSSL fails with the following error (this happens in the Hive-3.1.3 
> release also, so we are disabling this test):
> {code:java}
> [ERROR] Tests run: 10, Failures: 1, Errors: 0, Skipped: 2, Time elapsed: 
> 23.143 s <<< FAILURE! - in org.apache.hive.jdbc.TestSSL
> [ERROR] testConnectionWrongCertCN(org.apache.hive.jdbc.TestSSL)  Time 
> elapsed: 0.64 s  <<< FAILURE!
> java.lang.AssertionError
>         at org.junit.Assert.fail(Assert.java:86)
>         at org.junit.Assert.assertTrue(Assert.java:41)
>         at org.junit.Assert.assertTrue(Assert.java:52)
>         at 
> org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN(TestSSL.java:408)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>         at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>         at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>         at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413) 
> {code}





[jira] [Updated] (HIVE-26890) Disable TestSSL#testConnectionWrongCertCN (Done as part of HIVE-22621 in master)

2023-01-02 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-26890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor updated HIVE-26890:

Summary: Disable TestSSL#testConnectionWrongCertCN (Done as part of 
HIVE-22621 in master)  (was: Disable TestSSL (Done as part of HIVE-22621 in 
oss/master))

> Disable TestSSL#testConnectionWrongCertCN (Done as part of HIVE-22621 in 
> master)
> 
>
> Key: HIVE-26890
> URL: https://issues.apache.org/jira/browse/HIVE-26890
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> TestSSL fails with the following error (this happens in the Hive-3.1.3 
> release also, so we are disabling this test):
> {code:java}
> [ERROR] Tests run: 10, Failures: 1, Errors: 0, Skipped: 2, Time elapsed: 
> 23.143 s <<< FAILURE! - in org.apache.hive.jdbc.TestSSL
> [ERROR] testConnectionWrongCertCN(org.apache.hive.jdbc.TestSSL)  Time 
> elapsed: 0.64 s  <<< FAILURE!
> java.lang.AssertionError
>         at org.junit.Assert.fail(Assert.java:86)
>         at org.junit.Assert.assertTrue(Assert.java:41)
>         at org.junit.Assert.assertTrue(Assert.java:52)
>         at 
> org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN(TestSSL.java:408)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>         at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>         at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>         at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413) 
> {code}





[jira] [Updated] (HIVE-26890) Disable TestSSL (Done as part of HIVE-22621 in oss/master)

2023-01-02 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-26890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor updated HIVE-26890:

Summary: Disable TestSSL (Done as part of HIVE-22621 in oss/master)  (was: 
Disable TestSSL (Done as part of HIVE-21456 in oss/master))

> Disable TestSSL (Done as part of HIVE-22621 in oss/master)
> --
>
> Key: HIVE-26890
> URL: https://issues.apache.org/jira/browse/HIVE-26890
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> TestSSL fails with the following error (this happens in the Hive-3.1.3 
> release also, so we are disabling this test):
> {code:java}
> [ERROR] Tests run: 10, Failures: 1, Errors: 0, Skipped: 2, Time elapsed: 
> 23.143 s <<< FAILURE! - in org.apache.hive.jdbc.TestSSL
> [ERROR] testConnectionWrongCertCN(org.apache.hive.jdbc.TestSSL)  Time 
> elapsed: 0.64 s  <<< FAILURE!
> java.lang.AssertionError
>         at org.junit.Assert.fail(Assert.java:86)
>         at org.junit.Assert.assertTrue(Assert.java:41)
>         at org.junit.Assert.assertTrue(Assert.java:52)
>         at 
> org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN(TestSSL.java:408)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>         at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>         at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>         at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413) 
> {code}





[jira] [Work logged] (HIVE-26890) Disable TestSSL (Done as part of HIVE-21456 in oss/master)

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26890?focusedWorklogId=836440&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836440
 ]

ASF GitHub Bot logged work on HIVE-26890:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 19:45
Start Date: 02/Jan/23 19:45
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on PR #3895:
URL: https://github.com/apache/hive/pull/3895#issuecomment-1369168970

   I think this has nothing to do with HIVE-21456 itself; HIVE-21456 only 
changed the class-level @Ignore to a method-level one:
   
https://github.com/apache/hive/commit/b7da71856b1bb51af68a5ba6890b65f9843f3606
   The original disable commit was in the scope of HIVE-22620 (fix) / 
HIVE-22621 (disable).
   
   For clarity's sake, let's call this "Disable 
TestSSL#testConnectionWrongCertCN" and refer to 
[HIVE-22621](https://issues.apache.org/jira/browse/HIVE-22621).
   Other than that, looks good to me.
   




Issue Time Tracking
---

Worklog Id: (was: 836440)
Time Spent: 40m  (was: 0.5h)

> Disable TestSSL (Done as part of HIVE-21456 in oss/master)
> --
>
> Key: HIVE-26890
> URL: https://issues.apache.org/jira/browse/HIVE-26890
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> TestSSL fails with the following error (this also happens in the Hive-3.1.3 
> release, so we are disabling this test):
> {code:java}
> [ERROR] Tests run: 10, Failures: 1, Errors: 0, Skipped: 2, Time elapsed: 
> 23.143 s <<< FAILURE! - in org.apache.hive.jdbc.TestSSL
> [ERROR] testConnectionWrongCertCN(org.apache.hive.jdbc.TestSSL)  Time 
> elapsed: 0.64 s  <<< FAILURE!
> java.lang.AssertionError
>         at org.junit.Assert.fail(Assert.java:86)
>         at org.junit.Assert.assertTrue(Assert.java:41)
>         at org.junit.Assert.assertTrue(Assert.java:52)
>         at 
> org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN(TestSSL.java:408)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>         at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>         at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>         at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413) 
> {code}





[jira] [Commented] (HIVE-26868) Iceberg: Allow IOW on empty table with Partition Evolution

2023-01-02 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-26868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17653676#comment-17653676
 ] 

Ayush Saxena commented on HIVE-26868:
-

Committed to master.

Thanx [~dkuzmenko] for the review!!!

> Iceberg: Allow IOW on empty table with Partition Evolution
> --
>
> Key: HIVE-26868
> URL: https://issues.apache.org/jira/browse/HIVE-26868
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> In case an Iceberg table has gone through partition evolution, we don't allow 
> an IOW operation on it.
> But if the table is empty, we can allow an IOW, since there is no data that 
> could be corrupted by the overwrite.
> This helps to compact data and merge the delete files into data files
> via
> Truncate -> IOW with the Snapshot ID from before the Truncate.
> The same flow is used by Impala for compacting Iceberg tables.
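The Truncate -> IOW flow mentioned above can be sketched roughly as follows. This is a hedged illustration only: the table name `ice_tbl`, the snapshot id, and the `FOR SYSTEM_VERSION AS OF` time-travel syntax are assumptions, not text from the ticket.

```sql
-- Illustrative compaction flow (table name and snapshot id are placeholders):
-- 1. Record the current snapshot id of the Iceberg table before truncating.
-- 2. Truncate the table; it is now empty, so IOW is allowed even after
--    partition evolution.
TRUNCATE TABLE ice_tbl;
-- 3. Insert-overwrite from the pre-truncate snapshot; this rewrites the data
--    into compacted files with the delete files merged in.
INSERT OVERWRITE TABLE ice_tbl
SELECT * FROM ice_tbl FOR SYSTEM_VERSION AS OF 1234567890;
```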





[jira] [Resolved] (HIVE-26868) Iceberg: Allow IOW on empty table with Partition Evolution

2023-01-02 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HIVE-26868.
-
Fix Version/s: 4.0.0
   Resolution: Fixed

> Iceberg: Allow IOW on empty table with Partition Evolution
> --
>
> Key: HIVE-26868
> URL: https://issues.apache.org/jira/browse/HIVE-26868
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> In case an Iceberg table has gone through partition evolution, we don't allow 
> an IOW operation on it.
> But if the table is empty, we can allow an IOW, since there is no data that 
> could be corrupted by the overwrite.
> This helps to compact data and merge the delete files into data files
> via
> Truncate -> IOW with the Snapshot ID from before the Truncate.
> The same flow is used by Impala for compacting Iceberg tables.





[jira] [Work logged] (HIVE-26868) Iceberg: Allow IOW on empty table with Partition Evolution

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26868?focusedWorklogId=836428&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836428
 ]

ASF GitHub Bot logged work on HIVE-26868:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 18:15
Start Date: 02/Jan/23 18:15
Worklog Time Spent: 10m 
  Work Description: ayushtkn merged PR #3872:
URL: https://github.com/apache/hive/pull/3872




Issue Time Tracking
---

Worklog Id: (was: 836428)
Time Spent: 1h 10m  (was: 1h)

> Iceberg: Allow IOW on empty table with Partition Evolution
> --
>
> Key: HIVE-26868
> URL: https://issues.apache.org/jira/browse/HIVE-26868
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> In case an Iceberg table has gone through partition evolution, we don't allow 
> an IOW operation on it.
> But if the table is empty, we can allow an IOW, since there is no data that 
> could be corrupted by the overwrite.
> This helps to compact data and merge the delete files into data files
> via
> Truncate -> IOW with the Snapshot ID from before the Truncate.
> The same flow is used by Impala for compacting Iceberg tables.





[jira] [Work logged] (HIVE-26054) Distinct + Groupby with column alias is failing

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26054?focusedWorklogId=836418&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836418
 ]

ASF GitHub Bot logged work on HIVE-26054:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 16:32
Start Date: 02/Jan/23 16:32
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #3891:
URL: https://github.com/apache/hive/pull/3891#issuecomment-1369079107

   Kudos, SonarCloud Quality Gate passed! [Quality Gate 
passed](https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3891)
   
   [0 Bugs](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3891&resolved=false&types=BUG) (rating A)
   [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3891&resolved=false&types=VULNERABILITY) (rating A)
   [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3891&resolved=false&types=SECURITY_HOTSPOT) (rating A)
   [4 Code Smells](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3891&resolved=false&types=CODE_SMELL) (rating A)
   
   [No Coverage information](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=3891&metric=coverage&view=list)
   [No Duplication information](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=3891&metric=duplicated_lines_density&view=list)
   
   




Issue Time Tracking
---

Worklog Id: (was: 836418)
Time Spent: 1h 50m  (was: 1h 40m)

> Distinct + Groupby with column alias is failing
> ---
>
> Key: HIVE-26054
> URL: https://issues.apache.org/jira/browse/HIVE-26054
> Project: Hive
>  Issue Type: Bug
>Reporter: Naresh P R
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> After [HIVE-16924|https://issues.apache.org/jira/browse/HIVE-16924], the 
> query below fails.
> {code:java}
> create table table1 (col1 bigint, col2 string);
> create table table2 (t2_col1 string);
> Select distinct col1 as alias_col1
> from table1
> where col2 = (SELECT max(t2_col1) as currentdate from table2 limit 1)
> order by col1;
> Error: Error while compiling statement: FAILED: SemanticException Line 0:-1 
> Unsupported SubQuery Expression '1': Only SubQuery expressions that are top 
> level conjuncts are allowed (state=42000,code=4) {code}
> Workaround is to either remove the distinct column alias "alias_col1" 
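A rough sketch of that workaround (an assumption based on the truncated sentence above, not text from the ticket): removing the alias on the DISTINCT column lets the statement compile.

```sql
-- Same query as in the report, with the alias on the DISTINCT column removed.
Select distinct col1
from table1
where col2 = (SELECT max(t2_col1) as currentdate from table2 limit 1)
order by col1;
```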

[jira] [Work logged] (HIVE-26793) Create a new configuration to override "no compaction" for tables

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26793?focusedWorklogId=836413&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836413
 ]

ASF GitHub Bot logged work on HIVE-26793:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 15:38
Start Date: 02/Jan/23 15:38
Worklog Time Spent: 10m 
  Work Description: veghlaci05 commented on code in PR #3822:
URL: https://github.com/apache/hive/pull/3822#discussion_r1060097452


##
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/utils/MetaStoreUtils.java:
##
@@ -1143,15 +1143,26 @@ public static TableName getTableNameFor(Table table) {
   /**
    * Because TABLE_NO_AUTO_COMPACT was originally assumed to be NO_AUTO_COMPACT and then was moved
    * to no_auto_compact, we need to check it in both cases.
    * Check the database-level no_auto_compact; if present, it is given priority, else the table-level no_auto_compact is considered.
    */
-  public static boolean isNoAutoCompactSet(Map<String, String> parameters) {
-    String noAutoCompact =
-        parameters.get(hive_metastoreConstants.TABLE_NO_AUTO_COMPACT);
+  public static boolean isNoAutoCompactSet(Map<String, String> dbParameters, Map<String, String> tblParameters) {
+    String dbNoAutoCompact = getNoAutoCompact(dbParameters);
+    if (dbNoAutoCompact == null) {
+      LOG.debug("Using table configuration '" + hive_metastoreConstants.TABLE_NO_AUTO_COMPACT + "' for compaction");

Review Comment:
   Since this config can now be set at both the table and DB level, the 
constant name should be changed to simply NO_AUTO_COMPACT.
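A minimal standalone sketch of the precedence being discussed in this review thread: a database-level `no_auto_compact` value, when present, overrides the table-level one. This is hypothetical illustration code, not the actual `MetaStoreUtils` implementation; the parameter keys and helper name are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

public class NoAutoCompact {
    // Look up "no_auto_compact" in a parameter map, also checking the
    // upper-case spelling the javadoc mentions as the legacy key (assumed).
    static String getNoAutoCompact(Map<String, String> params) {
        if (params == null) {
            return null;
        }
        String v = params.get("no_auto_compact");
        if (v == null) {
            v = params.get("NO_AUTO_COMPACT");
        }
        return v;
    }

    // DB-level value wins when present; otherwise fall back to the table.
    static boolean isNoAutoCompactSet(Map<String, String> dbParams, Map<String, String> tblParams) {
        String db = getNoAutoCompact(dbParams);
        if (db != null) {
            return Boolean.parseBoolean(db);
        }
        return Boolean.parseBoolean(getNoAutoCompact(tblParams));
    }

    public static void main(String[] args) {
        Map<String, String> db = new HashMap<>();
        Map<String, String> tbl = new HashMap<>();
        // Table opts out of compaction, but an explicit DB-level "false"
        // forces compaction back on for all its tables.
        tbl.put("no_auto_compact", "true");
        db.put("no_auto_compact", "false");
        System.out.println(isNoAutoCompactSet(db, tbl)); // prints false
    }
}
```

With no DB-level entry at all, the table-level `true` is honored, which matches the fallback branch in the patch excerpt.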





Issue Time Tracking
---

Worklog Id: (was: 836413)
Time Spent: 1.5h  (was: 1h 20m)

> Create a new configuration to override "no compaction" for tables
> -
>
> Key: HIVE-26793
> URL: https://issues.apache.org/jira/browse/HIVE-26793
> Project: Hive
>  Issue Type: Improvement
>Reporter: Kokila N
>Assignee: Kokila N
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Currently a simple user can create a table with the 
> {color:#6a8759}no_auto_compaction=true{color} table property and create an 
> aborted write transaction writing to this table. This way a malicious user 
> can prevent cleaning up data for the aborted transaction, causing 
> performance degradation.
> This configuration should be allowed to be overridden on a database level: 
> adding {color:#6a8759}no_auto_compaction=false{color} should override the 
> table-level setting, forcing the initiator to schedule compaction for all 
> tables.





[jira] [Work logged] (HIVE-26822) Port changes before spotlessApply

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26822?focusedWorklogId=836408&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836408
 ]

ASF GitHub Bot logged work on HIVE-26822:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 15:15
Start Date: 02/Jan/23 15:15
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #3857:
URL: https://github.com/apache/hive/pull/3857#issuecomment-1369027342

   Kudos, SonarCloud Quality Gate passed! [Quality Gate 
passed](https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3857)
   
   [1 Bug](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3857&resolved=false&types=BUG) (rating C)
   [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3857&resolved=false&types=VULNERABILITY) (rating A)
   [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3857&resolved=false&types=SECURITY_HOTSPOT) (rating A)
   [2 Code Smells](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3857&resolved=false&types=CODE_SMELL) (rating A)
   
   [No Coverage information](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=3857&metric=coverage&view=list)
   [No Duplication information](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=3857&metric=duplicated_lines_density&view=list)
   
   




Issue Time Tracking
---

Worklog Id: (was: 836408)
Time Spent: 1h 10m  (was: 1h)

> Port changes before spotlessApply
> -
>
> Key: HIVE-26822
> URL: https://issues.apache.org/jira/browse/HIVE-26822
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zsolt Miskolczi
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HIVE-26716) Query based Rebalance compaction on full acid tables

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26716?focusedWorklogId=836404&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836404
 ]

ASF GitHub Bot logged work on HIVE-26716:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 14:28
Start Date: 02/Jan/23 14:28
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #3746:
URL: https://github.com/apache/hive/pull/3746#issuecomment-1368993331

   Kudos, SonarCloud Quality Gate passed! [Quality Gate 
passed](https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3746)
   
   [0 Bugs](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3746&resolved=false&types=BUG) (rating A)
   [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3746&resolved=false&types=VULNERABILITY) (rating A)
   [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3746&resolved=false&types=SECURITY_HOTSPOT) (rating A)
   [87 Code Smells](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3746&resolved=false&types=CODE_SMELL) (rating A)
   
   [No Coverage information](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=3746&metric=coverage&view=list)
   [No Duplication information](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=3746&metric=duplicated_lines_density&view=list)
   
   




Issue Time Tracking
---

Worklog Id: (was: 836404)
Time Spent: 17h 20m  (was: 17h 10m)

> Query based Rebalance compaction on full acid tables
> 
>
> Key: HIVE-26716
> URL: https://issues.apache.org/jira/browse/HIVE-26716
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: László Végh
>Assignee: László Végh
>Priority: Major
>  Labels: ACID, compaction, pull-request-available
>  Time Spent: 17h 20m
>  Remaining Estimate: 0h
>
> Support rebalancing compaction on fully ACID tables.





[jira] [Work logged] (HIVE-26658) INT64 Parquet timestamps cannot be mapped to most Hive numeric types

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26658?focusedWorklogId=836403&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836403
 ]

ASF GitHub Bot logged work on HIVE-26658:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 14:27
Start Date: 02/Jan/23 14:27
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #3698:
URL: https://github.com/apache/hive/pull/3698#issuecomment-1368992760

   Kudos, SonarCloud Quality Gate passed! [Quality Gate 
passed](https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3698)
   
   [0 Bugs](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3698&resolved=false&types=BUG) (rating A)
   [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3698&resolved=false&types=VULNERABILITY) (rating A)
   [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3698&resolved=false&types=SECURITY_HOTSPOT) (rating A)
   [0 Code Smells](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3698&resolved=false&types=CODE_SMELL) (rating A)
   
   [No Coverage information](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=3698&metric=coverage&view=list)
   [No Duplication information](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=3698&metric=duplicated_lines_density&view=list)
   
   




Issue Time Tracking
---

Worklog Id: (was: 836403)
Time Spent: 1h  (was: 50m)

> INT64 Parquet timestamps cannot be mapped to most Hive numeric types
> 
>
> Key: HIVE-26658
> URL: https://issues.apache.org/jira/browse/HIVE-26658
> Project: Hive
>  Issue Type: Bug
>  Components: Parquet, Serializers/Deserializers
>Affects Versions: 4.0.0-alpha-1
>Reporter: Stamatis Zampetakis
>Assignee: Stamatis Zampetakis
>Priority: Minor
>  Labels: backwards-compatibility, pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When attempting to read a Parquet file with column of primitive type INT64 
> and logical type 
> [TIMESTAMP|https://github.com/apache/parquet-format/blob/54e53e5d7794d383529dd30746378f19a12afd58/LogicalTypes.md?plain=1#L337]
>  an error is raised when the Hive type is different from TIMESTAMP and BIGINT.
> Consider a Parquet file (e.g., ts_file.parquet) with the following schema:
> {code:json}
> {
>   "name": "eventtime",
>   "type": ["null", {
> "typ

[jira] [Work logged] (HIVE-26804) Cancel Compactions in initiated state

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26804?focusedWorklogId=836397&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836397
 ]

ASF GitHub Bot logged work on HIVE-26804:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 13:07
Start Date: 02/Jan/23 13:07
Worklog Time Spent: 10m 
  Work Description: veghlaci05 commented on code in PR #3880:
URL: https://github.com/apache/hive/pull/3880#discussion_r1060001094


##
ql/src/java/org/apache/hadoop/hive/ql/ddl/process/abort/compaction/AbortCompactionsOperation.java:
##
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.ql.ddl.process.abort.compaction;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.metastore.api.AbortCompactResponse;
+import org.apache.hadoop.hive.metastore.api.AbortCompactionRequest;
+import org.apache.hadoop.hive.metastore.api.AbortCompactionResponseElement;
+import org.apache.hadoop.hive.ql.ddl.DDLOperation;
+import org.apache.hadoop.hive.ql.ddl.DDLOperationContext;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+
+/**
+ * Operation process of aborting compactions.
+ */
+public class AbortCompactionsOperation extends DDLOperation<AbortCompactionsDesc> {
+  public AbortCompactionsOperation(DDLOperationContext context, AbortCompactionsDesc desc) {
+    super(context, desc);
+  }
+
+  @Override
+  public int execute() throws HiveException {
+    AbortCompactionRequest request = new AbortCompactionRequest();
+    request.setCompactionIds(desc.getCompactionIds());
+    AbortCompactResponse response = context.getDb().abortCompactions(request);
+    try (DataOutputStream os = ShowUtils.getOutputStream(new Path(desc.getResFile()), context)) {
+      writeHeader(os);
+      if (response.getAbortedcompacts() != null) {
+        for (AbortCompactionResponseElement e : response.getAbortedcompacts()) {
+          writeRow(os, e);
+        }
+      }
+    } catch (Exception e) {
+      LOG.warn("show compactions: ", e);

Review Comment:
   Should be "abort compactions"



##
standalone-metastore/metastore-common/src/main/thrift/hive_metastore.thrift:
##
@@ -1393,6 +1393,22 @@ struct ShowCompactResponse {
 1: required list<ShowCompactResponseElement> compacts,
 }
 
+struct AbortCompactionRequest {
+1: required list<i64> compactionIds,
+2: optional string type,
+3: optional string poolName
+}
+
+struct AbortCompactionResponseElement {
+1: required i64 compactionIds,

Review Comment:
   Should be compactionId



##
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java:
##
@@ -6242,4 +6246,86 @@ public boolean isWrapperFor(Class<?> iface) throws SQLException {
 }
   }
 
+  @Override
+  @RetrySemantics.SafeToRetry
+  public AbortCompactResponse abortCompactions(AbortCompactionRequest reqst) throws MetaException, NoSuchCompactionException {
+    AbortCompactResponse response = new AbortCompactResponse(new ArrayList<>());
+    List<Long> requestedCompId = reqst.getCompactionIds();
+    if (requestedCompId.isEmpty()) {
+      LOG.info("Compaction ids missing in request. No compactions to abort");
+      throw new NoSuchCompactionException("Compaction ids missing in request. No compactions to abort");
+    }
+    List<AbortCompactionResponseElement> abortCompactionResponseElementList = new ArrayList<>();
+    for (int i = 0; i < requestedCompId.size(); i++) {
+      AbortCompactionResponseElement responseEle = abortCompaction(requestedCompId.get(i));
+      abortCompactionResponseElementList.add(responseEle);
+    }
+    response.setAbortedcompacts(abortCompactionResponseElementList);
+    return response;
+  }
+
+  @RetrySemantics.SafeToRetry
+  public AbortCompactionResponseElement abortCompaction(Long compId) throws MetaException {
+    try {
+      AbortCompactionResponseElement responseEle = new AbortCompactionRespons

[jira] [Work logged] (HIVE-26054) Distinct + Groupby with column alias is failing

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26054?focusedWorklogId=836394&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836394
 ]

ASF GitHub Bot logged work on HIVE-26054:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 12:36
Start Date: 02/Jan/23 12:36
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #3891:
URL: https://github.com/apache/hive/pull/3891#issuecomment-1368911493

   Kudos, SonarCloud Quality Gate passed!    [![Quality Gate 
passed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/passed-16px.png
 'Quality Gate 
passed')](https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3891)
   
   
[![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png
 
'Bug')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3891&resolved=false&types=BUG)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3891&resolved=false&types=BUG)
 [0 
Bugs](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3891&resolved=false&types=BUG)
  
   
[![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png
 
'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3891&resolved=false&types=VULNERABILITY)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3891&resolved=false&types=VULNERABILITY)
 [0 
Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3891&resolved=false&types=VULNERABILITY)
  
   [![Security 
Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png
 'Security 
Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3891&resolved=false&types=SECURITY_HOTSPOT)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3891&resolved=false&types=SECURITY_HOTSPOT)
 [0 Security 
Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3891&resolved=false&types=SECURITY_HOTSPOT)
  
   [![Code 
Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png
 'Code 
Smell')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3891&resolved=false&types=CODE_SMELL)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3891&resolved=false&types=CODE_SMELL)
 [2 Code 
Smells](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3891&resolved=false&types=CODE_SMELL)
   
   [![No Coverage 
information](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/NoCoverageInfo-16px.png
 'No Coverage 
information')](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=3891&metric=coverage&view=list)
 No Coverage information  
   [![No Duplication 
information](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/NoDuplicationInfo-16px.png
 'No Duplication 
information')](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=3891&metric=duplicated_lines_density&view=list)
 No Duplication information
   
   




Issue Time Tracking
---

Worklog Id: (was: 836394)
Time Spent: 1h 40m  (was: 1.5h)

> Distinct + Groupby with column alias is failing
> ---
>
> Key: HIVE-26054
> URL: https://issues.apache.org/jira/browse/HIVE-26054
> Project: Hive
>  Issue Type: Bug
>Reporter: Naresh P R
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> After [HIVE-16924|https://issues.apache.org/jira/browse/HIVE-16924], below 
> query is failing.
> {code:java}
> create table table1 (col1 bigint, col2 string);
> create table table2 (t2_col1 string);
> Select distinct col1 as alias_col1
> from table1
> where col2 = (SELECT max(t2_col1) as currentdate from table2 limit 1)
> order by col1;
> Error: Error while compiling statement: FAILED: SemanticException Line 0:-1 
> Unsupported SubQuery Expression '1': Only SubQuery expressions that are top 
> level conjuncts are allowed (state=42000,code=4) {code}
> Workaround is either remove distinct column alias "alias_col1" or
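
The first workaround named above, dropping the column alias from the DISTINCT projection, would look like this (a hypothetical rewrite of the repro query, not taken from the ticket):

```sql
-- Workaround sketch: same query without the "alias_col1" alias.
Select distinct col1
from table1
where col2 = (SELECT max(t2_col1) as currentdate from table2 limit 1)
order by col1;
```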

[jira] [Work logged] (HIVE-26868) Iceberg: Allow IOW on empty table with Partition Evolution

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26868?focusedWorklogId=836393&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836393
 ]

ASF GitHub Bot logged work on HIVE-26868:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 12:07
Start Date: 02/Jan/23 12:07
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #3872:
URL: https://github.com/apache/hive/pull/3872#issuecomment-1368890549

   Kudos, SonarCloud Quality Gate passed!    [![Quality Gate 
passed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/passed-16px.png
 'Quality Gate 
passed')](https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3872)
   
   
[![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png
 
'Bug')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3872&resolved=false&types=BUG)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3872&resolved=false&types=BUG)
 [0 
Bugs](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3872&resolved=false&types=BUG)
  
   
[![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png
 
'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3872&resolved=false&types=VULNERABILITY)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3872&resolved=false&types=VULNERABILITY)
 [0 
Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3872&resolved=false&types=VULNERABILITY)
  
   [![Security 
Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png
 'Security 
Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3872&resolved=false&types=SECURITY_HOTSPOT)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3872&resolved=false&types=SECURITY_HOTSPOT)
 [0 Security 
Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3872&resolved=false&types=SECURITY_HOTSPOT)
  
   [![Code 
Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png
 'Code 
Smell')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3872&resolved=false&types=CODE_SMELL)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3872&resolved=false&types=CODE_SMELL)
 [0 Code 
Smells](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3872&resolved=false&types=CODE_SMELL)
   
   [![No Coverage 
information](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/NoCoverageInfo-16px.png
 'No Coverage 
information')](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=3872&metric=coverage&view=list)
 No Coverage information  
   [![No Duplication 
information](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/NoDuplicationInfo-16px.png
 'No Duplication 
information')](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=3872&metric=duplicated_lines_density&view=list)
 No Duplication information
   
   




Issue Time Tracking
---

Worklog Id: (was: 836393)
Time Spent: 1h  (was: 50m)

> Iceberg: Allow IOW on empty table with Partition Evolution
> --
>
> Key: HIVE-26868
> URL: https://issues.apache.org/jira/browse/HIVE-26868
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> In case an iceberg table has gone through partition evolution, we don't allow 
> an IOW operation on it.
> But if the table is empty, we can allow an IOW, since there is no data that can 
> be corrupted by the overwrite.
> This helps to compact data and merge the delete files into data files
> via
> Truncate -> IOW with the Snapshot ID from before the Truncate.
> The same flow is used by Impala for compacting Iceberg tables.
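
The Truncate -> IOW flow described above could be sketched as follows, assuming Hive's Iceberg time-travel syntax (`FOR SYSTEM_VERSION AS OF`); the table name and snapshot id are placeholders:

```sql
-- 1. Note the current snapshot id, then truncate the (empty-after-evolution) table.
TRUNCATE TABLE ice_tbl;
-- 2. IOW reading from the pre-truncate snapshot rewrites the data under the
--    current partition spec, merging delete files into data files.
INSERT OVERWRITE TABLE ice_tbl
SELECT * FROM ice_tbl FOR SYSTEM_VERSION AS OF 1234567890;
```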



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26658) INT64 Parquet timestamps cannot be mapped to most Hive numeric types

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26658?focusedWorklogId=836392&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836392
 ]

ASF GitHub Bot logged work on HIVE-26658:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 11:51
Start Date: 02/Jan/23 11:51
Worklog Time Spent: 10m 
  Work Description: zabetak commented on PR #3698:
URL: https://github.com/apache/hive/pull/3698#issuecomment-136889

   Thanks for the review @cnauroth ! Indeed I changed the order of the 
statements in the .q file just before opening the PR and forgot to update the 
respective .q.out.
   
   I rebased the PR against latest master and updated the stale .q.out file so 
I am hoping that now all tests will come back green.




Issue Time Tracking
---

Worklog Id: (was: 836392)
Time Spent: 50m  (was: 40m)

> INT64 Parquet timestamps cannot be mapped to most Hive numeric types
> 
>
> Key: HIVE-26658
> URL: https://issues.apache.org/jira/browse/HIVE-26658
> Project: Hive
>  Issue Type: Bug
>  Components: Parquet, Serializers/Deserializers
>Affects Versions: 4.0.0-alpha-1
>Reporter: Stamatis Zampetakis
>Assignee: Stamatis Zampetakis
>Priority: Minor
>  Labels: backwards-compatibility, pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When attempting to read a Parquet file with a column of primitive type INT64 
> and logical type 
> [TIMESTAMP|https://github.com/apache/parquet-format/blob/54e53e5d7794d383529dd30746378f19a12afd58/LogicalTypes.md?plain=1#L337]
>  an error is raised when the Hive type is different from TIMESTAMP and BIGINT.
> Consider a Parquet file (e.g., ts_file.parquet) with the following schema:
> {code:json}
> {
>   "name": "eventtime",
>   "type": ["null", {
> "type": "long",
> "logicalType": "timestamp-millis"
>   }],
>   "default": null
> }
> {code}
>  
> Mapping the column to a Hive numeric type among TINYINT, SMALLINT, INT, 
> FLOAT, DOUBLE, DECIMAL, and trying to run a SELECT will give back an error.
> The following snippet can be used to reproduce the problem.
> {code:sql}
> CREATE TABLE ts_table (eventtime INT) STORED AS PARQUET;
> LOAD DATA LOCAL INPATH 'ts_file.parquet' into table ts_table;
> SELECT * FROM ts_table;
> {code}
> This is a regression caused by HIVE-21215. Although HIVE-21215 allows reading 
> INT64 types as Hive TIMESTAMP, which was not possible before, at the 
> same time it broke the mapping to every other Hive numeric type. The problem 
> was addressed selectively for the BIGINT type very recently (HIVE-26612).
> The primary goal of this ticket is to restore backward compatibility since 
> these use-cases were working before HIVE-21215.
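
For intuition on why a deliberate mapping policy is needed here, note that naively narrowing an INT64 timestamp-millis value to a 32-bit Hive INT wraps around. A hypothetical illustration (not Hive's actual conversion code; `Int64TsMapping` and `toInt32` are made-up names):

```java
// Hypothetical illustration of two's-complement truncation when an INT64
// timestamp-millis value is narrowed to 32 bits; not Hive's actual code path.
public class Int64TsMapping {
    static int toInt32(long millis) {
        // A plain Java (int) cast keeps only the low 32 bits.
        return (int) millis;
    }

    public static void main(String[] args) {
        long eventtimeMillis = 1_672_617_600_000L; // 2023-01-02T00:00:00Z in millis
        System.out.println(toInt32(eventtimeMillis)); // wraps to an unrelated value
    }
}
```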





[jira] [Work logged] (HIVE-26880) Upgrade Apache Directory Server to 1.5.7 for release 3.2.

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26880?focusedWorklogId=836389&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836389
 ]

ASF GitHub Bot logged work on HIVE-26880:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 11:09
Start Date: 02/Jan/23 11:09
Worklog Time Spent: 10m 
  Work Description: zabetak commented on code in PR #3886:
URL: https://github.com/apache/hive/pull/3886#discussion_r1059967623


##
service/pom.xml:
##
@@ -291,13 +291,6 @@
       <scope>test</scope>
     </dependency>
 
-    <dependency>
-      <groupId>org.apache.directory.client.ldap</groupId>
-      <artifactId>ldap-client-api</artifactId>
-      <version>${apache-directory-clientapi.version}</version>
-      <scope>test</scope>
-    </dependency>
-
     <dependency>
       <groupId>org.apache.directory.server</groupId>
       <artifactId>apacheds-server-integ</artifactId>

Review Comment:
   Can you clarify why we don't want to exclude `ldap-client-api` from here as 
it was done in the original commit in master 
(https://github.com/apache/hive/commit/5581eb8a74a4f33b35b7bf70d9ec4e9a95f3b8a0).
   
   When I run `mvn dependency:tree` in service module I see the 
`org.apache.directory.client.ldap:ldap-client-api` dependency coming 
transitively.
   ```
   [INFO] +- org.apache.directory.server:apacheds-server-integ:jar:1.5.7:test
   [INFO] |  +- 
org.apache.directory.server:apacheds-interceptor-kerberos:jar:1.5.7:test
   [INFO] |  |  +- org.apache.directory.server:apacheds-core:jar:1.5.7:test
   [INFO] |  |  |  +- 
org.apache.directory.server:apacheds-core-api:jar:1.5.7:test
   [INFO] |  |  |  |  +- 
org.apache.directory.server:apacheds-core-entry:jar:1.5.7:test
   [INFO] |  |  |  |  \- 
org.apache.directory.server:apacheds-core-constants:jar:1.5.7:test
   [INFO] |  |  |  +- org.apache.directory.server:apacheds-utils:jar:1.5.7:test
   [INFO] |  |  |  \- bouncycastle:bcprov-jdk15:jar:140:test
   [INFO] |  |  \- 
org.apache.directory.server:apacheds-kerberos-shared:jar:1.5.7:test
   [INFO] |  | +- 
org.apache.directory.server:apacheds-core-jndi:jar:1.5.7:test
   [INFO] |  | \- 
org.apache.directory.server:apacheds-protocol-shared:jar:1.5.7:test
   [INFO] |  +- org.apache.directory.server:apacheds-core-integ:jar:1.5.7:test
   [INFO] |  +- ldapsdk:ldapsdk:jar:4.1:test
   [INFO] |  +- org.apache.directory.client.ldap:ldap-client-api:jar:0.1:test
   ```
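
   For reference, if the transitively pulled `ldap-client-api` did need to be kept out, the standard remedy would be a Maven exclusion on `apacheds-server-integ`; a sketch, not part of the PR:

```xml
<dependency>
  <groupId>org.apache.directory.server</groupId>
  <artifactId>apacheds-server-integ</artifactId>
  <version>1.5.7</version>
  <scope>test</scope>
  <exclusions>
    <!-- Keep the transitive ldap-client-api off the test classpath. -->
    <exclusion>
      <groupId>org.apache.directory.client.ldap</groupId>
      <artifactId>ldap-client-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```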





Issue Time Tracking
---

Worklog Id: (was: 836389)
Time Spent: 50m  (was: 40m)

> Upgrade Apache Directory Server to 1.5.7 for release 3.2.
> -
>
> Key: HIVE-26880
> URL: https://issues.apache.org/jira/browse/HIVE-26880
> Project: Hive
>  Issue Type: Improvement
>  Components: Test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
>  Labels: hive-3.2.0-must, pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> branch-3 uses Apache Directory Server in some tests. It currently uses 
> version 1.5.6. This version has a transitive dependency to a SNAPSHOT, making 
> it awkward to build and release. We can upgrade to 1.5.7 to remove the 
> SNAPSHOT dependency.





[jira] [Work logged] (HIVE-26880) Upgrade Apache Directory Server to 1.5.7 for release 3.2.

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26880?focusedWorklogId=836388&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836388
 ]

ASF GitHub Bot logged work on HIVE-26880:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 11:08
Start Date: 02/Jan/23 11:08
Worklog Time Spent: 10m 
  Work Description: zabetak commented on code in PR #3886:
URL: https://github.com/apache/hive/pull/3886#discussion_r1059967623


##
service/pom.xml:
##
@@ -291,13 +291,6 @@
       <scope>test</scope>
     </dependency>
 
-    <dependency>
-      <groupId>org.apache.directory.client.ldap</groupId>
-      <artifactId>ldap-client-api</artifactId>
-      <version>${apache-directory-clientapi.version}</version>
-      <scope>test</scope>
-    </dependency>
-
     <dependency>
       <groupId>org.apache.directory.server</groupId>
       <artifactId>apacheds-server-integ</artifactId>

Review Comment:
   Can you clarify why we don't want to exclude `ldap-client-api` from here as 
it was done in the original commit in master 
(https://github.com/apache/hive/commit/5581eb8a74a4f33b35b7bf70d9ec4e9a95f3b8a0).
   
   When I run `mvn dependency:tree` in service module I see the 
`org.apache.directory.client.ldap:ldap-client-api` dependency coming 
transitively.
   ```
   [INFO] +- org.apache.directory.server:apacheds-server-integ:jar:1.5.7:test
   [INFO] |  +- 
org.apache.directory.server:apacheds-interceptor-kerberos:jar:1.5.7:test
   [INFO] |  |  +- org.apache.directory.server:apacheds-core:jar:1.5.7:test
   [INFO] |  |  |  +- 
org.apache.directory.server:apacheds-core-api:jar:1.5.7:test
   [INFO] |  |  |  |  +- 
org.apache.directory.server:apacheds-core-entry:jar:1.5.7:test
   [INFO] |  |  |  |  \- 
org.apache.directory.server:apacheds-core-constants:jar:1.5.7:test
   [INFO] |  |  |  +- org.apache.directory.server:apacheds-utils:jar:1.5.7:test
   [INFO] |  |  |  \- bouncycastle:bcprov-jdk15:jar:140:test
   [INFO] |  |  \- 
org.apache.directory.server:apacheds-kerberos-shared:jar:1.5.7:test
   [INFO] |  | +- 
org.apache.directory.server:apacheds-core-jndi:jar:1.5.7:test
   [INFO] |  | \- 
org.apache.directory.server:apacheds-protocol-shared:jar:1.5.7:test
   [INFO] |  +- org.apache.directory.server:apacheds-core-integ:jar:1.5.7:test
   [INFO] |  +- ldapsdk:ldapsdk:jar:4.1:test
   [INFO] |  +- org.apache.directory.client.ldap:ldap-client-api:jar:0.1:test
   ```





Issue Time Tracking
---

Worklog Id: (was: 836388)
Time Spent: 40m  (was: 0.5h)

> Upgrade Apache Directory Server to 1.5.7 for release 3.2.
> -
>
> Key: HIVE-26880
> URL: https://issues.apache.org/jira/browse/HIVE-26880
> Project: Hive
>  Issue Type: Improvement
>  Components: Test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
>  Labels: hive-3.2.0-must, pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> branch-3 uses Apache Directory Server in some tests. It currently uses 
> version 1.5.6. This version has a transitive dependency to a SNAPSHOT, making 
> it awkward to build and release. We can upgrade to 1.5.7 to remove the 
> SNAPSHOT dependency.





[jira] [Updated] (HIVE-26896) Backport of Test fixes for lineage3.q and load_static_ptn_into_bucketed_table.q

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-26896:
--
Labels: pull-request-available  (was: )

> Backport of Test fixes for lineage3.q and 
> load_static_ptn_into_bucketed_table.q
> ---
>
> Key: HIVE-26896
> URL: https://issues.apache.org/jira/browse/HIVE-26896
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> These tests were fixed in branch-3.1, so they are being backported to branch-3





[jira] [Work logged] (HIVE-26896) Backport of Test fixes for lineage3.q and load_static_ptn_into_bucketed_table.q

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26896?focusedWorklogId=836387&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836387
 ]

ASF GitHub Bot logged work on HIVE-26896:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 10:52
Start Date: 02/Jan/23 10:52
Worklog Time Spent: 10m 
  Work Description: amanraj2520 opened a new pull request, #3901:
URL: https://github.com/apache/hive/pull/3901

   These tests were fixed in branch-3.1 as part of the Hive 3.1.3 release, so the 
same fixes are being backported to branch-3
   
   
   ### What changes were proposed in this pull request?
   
   
   
   ### Why are the changes needed?
   
   
   
   ### Does this PR introduce _any_ user-facing change?
   
   
   
   ### How was this patch tested?
   
   




Issue Time Tracking
---

Worklog Id: (was: 836387)
Remaining Estimate: 0h
Time Spent: 10m

> Backport of Test fixes for lineage3.q and 
> load_static_ptn_into_bucketed_table.q
> ---
>
> Key: HIVE-26896
> URL: https://issues.apache.org/jira/browse/HIVE-26896
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Critical
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> These tests were fixed in branch-3.1, so they are being backported to branch-3





[jira] [Assigned] (HIVE-26896) Backport of Test fixes for lineage3.q and load_static_ptn_into_bucketed_table.q

2023-01-02 Thread Aman Raj (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Raj reassigned HIVE-26896:
---


> Backport of Test fixes for lineage3.q and 
> load_static_ptn_into_bucketed_table.q
> ---
>
> Key: HIVE-26896
> URL: https://issues.apache.org/jira/browse/HIVE-26896
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Critical
>
> These tests were fixed in branch-3.1, so they are being backported to branch-3





[jira] [Work logged] (HIVE-26832) Implement SHOW PARTITIONS for Iceberg

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26832?focusedWorklogId=836386&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836386
 ]

ASF GitHub Bot logged work on HIVE-26832:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 10:45
Start Date: 02/Jan/23 10:45
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #3849:
URL: https://github.com/apache/hive/pull/3849#issuecomment-1368829672

   Kudos, SonarCloud Quality Gate passed!    [![Quality Gate 
passed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/passed-16px.png
 'Quality Gate 
passed')](https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=3849)
   
   
[![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png
 
'Bug')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3849&resolved=false&types=BUG)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3849&resolved=false&types=BUG)
 [0 
Bugs](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3849&resolved=false&types=BUG)
  
   
[![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png
 
'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3849&resolved=false&types=VULNERABILITY)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3849&resolved=false&types=VULNERABILITY)
 [0 
Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3849&resolved=false&types=VULNERABILITY)
  
   [![Security 
Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png
 'Security 
Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3849&resolved=false&types=SECURITY_HOTSPOT)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3849&resolved=false&types=SECURITY_HOTSPOT)
 [0 Security 
Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=3849&resolved=false&types=SECURITY_HOTSPOT)
  
   [![Code 
Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png
 'Code 
Smell')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3849&resolved=false&types=CODE_SMELL)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3849&resolved=false&types=CODE_SMELL)
 [10 Code 
Smells](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=3849&resolved=false&types=CODE_SMELL)
   
   [![No Coverage 
information](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/NoCoverageInfo-16px.png
 'No Coverage 
information')](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=3849&metric=coverage&view=list)
 No Coverage information  
   [![No Duplication 
information](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/NoDuplicationInfo-16px.png
 'No Duplication 
information')](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=3849&metric=duplicated_lines_density&view=list)
 No Duplication information
   
   




Issue Time Tracking
---

Worklog Id: (was: 836386)
Time Spent: 1h 10m  (was: 1h)

> Implement SHOW PARTITIONS for Iceberg
> -
>
> Key: HIVE-26832
> URL: https://issues.apache.org/jira/browse/HIVE-26832
> Project: Hive
>  Issue Type: New Feature
>Reporter: Simhadri Govindappa
>Assignee: Simhadri Govindappa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The SHOW PARTITIONS command for Iceberg tables should reflect the partition info 
> from the iceberg.partition metadata table, based on the default-spec-id.
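
A minimal illustration of the expected behavior, assuming Hive 4's Iceberg DDL syntax; the table name is hypothetical:

```sql
-- Create an Iceberg table with a partition spec, add data, then inspect partitions.
CREATE TABLE ice_t (a int, b string) PARTITIONED BY SPEC (b) STORED BY ICEBERG;
INSERT INTO ice_t VALUES (1, 'x'), (2, 'y');
SHOW PARTITIONS ice_t;
```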





[jira] [Work logged] (HIVE-26868) Iceberg: Allow IOW on empty table with Partition Evolution

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26868?focusedWorklogId=836385&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836385
 ]

ASF GitHub Bot logged work on HIVE-26868:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 10:43
Start Date: 02/Jan/23 10:43
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on code in PR #3872:
URL: https://github.com/apache/hive/pull/3872#discussion_r1059955353


##
iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergStorageHandler.java:
##
@@ -582,6 +582,11 @@ public void validateSinkDesc(FileSinkDesc sinkDesc) throws SemanticException {
 HiveStorageHandler.super.validateSinkDesc(sinkDesc);
 if (sinkDesc.getInsertOverwrite()) {
   Table table = IcebergTableUtil.getTable(conf, sinkDesc.getTableInfo().getProperties());
+  if (table.currentSnapshot() != null &&
+      "0".equalsIgnoreCase(table.currentSnapshot().summary().get(SnapshotSummary.TOTAL_RECORDS_PROP))) {

Review Comment:
   Changed to compare as a Long.
   Regarding moving the check below: I don't think we need to throw for 
bucketed tables either, once we establish that the table is empty. That check 
was added as part of 
[HIVE-25849](https://issues.apache.org/jira/browse/HIVE-25849) to prevent 
overwriting wrong data or creating duplicates, but once the table is known to 
be empty, that concern no longer applies.
   
   https://github.com/apache/hive/pull/2856#issue-1074445457
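
As a minimal, self-contained illustration of the numeric comparison discussed above (a sketch only: `TOTAL_RECORDS_PROP` here is a stand-in constant for Iceberg's `SnapshotSummary.TOTAL_RECORDS_PROP` key, and `isSnapshotEmpty` is a hypothetical helper, not the actual Hive patch):

```java
import java.util.Map;

public class EmptyTableCheck {
    // Stand-in for Iceberg's SnapshotSummary.TOTAL_RECORDS_PROP summary key.
    static final String TOTAL_RECORDS_PROP = "total-records";

    // Parses the snapshot-summary value as a long instead of comparing it to
    // the string "0", so the check does not depend on the exact formatting
    // of the property value.
    static boolean isSnapshotEmpty(Map<String, String> summary) {
        String total = summary.get(TOTAL_RECORDS_PROP);
        return total != null && Long.parseLong(total) == 0L;
    }

    public static void main(String[] args) {
        System.out.println(isSnapshotEmpty(Map.of(TOTAL_RECORDS_PROP, "0")));   // prints true
        System.out.println(isSnapshotEmpty(Map.of(TOTAL_RECORDS_PROP, "42")));  // prints false
    }
}
```

A missing summary entry is treated as "not known to be empty", which keeps the guard conservative.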





Issue Time Tracking
---

Worklog Id: (was: 836385)
Time Spent: 50m  (was: 40m)

> Iceberg: Allow IOW on empty table with Partition Evolution
> --
>
> Key: HIVE-26868
> URL: https://issues.apache.org/jira/browse/HIVE-26868
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In case an Iceberg table has gone through partition evolution, we don't allow 
> an IOW (insert overwrite) operation on it.
> However, if the table is empty, we can allow an IOW, since there is no data 
> that could be corrupted by the overwrite.
> This helps to compact data and merge the delete files into data files via:
> Truncate -> IOW with the Snapshot ID from before the Truncate.
> Impala uses the same flow for compacting Iceberg tables.





[jira] [Work logged] (HIVE-26890) Disable TestSSL (Done as part of HIVE-21456 in oss/master)

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26890?focusedWorklogId=836381&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836381
 ]

ASF GitHub Bot logged work on HIVE-26890:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 09:58
Start Date: 02/Jan/23 09:58
Worklog Time Spent: 10m 
  Work Description: amanraj2520 commented on PR #3895:
URL: https://github.com/apache/hive/pull/3895#issuecomment-1368791547

   @abstractdog @zabetak @cnauroth Can you please review this?




Issue Time Tracking
---

Worklog Id: (was: 836381)
Time Spent: 0.5h  (was: 20m)

> Disable TestSSL (Done as part of HIVE-21456 in oss/master)
> --
>
> Key: HIVE-26890
> URL: https://issues.apache.org/jira/browse/HIVE-26890
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> TestSSL fails with the following error (this also happens in the Hive 3.1.3 
> release, so we are disabling this test):
> {code:java}
> [ERROR] Tests run: 10, Failures: 1, Errors: 0, Skipped: 2, Time elapsed: 
> 23.143 s <<< FAILURE! - in org.apache.hive.jdbc.TestSSL
> [ERROR] testConnectionWrongCertCN(org.apache.hive.jdbc.TestSSL)  Time 
> elapsed: 0.64 s  <<< FAILURE!
> java.lang.AssertionError
>         at org.junit.Assert.fail(Assert.java:86)
>         at org.junit.Assert.assertTrue(Assert.java:41)
>         at org.junit.Assert.assertTrue(Assert.java:52)
>         at 
> org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN(TestSSL.java:408)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>         at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>         at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>         at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413) 
> {code}





[jira] [Work logged] (HIVE-26868) Iceberg: Allow IOW on empty table with Partition Evolution

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26868?focusedWorklogId=836380&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836380
 ]

ASF GitHub Bot logged work on HIVE-26868:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 09:57
Start Date: 02/Jan/23 09:57
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on code in PR #3872:
URL: https://github.com/apache/hive/pull/3872#discussion_r1059930271


##########
iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergStorageHandler.java:
##########
@@ -582,6 +582,11 @@ public void validateSinkDesc(FileSinkDesc sinkDesc) throws SemanticException {
     HiveStorageHandler.super.validateSinkDesc(sinkDesc);
     if (sinkDesc.getInsertOverwrite()) {
       Table table = IcebergTableUtil.getTable(conf, sinkDesc.getTableInfo().getProperties());
+      if (table.currentSnapshot() != null &&
+          "0".equalsIgnoreCase(table.currentSnapshot().summary().get(SnapshotSummary.TOTAL_RECORDS_PROP))) {

Review Comment:
   should we move the check to line 594?





Issue Time Tracking
---

Worklog Id: (was: 836380)
Time Spent: 40m  (was: 0.5h)

> Iceberg: Allow IOW on empty table with Partition Evolution
> --
>
> Key: HIVE-26868
> URL: https://issues.apache.org/jira/browse/HIVE-26868
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In case an Iceberg table has gone through partition evolution, we don't allow 
> an IOW (insert overwrite) operation on it.
> However, if the table is empty, we can allow an IOW, since there is no data 
> that could be corrupted by the overwrite.
> This helps to compact data and merge the delete files into data files via:
> Truncate -> IOW with the Snapshot ID from before the Truncate.
> Impala uses the same flow for compacting Iceberg tables.





[jira] [Work logged] (HIVE-26868) Iceberg: Allow IOW on empty table with Partition Evolution

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26868?focusedWorklogId=836379&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836379
 ]

ASF GitHub Bot logged work on HIVE-26868:
-

Author: ASF GitHub Bot
Created on: 02/Jan/23 09:53
Start Date: 02/Jan/23 09:53
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on code in PR #3872:
URL: https://github.com/apache/hive/pull/3872#discussion_r1059928248


##########
iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergStorageHandler.java:
##########
@@ -582,6 +582,11 @@ public void validateSinkDesc(FileSinkDesc sinkDesc) throws SemanticException {
     HiveStorageHandler.super.validateSinkDesc(sinkDesc);
     if (sinkDesc.getInsertOverwrite()) {
       Table table = IcebergTableUtil.getTable(conf, sinkDesc.getTableInfo().getProperties());
+      if (table.currentSnapshot() != null &&
+          "0".equalsIgnoreCase(table.currentSnapshot().summary().get(SnapshotSummary.TOTAL_RECORDS_PROP))) {

Review Comment:
   Can we cast to long and compare the numbers?
   
   `Long.parseLong(table.currentSnapshot().summary().get(SnapshotSummary.TOTAL_RECORDS_PROP)) == 0`
   





Issue Time Tracking
---

Worklog Id: (was: 836379)
Time Spent: 0.5h  (was: 20m)

> Iceberg: Allow IOW on empty table with Partition Evolution
> --
>
> Key: HIVE-26868
> URL: https://issues.apache.org/jira/browse/HIVE-26868
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In case an Iceberg table has gone through partition evolution, we don't allow 
> an IOW (insert overwrite) operation on it.
> However, if the table is empty, we can allow an IOW, since there is no data 
> that could be corrupted by the overwrite.
> This helps to compact data and merge the delete files into data files via:
> Truncate -> IOW with the Snapshot ID from before the Truncate.
> Impala uses the same flow for compacting Iceberg tables.


