[jira] [Comment Edited] (HIVE-28042) DigestMD5 token expired or does not exist error while opening a new connection to HMS

2024-01-30 Thread Vikram Ahuja (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812191#comment-17812191
 ] 

Vikram Ahuja edited comment on HIVE-28042 at 1/31/24 7:02 AM:
--

Thanks for the comment, [~zhangbutao]. The tokenStore has ZooKeeper, DB and memory based 
implementations; we are using the ZooKeeper based implementation in our scenario.

However, this issue occurs regardless of the TokenStore implementation (ZooKeeper, DB or 
memory): the expiry thread removes tokens once the renewal date has passed, whatever the 
underlying tokenStore is.
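
To illustrate why the behaviour is independent of the backend, here is a rough sketch of the 
store interface the expiry thread works through (method names are from memory and may differ 
slightly on a given branch). ZooKeeperTokenStore, DBTokenStore and MemoryTokenStore all 
implement it, so the removal decision hits each of them the same way.

{code:java}
import java.util.List;
import org.apache.hadoop.hive.metastore.security.DelegationTokenIdentifier;
import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.DelegationTokenInformation;

// Rough sketch of the store interface (names approximate). The expiry thread decides
// what to remove above this interface, so the chosen backend does not matter.
public interface DelegationTokenStore {
  List<DelegationTokenIdentifier> getAllDelegationTokenIdentifiers();
  DelegationTokenInformation getToken(DelegationTokenIdentifier tokenIdentifier);
  boolean removeToken(DelegationTokenIdentifier tokenIdentifier);
  // implementations: ZooKeeperTokenStore, DBTokenStore, MemoryTokenStore
}
{code}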

 


was (Author: vikramahuja_):
The tokenStore has zookeeper, DB and memory based implementations. We are using 
Zookeeper based implementation in our scenarios. However this issue is 
regardless of the implementations as TokenStore's implementation(zookeeper, DB 
and memory). The expiry thread that is removing token is also removing tokens 
post renewal date irrespective of tokenStore Implementation. The issue will 
exist for all implementations of tokenStores.

> DigestMD5 token expired or does not exist error while opening a new 
> connection to HMS
> -
>
> Key: HIVE-28042
> URL: https://issues.apache.org/jira/browse/HIVE-28042
> Project: Hive
>  Issue Type: Bug
>Reporter: Vikram Ahuja
>Assignee: Vikram Ahuja
>Priority: Major
>  Labels: pull-request-available
>
> Hello,
> In our deployment we are facing the following exception in the HMS logs when 
> a HMS connection is opened from the HS2 in cases where a session is open for 
> a long time leading to query failures:
> {code:java}
> 2024-01-24T02:11:21,324 ERROR [TThreadPoolServer WorkerProcess-760394]: 
> transport.TSaslTransport (TSaslTransport.java:open) - SASL negotiation 
> failurejavax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring 
> password    
> at 
> com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java)
>     
> at 
> com.sun.security.sasl.digest.DigestMD5Server.evaluateResponse(DigestMD5Server.java)
>     
> at 
> org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java)
>     at 
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java)    
> at 
> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java)
>     
> at 
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java)
>     
> at java.security.AccessController.doPrivileged(Native Method)    
> at javax.security.auth.Subject.doAs(Subject.javA)    
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java)
>     
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java)
>     
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java) 
>    
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java)   
>  
> at java.lang.Thread.run(Thread.java)Caused by: 
> org.apache.hadoop.security.token.SecretManager$InvalidToken: token expired or 
> does not exist: HIVE_DELEGATION_TOKEN owner=***, renewer=***, 
> realUser=*, issueDate=1705973286139, maxDate=1706578086139, 
> sequenceNumber=3294063, masterKeyId=7601    
> at 
> org.apache.hadoop.hive.metastore.security.TokenStoreDelegationTokenSecretManager.retrievePassword(TokenStoreDelegationTokenSecretManager.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.TokenStoreDelegationTokenSecretManager.retrievePassword(TokenStoreDelegationTokenSecretManager.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$SaslDigestCallbackHandler.getPassword(HadoopThriftAuthBridge.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$SaslDigestCallbackHandler.handle(HadoopThriftAuthBridge.java)
>     
> at 
> com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java)
>     ... 15 more {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-28050) Disable Incremental non aggregated materialized view rebuild in presence of delete operations

2024-01-30 Thread Krisztian Kasa (Jira)
Krisztian Kasa created HIVE-28050:
-

 Summary: Disable Incremental non aggregated materialized view 
rebuild in presence of delete operations
 Key: HIVE-28050
 URL: https://issues.apache.org/jira/browse/HIVE-28050
 Project: Hive
  Issue Type: Bug
  Components: Materialized views
Reporter: Krisztian Kasa
Assignee: Krisztian Kasa
 Fix For: 4.1.0


To support incremental rebuild of materialized views whose definition does not contain an 
aggregate, in the presence of delete operations on any of their source tables, the records of 
the source tables need to be uniquely identified and joined with the records present in the 
view.

One possibility is to project the ROW_ID of each source table in the view definition, but its 
writeId component changes on delete.

Another way is to project primary key or unique key columns, but these constraints are not 
enforced in Hive.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-28049) Backport of HIVE-21862, HIVE-20437, HIVE-22589, HIVE-22840, HIVE-24074, HIVE-25104 to branch-3

2024-01-30 Thread Sankar Hariappan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-28049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-28049:

Description: Backport of HIVE-21862, HIVE-20437, HIVE-22589, HIVE-22840, 
HIVE-24074, HIVE-25104 to branch-3

> Backport of HIVE-21862, HIVE-20437, HIVE-22589, HIVE-22840, HIVE-24074, 
> HIVE-25104 to branch-3
> --
>
> Key: HIVE-28049
> URL: https://issues.apache.org/jira/browse/HIVE-28049
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Major
> Fix For: 3.2.0
>
>
> Backport of HIVE-21862, HIVE-20437, HIVE-22589, HIVE-22840, HIVE-24074, 
> HIVE-25104 to branch-3



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-28049) Backport of HIVE-21862, HIVE-20437, HIVE-22589, HIVE-22840, HIVE-24074, HIVE-25104 to branch-3

2024-01-30 Thread Sankar Hariappan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-28049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-28049:

Summary: Backport of HIVE-21862, HIVE-20437, HIVE-22589, HIVE-22840, 
HIVE-24074, HIVE-25104 to branch-3  (was: Backport of HIVE-21862, HIVE-20437, 
HIVE-22589, HIVE-22840, HIVE-24074, HIVE-25104)

> Backport of HIVE-21862, HIVE-20437, HIVE-22589, HIVE-22840, HIVE-24074, 
> HIVE-25104 to branch-3
> --
>
> Key: HIVE-28049
> URL: https://issues.apache.org/jira/browse/HIVE-28049
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Major
> Fix For: 3.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HIVE-28049) Backport of HIVE-21862, HIVE-20437, HIVE-22589, HIVE-22840, HIVE-24074, HIVE-25104

2024-01-30 Thread Sankar Hariappan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-28049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan resolved HIVE-28049.
-
   Fix Version/s: 3.2.0
Target Version/s: 3.2.0
  Resolution: Fixed

> Backport of HIVE-21862, HIVE-20437, HIVE-22589, HIVE-22840, HIVE-24074, 
> HIVE-25104
> --
>
> Key: HIVE-28049
> URL: https://issues.apache.org/jira/browse/HIVE-28049
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Major
> Fix For: 3.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-28049) Backport of HIVE-21862, HIVE-20437, HIVE-22589, HIVE-22840, HIVE-24074, HIVE-25104

2024-01-30 Thread Aman Raj (Jira)
Aman Raj created HIVE-28049:
---

 Summary: Backport of HIVE-21862, HIVE-20437, HIVE-22589, 
HIVE-22840, HIVE-24074, HIVE-25104
 Key: HIVE-28049
 URL: https://issues.apache.org/jira/browse/HIVE-28049
 Project: Hive
  Issue Type: Sub-task
Reporter: Aman Raj
Assignee: Aman Raj






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (HIVE-28048) Hive cannot run ORDER BY queries on Iceberg tables partitioned by decimal columns

2024-01-30 Thread Simhadri Govindappa (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812500#comment-17812500
 ] 

Simhadri Govindappa edited comment on HIVE-28048 at 1/30/24 10:36 PM:
--

This seems to be the same as https://issues.apache.org/jira/browse/HIVE-27938.
I have a PR for this; please help with the review: 
[https://github.com/apache/hive/pull/5048]
Thanks!


was (Author: simhadri-g):
This seems to be the same as https://issues.apache.org/jira/browse/HIVE-27938 
I have a PR up for review. [https://github.com/apache/hive/pull/5048] 

> Hive cannot run ORDER BY queries on Iceberg tables partitioned by decimal 
> columns
> -
>
> Key: HIVE-28048
> URL: https://issues.apache.org/jira/browse/HIVE-28048
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltán Borók-Nagy
>Priority: Major
>  Labels: iceberg
>
> Repro:
> {noformat}
> create table test_dec (d decimal(8,4), i int)
> partitioned by spec (d)
> stored by iceberg;
> insert into test_dec values (3.4, 5), (4.5, 6);
> select * from test_dec order by i;
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-28048) Hive cannot run ORDER BY queries on Iceberg tables partitioned by decimal columns

2024-01-30 Thread Simhadri Govindappa (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812500#comment-17812500
 ] 

Simhadri Govindappa commented on HIVE-28048:


This seems to be the same as https://issues.apache.org/jira/browse/HIVE-27938.
I have a PR up for review: [https://github.com/apache/hive/pull/5048]

> Hive cannot run ORDER BY queries on Iceberg tables partitioned by decimal 
> columns
> -
>
> Key: HIVE-28048
> URL: https://issues.apache.org/jira/browse/HIVE-28048
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltán Borók-Nagy
>Priority: Major
>  Labels: iceberg
>
> Repro:
> {noformat}
> create table test_dec (d decimal(8,4), i int)
> partitioned by spec (d)
> stored by iceberg;
> insert into test_dec values (3.4, 5), (4.5, 6);
> select * from test_dec order by i;
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (HIVE-28048) Hive cannot run ORDER BY queries on Iceberg tables partitioned by decimal columns

2024-01-30 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812496#comment-17812496
 ] 

Ayush Saxena edited comment on HIVE-28048 at 1/30/24 10:18 PM:
---

If it is only decimal, maybe we can convert the BigDecimal and handle it that way; that is 
what comes to mind, and it does work:
{noformat}
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatchCtx.java 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatchCtx.java
index 0abbd59b9e..7b202c4977 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatchCtx.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatchCtx.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.hive.ql.exec.vector;
 
 import java.io.IOException;
+import java.math.BigDecimal;
 import java.util.Arrays;
 import java.util.LinkedHashMap;
 import java.util.Map;
@@ -36,6 +37,7 @@
 import org.apache.hadoop.hive.ql.io.BucketIdentifier;
 import org.apache.hadoop.hive.ql.io.HiveFileFormatUtils;
 import org.apache.hadoop.hive.ql.io.IOPrepareCache;
+import org.apache.hadoop.hive.ql.metadata.Hive;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
 import org.apache.hadoop.hive.ql.metadata.VirtualColumn;
 import org.apache.hadoop.hive.ql.plan.MapWork;
@@ -585,7 +587,11 @@ public void addPartitionColsToBatch(ColumnVector col, 
Object value, int colIndex
           dv.isNull[0] = true;
           dv.isRepeating = true;
         } else {
-          dv.fill((HiveDecimal) value);
+          if (value instanceof BigDecimal) {
+            dv.fill(HiveDecimal.create((BigDecimal) value));
+          } else {
+            dv.fill((HiveDecimal) value);
+          }
         }
       }
     }
{noformat}
But if types other than decimal are also affected, then for the partition value we should 
probably apply the getPrimitiveJavaObject logic, extracting the relevant parts from the 
{{IcebergObjectInspector}} {{primitive}} method.

 

cc. [~dkuzmenko] 


was (Author: ayushtkn):
if that is only decimal, maybe we can do a convert for bigdecimal and handle 
this, that is what comes into my mind and what even works
{noformat}
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatchCtx.java 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatchCtx.java
index 0abbd59b9e..7b202c4977 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatchCtx.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatchCtx.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.hive.ql.exec.vector;
 
 import java.io.IOException;
+import java.math.BigDecimal;
 import java.util.Arrays;
 import java.util.LinkedHashMap;
 import java.util.Map;
@@ -36,6 +37,7 @@
 import org.apache.hadoop.hive.ql.io.BucketIdentifier;
 import org.apache.hadoop.hive.ql.io.HiveFileFormatUtils;
 import org.apache.hadoop.hive.ql.io.IOPrepareCache;
+import org.apache.hadoop.hive.ql.metadata.Hive;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
 import org.apache.hadoop.hive.ql.metadata.VirtualColumn;
 import org.apache.hadoop.hive.ql.plan.MapWork;
@@ -585,7 +587,11 @@ public void addPartitionColsToBatch(ColumnVector col, 
Object value, int colIndex
           dv.isNull[0] = true;
           dv.isRepeating = true;
         } else {
-          dv.fill((HiveDecimal) value);
+          if (value instanceof BigDecimal) {
+            dv.fill(HiveDecimal.create((BigDecimal) value));
+          } else {
+            dv.fill((HiveDecimal) value);
+          }
         }
       }
     }
{noformat}
but if it is other than decimal, maybe for partition value, we should do that 
getPrimitiveJavaObject logic by extracting some stuff from 
{{IcebergObjectInspector}} {{primitive}} method

> Hive cannot run ORDER BY queries on Iceberg tables partitioned by decimal 
> columns
> -
>
> Key: HIVE-28048
> URL: https://issues.apache.org/jira/browse/HIVE-28048
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltán Borók-Nagy
>Priority: Major
>  Labels: iceberg
>
> Repro:
> {noformat}
> create table test_dec (d decimal(8,4), i int)
> partitioned by spec (d)
> stored by iceberg;
> insert into test_dec values (3.4, 5), (4.5, 6);
> select * from test_dec order by i;
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-28048) Hive cannot run ORDER BY queries on Iceberg tables partitioned by decimal columns

2024-01-30 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812496#comment-17812496
 ] 

Ayush Saxena commented on HIVE-28048:
-

If it is only decimal, maybe we can convert the BigDecimal and handle it that way; that is 
what comes to mind, and it does work:
{noformat}
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatchCtx.java 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatchCtx.java
index 0abbd59b9e..7b202c4977 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatchCtx.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatchCtx.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.hive.ql.exec.vector;
 
 import java.io.IOException;
+import java.math.BigDecimal;
 import java.util.Arrays;
 import java.util.LinkedHashMap;
 import java.util.Map;
@@ -36,6 +37,7 @@
 import org.apache.hadoop.hive.ql.io.BucketIdentifier;
 import org.apache.hadoop.hive.ql.io.HiveFileFormatUtils;
 import org.apache.hadoop.hive.ql.io.IOPrepareCache;
+import org.apache.hadoop.hive.ql.metadata.Hive;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
 import org.apache.hadoop.hive.ql.metadata.VirtualColumn;
 import org.apache.hadoop.hive.ql.plan.MapWork;
@@ -585,7 +587,11 @@ public void addPartitionColsToBatch(ColumnVector col, 
Object value, int colIndex
           dv.isNull[0] = true;
           dv.isRepeating = true;
         } else {
-          dv.fill((HiveDecimal) value);
+          if (value instanceof BigDecimal) {
+            dv.fill(HiveDecimal.create((BigDecimal) value));
+          } else {
+            dv.fill((HiveDecimal) value);
+          }
         }
       }
     }
{noformat}
But if types other than decimal are also affected, then for the partition value we should 
probably apply the getPrimitiveJavaObject logic, extracting the relevant parts from the 
{{IcebergObjectInspector}} {{primitive}} method.

> Hive cannot run ORDER BY queries on Iceberg tables partitioned by decimal 
> columns
> -
>
> Key: HIVE-28048
> URL: https://issues.apache.org/jira/browse/HIVE-28048
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltán Borók-Nagy
>Priority: Major
>  Labels: iceberg
>
> Repro:
> {noformat}
> create table test_dec (d decimal(8,4), i int)
> partitioned by spec (d)
> stored by iceberg;
> insert into test_dec values (3.4, 5), (4.5, 6);
> select * from test_dec order by i;
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-28048) Hive cannot run ORDER BY queries on Iceberg tables partitioned by decimal columns

2024-01-30 Thread Jira
Zoltán Borók-Nagy created HIVE-28048:


 Summary: Hive cannot run ORDER BY queries on Iceberg tables 
partitioned by decimal columns
 Key: HIVE-28048
 URL: https://issues.apache.org/jira/browse/HIVE-28048
 Project: Hive
  Issue Type: Bug
Reporter: Zoltán Borók-Nagy


Repro:

{noformat}
create table test_dec (d decimal(8,4), i int)
partitioned by spec (d)
stored by iceberg;

insert into test_dec values (3.4, 5), (4.5, 6);

select * from test_dec order by i;
{noformat}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-28042) DigestMD5 token expired or does not exist error while opening a new connection to HMS

2024-01-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-28042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-28042:
--
Labels: pull-request-available  (was: )

> DigestMD5 token expired or does not exist error while opening a new 
> connection to HMS
> -
>
> Key: HIVE-28042
> URL: https://issues.apache.org/jira/browse/HIVE-28042
> Project: Hive
>  Issue Type: Bug
>Reporter: Vikram Ahuja
>Assignee: Vikram Ahuja
>Priority: Major
>  Labels: pull-request-available
>
> Hello,
> In our deployment we are facing the following exception in the HMS logs when 
> a HMS connection is opened from the HS2 in cases where a session is open for 
> a long time leading to query failures:
> {code:java}
> 2024-01-24T02:11:21,324 ERROR [TThreadPoolServer WorkerProcess-760394]: 
> transport.TSaslTransport (TSaslTransport.java:open) - SASL negotiation 
> failurejavax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring 
> password    
> at 
> com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java)
>     
> at 
> com.sun.security.sasl.digest.DigestMD5Server.evaluateResponse(DigestMD5Server.java)
>     
> at 
> org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java)
>     at 
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java)    
> at 
> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java)
>     
> at 
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java)
>     
> at java.security.AccessController.doPrivileged(Native Method)    
> at javax.security.auth.Subject.doAs(Subject.javA)    
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java)
>     
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java)
>     
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java) 
>    
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java)   
>  
> at java.lang.Thread.run(Thread.java)Caused by: 
> org.apache.hadoop.security.token.SecretManager$InvalidToken: token expired or 
> does not exist: HIVE_DELEGATION_TOKEN owner=***, renewer=***, 
> realUser=*, issueDate=1705973286139, maxDate=1706578086139, 
> sequenceNumber=3294063, masterKeyId=7601    
> at 
> org.apache.hadoop.hive.metastore.security.TokenStoreDelegationTokenSecretManager.retrievePassword(TokenStoreDelegationTokenSecretManager.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.TokenStoreDelegationTokenSecretManager.retrievePassword(TokenStoreDelegationTokenSecretManager.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$SaslDigestCallbackHandler.getPassword(HadoopThriftAuthBridge.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$SaslDigestCallbackHandler.handle(HadoopThriftAuthBridge.java)
>     
> at 
> com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java)
>     ... 15 more {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-28047) Iceberg: Major QB Compaction with a single commit

2024-01-30 Thread Dmitriy Fingerman (Jira)
Dmitriy Fingerman created HIVE-28047:


 Summary: Iceberg: Major QB Compaction with a single commit
 Key: HIVE-28047
 URL: https://issues.apache.org/jira/browse/HIVE-28047
 Project: Hive
  Issue Type: Improvement
Reporter: Dmitriy Fingerman
Assignee: Dmitriy Fingerman


Currently, Hive Iceberg Major QB compaction happens in two commits within a single 
transaction: the first commit deletes all existing files and the second commit inserts the 
compacted data files. If a user queries the table using the snapshot id of the first commit 
after the compaction, no records are returned, which is invalid. We need to find a way to do 
it in a single commit (see the sketch below).
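
If I read the Iceberg API correctly, a single-commit file swap is already available there: a 
RewriteFiles operation (obtained via Table.newRewrite()) removes the old data files and adds 
the compacted ones in one atomic snapshot, so no intermediate empty-table snapshot is ever 
visible. A hedged sketch, not the actual Hive compactor code:

{code:java}
import java.util.Set;
import org.apache.iceberg.DataFile;
import org.apache.iceberg.Table;

// Illustrative sketch only -- not the Hive compaction implementation.
public class SingleCommitCompactionSketch {
  static void replaceFiles(Table table, Set<DataFile> oldFiles, Set<DataFile> compactedFiles) {
    table.newRewrite()
        .rewriteFiles(oldFiles, compactedFiles) // delete old + add new in one operation
        .commit();                              // produces exactly one new snapshot
  }
}
{code}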



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-28046) reuse SERDE constants in hive-exec module

2024-01-30 Thread Michal Lorek (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-28046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michal Lorek updated HIVE-28046:

Fix Version/s: 4.0.0

> reuse SERDE constants in hive-exec module
> -
>
> Key: HIVE-28046
> URL: https://issues.apache.org/jira/browse/HIVE-28046
> Project: Hive
>  Issue Type: Improvement
>Reporter: Michal Lorek
>Assignee: Michal Lorek
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> Reuse SERDE constants in hive-exec module instead of using common string 
> literals.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-28046) reuse SERDE constants in hive-exec module

2024-01-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-28046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-28046:
--
Labels: pull-request-available  (was: )

> reuse SERDE constants in hive-exec module
> -
>
> Key: HIVE-28046
> URL: https://issues.apache.org/jira/browse/HIVE-28046
> Project: Hive
>  Issue Type: Improvement
>Reporter: Michal Lorek
>Assignee: Michal Lorek
>Priority: Minor
>  Labels: pull-request-available
>
> Reuse SERDE constants in hive-exec module instead of using common string 
> literals.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-28046) reuse SERDE constants in hive-exec module

2024-01-30 Thread Michal Lorek (Jira)
Michal Lorek created HIVE-28046:
---

 Summary: reuse SERDE constants in hive-exec module
 Key: HIVE-28046
 URL: https://issues.apache.org/jira/browse/HIVE-28046
 Project: Hive
  Issue Type: Improvement
Reporter: Michal Lorek
Assignee: Michal Lorek


Reuse SERDE constants in hive-exec module instead of using common string 
literals.
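
For illustration only (not taken from the actual patch), the change amounts to replacing bare 
literals such as "columns" or "field.delim" with the shared constants from 
org.apache.hadoop.hive.serde.serdeConstants:

{code:java}
import java.util.Properties;
import org.apache.hadoop.hive.serde.serdeConstants;

// Illustrative example; the real patch updates existing call sites in hive-exec.
public class SerdeConstantsExample {
  static void configure(Properties props) {
    // Reusing the constants instead of raw literals catches typos at compile time.
    props.setProperty(serdeConstants.LIST_COLUMNS, "id,name");         // "columns"
    props.setProperty(serdeConstants.LIST_COLUMN_TYPES, "int,string"); // "columns.types"
    props.setProperty(serdeConstants.FIELD_DELIM, ",");                // "field.delim"
  }
}
{code}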



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (HIVE-28042) DigestMD5 token expired or does not exist error while opening a new connection to HMS

2024-01-30 Thread Vikram Ahuja (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812151#comment-17812151
 ] 

Vikram Ahuja edited comment on HIVE-28042 at 1/30/24 8:56 AM:
--

*Another instance of this issue:*

 
{code:java}
2024-01-24T02:11:21,324 ERROR [TThreadPoolServer WorkerProcess-760394]: 
transport.TSaslTransport (TSaslTransport.java:open) 
- SASL negotiation failurejavax.security.sasl.SaslException: DIGEST-MD5: IO 
error acquiring password
at 
com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java)
    
at 
com.sun.security.sasl.digest.DigestMD5Server.evaluateResponse(DigestMD5Server.java)
    
at 
org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java)
    
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java)    
at 
org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java)
    
at 
org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java)
    
at 
org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java)
    
at 
org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java)
    
at java.security.AccessController.doPrivileged(Native Method)    
at javax.security.auth.Subject.doAs(Subject.javA)    
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java) 
   
at 
org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java)
    
at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java)
    
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java)   
 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java)  
  
at java.lang.Thread.run(Thread.java)Caused by: 
org.apache.hadoop.security.token.SecretManager$InvalidToken: token expired or 
does not exist: HIVE_DELEGATION_TOKEN owner=***, renewer=***, 
realUser=*, issueDate=1705973286139, maxDate=1706578086139, 
sequenceNumber=3294063, masterKeyId=7601    
at 
org.apache.hadoop.hive.metastore.security.TokenStoreDelegationTokenSecretManager.retrievePassword(TokenStoreDelegationTokenSecretManager.java)
    
at 
org.apache.hadoop.hive.metastore.security.TokenStoreDelegationTokenSecretManager.retrievePassword(TokenStoreDelegationTokenSecretManager.java)
    
at 
org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$SaslDigestCallbackHandler.getPassword(HadoopThriftAuthBridge.java)
    
at 
org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$SaslDigestCallbackHandler.handle(HadoopThriftAuthBridge.java)
    at 
com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java)
    ... 15 more {code}
 

 

*Analysis of the issue:*

This particular issue only happens when HS2 tries to open a new DIGEST-MD5 based Thrift 
TSaslClientTransport while the session has been open for a long time. Whenever a new 
transport is opened, it authenticates via a retrievePassword call using the token stored in 
the tokenStore. The tokenStore has ZooKeeper, DB and memory based implementations, but the 
issue occurs regardless of the implementation.

HS2 normally reuses the same metaStoreClient object, embedded in Hive.java, across all 
connections, but in some cases we have observed it recreating a new metaStoreClient with a 
fresh connection (TSaslClientTransport). Two such cases that I discovered were:
 # MSCK repair
 # RetryingMetaStoreClient retries after HMS issues (applicable to any SQL query that 
interacts with the HMS)

 

*Root cause of this issue:*

There is a background thread called ExpiredTokenRemover running in HMS (class: 
TokenStoreDelegationTokenSecretManager.java). This expiry thread removes a token from the 
tokenStore once the renewal time has passed, in addition to removing it after the expiry 
time, but it should only remove it after the expiry time, since the token can still be 
renewed until then.

 

I will raise a fix for this by changing the code so that a token is no longer deleted merely 
because its renewal time has passed (a rough sketch follows).
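
Roughly, the direction of the change is to key the removal on the token's maxDate rather 
than on its renew date. A hedged sketch (names approximate; the real code is the 
expired-token removal loop in TokenStoreDelegationTokenSecretManager, not this helper):

{code:java}
import org.apache.hadoop.hive.metastore.security.DelegationTokenIdentifier;
import org.apache.hadoop.hive.metastore.security.DelegationTokenStore;

// Sketch only -- not the actual patch. Remove a token only once it can no longer
// be renewed (now > maxDate), instead of as soon as its renew date has passed.
public class ExpiredTokenRemovalSketch {
  static void removeExpiredTokens(DelegationTokenStore tokenStore) {
    long now = System.currentTimeMillis();
    for (DelegationTokenIdentifier id : tokenStore.getAllDelegationTokenIdentifiers()) {
      if (now > id.getMaxDate()) {   // was effectively a check against the renew date
        tokenStore.removeToken(id);
      }
    }
  }
}
{code}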


was (Author: vikramahuja_):
*Another instance of this issue:*

 
{code:java}
2024-01-24T02:11:21,324 ERROR [TThreadPoolServer WorkerProcess-760394]: 
transport.TSaslTransport (TSaslTransport.java:open) 
- SASL negotiation failurejavax.security.sasl.SaslException: DIGEST-MD5: IO 
error acquiring password
at 

[jira] [Resolved] (HIVE-28017) Add generated protobuf code

2024-01-30 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-28017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HIVE-28017.
-
Fix Version/s: 4.1.0
   Resolution: Fixed

> Add generated protobuf code
> ---
>
> Key: HIVE-28017
> URL: https://issues.apache.org/jira/browse/HIVE-28017
> Project: Hive
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.1.0
>
>
> HIVE-26790 upgraded protobuf, but didn't generate the code wrt the newer 
> version



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (HIVE-28042) DigestMD5 token expired or does not exist error while opening a new connection to HMS

2024-01-30 Thread Vikram Ahuja (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812191#comment-17812191
 ] 

Vikram Ahuja edited comment on HIVE-28042 at 1/30/24 8:34 AM:
--

The tokenStore has ZooKeeper, DB and memory based implementations; we are using the ZooKeeper 
based implementation in our scenario. However, this issue occurs regardless of the TokenStore 
implementation (ZooKeeper, DB or memory): the expiry thread removes tokens once the renewal 
date has passed, whatever the underlying tokenStore is, so the issue exists for all 
tokenStore implementations.


was (Author: vikramahuja_):
The tokenStore has zookeeper, DB and memory based implementations. We are using 
Zookeeper based implementation in our scenarios. However this issue is 
regardless of the implementations as TokenStore's implementation(zookeeper, DB 
and memory). The expiry thread that is removing token is also removing tokens 
post renewal date irrespective of tokenStore Implementation. The issue will 
exist will all implementations of tokenStores.

> DigestMD5 token expired or does not exist error while opening a new 
> connection to HMS
> -
>
> Key: HIVE-28042
> URL: https://issues.apache.org/jira/browse/HIVE-28042
> Project: Hive
>  Issue Type: Bug
>Reporter: Vikram Ahuja
>Assignee: Vikram Ahuja
>Priority: Major
>
> Hello,
> In our deployment we are facing the following exception in the HMS logs when 
> a HMS connection is opened from the HS2 in cases where a session is open for 
> a long time leading to query failures:
> {code:java}
> 2024-01-24T02:11:21,324 ERROR [TThreadPoolServer WorkerProcess-760394]: 
> transport.TSaslTransport (TSaslTransport.java:open) - SASL negotiation 
> failurejavax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring 
> password    
> at 
> com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java)
>     
> at 
> com.sun.security.sasl.digest.DigestMD5Server.evaluateResponse(DigestMD5Server.java)
>     
> at 
> org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java)
>     at 
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java)    
> at 
> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java)
>     
> at 
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java)
>     
> at java.security.AccessController.doPrivileged(Native Method)    
> at javax.security.auth.Subject.doAs(Subject.javA)    
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java)
>     
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java)
>     
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java) 
>    
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java)   
>  
> at java.lang.Thread.run(Thread.java)Caused by: 
> org.apache.hadoop.security.token.SecretManager$InvalidToken: token expired or 
> does not exist: HIVE_DELEGATION_TOKEN owner=***, renewer=***, 
> realUser=*, issueDate=1705973286139, maxDate=1706578086139, 
> sequenceNumber=3294063, masterKeyId=7601    
> at 
> org.apache.hadoop.hive.metastore.security.TokenStoreDelegationTokenSecretManager.retrievePassword(TokenStoreDelegationTokenSecretManager.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.TokenStoreDelegationTokenSecretManager.retrievePassword(TokenStoreDelegationTokenSecretManager.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$SaslDigestCallbackHandler.getPassword(HadoopThriftAuthBridge.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$SaslDigestCallbackHandler.handle(HadoopThriftAuthBridge.java)
>     
> at 
> com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java)
>     ... 15 more {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (HIVE-28042) DigestMD5 token expired or does not exist error while opening a new connection to HMS

2024-01-30 Thread Vikram Ahuja (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812151#comment-17812151
 ] 

Vikram Ahuja edited comment on HIVE-28042 at 1/30/24 8:33 AM:
--

*Another instance of this issue:*

 
{code:java}
2024-01-24T02:11:21,324 ERROR [TThreadPoolServer WorkerProcess-760394]: 
transport.TSaslTransport (TSaslTransport.java:open) 
- SASL negotiation failurejavax.security.sasl.SaslException: DIGEST-MD5: IO 
error acquiring password
at 
com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java)
    
at 
com.sun.security.sasl.digest.DigestMD5Server.evaluateResponse(DigestMD5Server.java)
    
at 
org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java)
    
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java)    
at 
org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java)
    
at 
org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java)
    
at 
org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java)
    
at 
org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java)
    
at java.security.AccessController.doPrivileged(Native Method)    
at javax.security.auth.Subject.doAs(Subject.javA)    
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java) 
   
at 
org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java)
    
at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java)
    
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java)   
 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java)  
  
at java.lang.Thread.run(Thread.java)Caused by: 
org.apache.hadoop.security.token.SecretManager$InvalidToken: token expired or 
does not exist: HIVE_DELEGATION_TOKEN owner=***, renewer=***, 
realUser=*, issueDate=1705973286139, maxDate=1706578086139, 
sequenceNumber=3294063, masterKeyId=7601    
at 
org.apache.hadoop.hive.metastore.security.TokenStoreDelegationTokenSecretManager.retrievePassword(TokenStoreDelegationTokenSecretManager.java)
    
at 
org.apache.hadoop.hive.metastore.security.TokenStoreDelegationTokenSecretManager.retrievePassword(TokenStoreDelegationTokenSecretManager.java)
    
at 
org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$SaslDigestCallbackHandler.getPassword(HadoopThriftAuthBridge.java)
    
at 
org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$SaslDigestCallbackHandler.handle(HadoopThriftAuthBridge.java)
    at 
com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java)
    ... 15 more {code}
 

 

*Analysis of the issue:*

This particular issue only happens when HS2 tries to open a new DIGEST-MD5 based Thrift 
TSaslClientTransport while the session has been open for a long time. Whenever a new 
transport is opened, it does a retrievePassword call using the token stored in the 
tokenStore. The tokenStore has ZooKeeper, DB and memory based implementations, but the issue 
occurs regardless of the implementation.

HS2 normally reuses the same metaStoreClient object, embedded in Hive.java, across all 
connections, but in some cases we have observed it recreating a new metaStoreClient with a 
fresh connection (TSaslClientTransport). Two such cases that I discovered were:
 # MSCK repair
 # RetryingMetaStoreClient retries after HMS issues (applicable to any SQL query that 
interacts with the HMS)

 

*Root cause of this issue:*

There is a background thread called ExpiredTokenRemover running in HMS (class: 
TokenStoreDelegationTokenSecretManager.java). This expiry thread removes a token from the 
tokenStore once the renewal time has passed, in addition to removing it after the expiry 
time, but it should only remove it after the expiry time, since the token can still be 
renewed until then.

 

I will raise a fix for this by changing the code so that a token is no longer deleted merely 
because its renewal time has passed.


was (Author: vikramahuja_):
*Another instance of this issue:*

 
{code:java}
2024-01-24T02:11:21,324 ERROR [TThreadPoolServer WorkerProcess-760394]: 
transport.TSaslTransport (TSaslTransport.java:open) 
- SASL negotiation failurejavax.security.sasl.SaslException: DIGEST-MD5: IO 
error acquiring password
at 
com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java)
    
at 

[jira] [Commented] (HIVE-28042) DigestMD5 token expired or does not exist error while opening a new connection to HMS

2024-01-30 Thread Vikram Ahuja (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812191#comment-17812191
 ] 

Vikram Ahuja commented on HIVE-28042:
-

The tokenStore has ZooKeeper, DB and memory based implementations; we are using the ZooKeeper 
based implementation in our scenario. However, this issue occurs regardless of the TokenStore 
implementation (ZooKeeper, DB or memory): the expiry thread removes tokens once the renewal 
date has passed, whatever the underlying tokenStore is, so the issue exists for all 
tokenStore implementations.

> DigestMD5 token expired or does not exist error while opening a new 
> connection to HMS
> -
>
> Key: HIVE-28042
> URL: https://issues.apache.org/jira/browse/HIVE-28042
> Project: Hive
>  Issue Type: Bug
>Reporter: Vikram Ahuja
>Assignee: Vikram Ahuja
>Priority: Major
>
> Hello,
> In our deployment we are facing the following exception in the HMS logs when 
> a HMS connection is opened from the HS2 in cases where a session is open for 
> a long time leading to query failures:
> {code:java}
> 2024-01-24T02:11:21,324 ERROR [TThreadPoolServer WorkerProcess-760394]: 
> transport.TSaslTransport (TSaslTransport.java:open) - SASL negotiation 
> failurejavax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring 
> password    
> at 
> com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java)
>     
> at 
> com.sun.security.sasl.digest.DigestMD5Server.evaluateResponse(DigestMD5Server.java)
>     
> at 
> org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java)
>     at 
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java)    
> at 
> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java)
>     
> at 
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java)
>     
> at java.security.AccessController.doPrivileged(Native Method)    
> at javax.security.auth.Subject.doAs(Subject.javA)    
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java)
>     
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java)
>     
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java) 
>    
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java)   
>  
> at java.lang.Thread.run(Thread.java)Caused by: 
> org.apache.hadoop.security.token.SecretManager$InvalidToken: token expired or 
> does not exist: HIVE_DELEGATION_TOKEN owner=***, renewer=***, 
> realUser=*, issueDate=1705973286139, maxDate=1706578086139, 
> sequenceNumber=3294063, masterKeyId=7601    
> at 
> org.apache.hadoop.hive.metastore.security.TokenStoreDelegationTokenSecretManager.retrievePassword(TokenStoreDelegationTokenSecretManager.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.TokenStoreDelegationTokenSecretManager.retrievePassword(TokenStoreDelegationTokenSecretManager.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$SaslDigestCallbackHandler.getPassword(HadoopThriftAuthBridge.java)
>     
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$SaslDigestCallbackHandler.handle(HadoopThriftAuthBridge.java)
>     
> at 
> com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java)
>     ... 15 more {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)