[jira] [Updated] (SPARK-47934) Inefficient Redirect Handling Due to Missing Trailing Slashes in URL Redirection

2024-04-22 Thread huangzhir (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-47934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangzhir updated SPARK-47934:
--
Attachment: image-2024-04-22-15-14-13-468.png

> Inefficient Redirect Handling Due to Missing Trailing Slashes in URL 
> Redirection
> 
>
> Key: SPARK-47934
> URL: https://issues.apache.org/jira/browse/SPARK-47934
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 3.2.4, 3.3.2, 3.5.1, 3.4.3
>Reporter: huangzhir
>Priority: Trivial
> Attachments: image-2024-04-22-15-14-13-468.png
>
>
> *Summary:*
> The current implementation of URL redirection in Spark's history web UI does 
> not consistently add trailing slashes to URLs when constructing redirection 
> targets. This inconsistency leads to additional HTTP redirects by Jetty, 
> which increases the load time and reduces the efficiency of the Spark UI.
> *Problem Description:*
> When constructing redirect URLs, particularly in scenarios where an attempt 
> ID needs to be appended, the system does not ensure that the base URL ends 
> with a slash. This omission results in the generated URL being redirected by 
> Jetty to add a trailing slash, thus causing an unnecessary additional HTTP 
> redirect.
> For example, when the `shouldAppendAttemptId` flag is true, the URL is formed 
> without a trailing slash before the attempt ID is appended, leading to two 
> redirects: one by our logic to add the attempt ID, and another by Jetty to 
> correct the missing slash. 
> !image-2024-04-22-15-06-29-357.png!
> *Proposed Solution:*
> [https://github.com/apache/spark/blob/2d0b56c3eac611e743c41d16ea8e439bc8a504e4/core/src/main/scala/org/apache/spark/deploy/history/HistoryServer.scala#L118]
> Ensure that all redirect URLs uniformly end with a trailing slash regardless 
> of whether an attempt ID is appended. This can be achieved by modifying the 
> URL construction logic as follows:
> ```scala
> val redirect = if (shouldAppendAttemptId) {
>   req.getRequestURI.stripSuffix("/") + "/" + attemptId.get + "/"
> } else {
>   req.getRequestURI.stripSuffix("/") + "/"
> }
> ```
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-47934) Inefficient Redirect Handling Due to Missing Trailing Slashes in URL Redirection

2024-04-22 Thread huangzhir (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-47934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangzhir updated SPARK-47934:
--
Description: 
*Summary:*
The current implementation of URL redirection in Spark's history web UI does 
not consistently add trailing slashes to URLs when constructing redirection 
targets. This inconsistency leads to additional HTTP redirects by Jetty, which 
increases the load time and reduces the efficiency of the Spark UI.

*Problem Description:*
When constructing redirect URLs, particularly in scenarios where an attempt ID 
needs to be appended, the system does not ensure that the base URL ends with a 
slash. This omission results in the generated URL being redirected by Jetty to 
add a trailing slash, thus causing an unnecessary additional HTTP redirect.

For example, when the `shouldAppendAttemptId` flag is true, the URL is formed 
without a trailing slash before the attempt ID is appended, leading to two 
redirects: one by our logic to add the attempt ID, and another by Jetty to 
correct the missing slash. 

!image-2024-04-22-15-14-13-468.png!

*Proposed Solution:*

[https://github.com/apache/spark/blob/2d0b56c3eac611e743c41d16ea8e439bc8a504e4/core/src/main/scala/org/apache/spark/deploy/history/HistoryServer.scala#L118]

Ensure that all redirect URLs uniformly end with a trailing slash regardless of 
whether an attempt ID is appended. This can be achieved by modifying the URL 
construction logic as follows:

```scala
val redirect = if (shouldAppendAttemptId) {
  req.getRequestURI.stripSuffix("/") + "/" + attemptId.get + "/"
} else {
  req.getRequestURI.stripSuffix("/") + "/"
}
```
 

  was:
*Summary:*
The current implementation of URL redirection in Spark's history web UI does 
not consistently add trailing slashes to URLs when constructing redirection 
targets. This inconsistency leads to additional HTTP redirects by Jetty, which 
increases the load time and reduces the efficiency of the Spark UI.

*Problem Description:*
When constructing redirect URLs, particularly in scenarios where an attempt ID 
needs to be appended, the system does not ensure that the base URL ends with a 
slash. This omission results in the generated URL being redirected by Jetty to 
add a trailing slash, thus causing an unnecessary additional HTTP redirect.

For example, when the `shouldAppendAttemptId` flag is true, the URL is formed 
without a trailing slash before the attempt ID is appended, leading to two 
redirects: one by our logic to add the attempt ID, and another by Jetty to 
correct the missing slash. 

!image-2024-04-22-15-14-13-468.png!

*Proposed Solution:*

[https://github.com/apache/spark/blob/2d0b56c3eac611e743c41d16ea8e439bc8a504e4/core/src/main/scala/org/apache/spark/deploy/history/HistoryServer.scala#L118]

Ensure that all redirect URLs uniformly end with a trailing slash regardless of 
whether an attempt ID is appended. This can be achieved by modifying the URL 
construction logic as follows:

```scala
val redirect = if (shouldAppendAttemptId) {
  req.getRequestURI.stripSuffix("/") + "/" + attemptId.get + "/"
} else {
  req.getRequestURI.stripSuffix("/") + "/"
}
```
 


> Inefficient Redirect Handling Due to Missing Trailing Slashes in URL 
> Redirection
> 
>
> Key: SPARK-47934
> URL: https://issues.apache.org/jira/browse/SPARK-47934
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 3.2.4, 3.3.2, 3.5.1, 3.4.3
>Reporter: huangzhir
>Priority: Trivial
> Attachments: image-2024-04-22-15-14-13-468.png
>
>
> *Summary:*
> The current implementation of URL redirection in Spark's history web UI does 
> not consistently add trailing slashes to URLs when constructing redirection 
> targets. This inconsistency leads to additional HTTP redirects by Jetty, 
> which increases the load time and reduces the efficiency of the Spark UI.
> *Problem Description:*
> When constructing redirect URLs, particularly in scenarios where an attempt 
> ID needs to be appended, the system does not ensure that the base URL ends 
> with a slash. This omission results in the generated URL being redirected by 
> Jetty to add a trailing slash, thus causing an unnecessary additional HTTP 
> redirect.
> For example, when the `shouldAppendAttemptId` flag is true, the URL is formed 
> without a trailing slash before the attempt ID is appended, leading to two 
> redirects: one by our logic to add the attempt ID, and another by Jetty to 
> correct the missing slash. 
> !image-2024-04-22-15-14-13-468.png!
> *Proposed Solution:*
> [https://github.com/apache/spark/blob/2d0b56c3eac611e743c41d16ea8e439bc8a504e4/core/src/main/scala/org/apache/spark/deploy/history/HistoryServer.scala#L118]
> Ensure that all redirect URLs uniformly end with a trailing slash regardless 
> of whether an attempt ID is appended.

[jira] [Updated] (SPARK-47934) Inefficient Redirect Handling Due to Missing Trailing Slashes in URL Redirection

2024-04-22 Thread huangzhir (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-47934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangzhir updated SPARK-47934:
--
Description: 
*Summary:*
The current implementation of URL redirection in Spark's history web UI does 
not consistently add trailing slashes to URLs when constructing redirection 
targets. This inconsistency leads to additional HTTP redirects by Jetty, which 
increases the load time and reduces the efficiency of the Spark UI.

*Problem Description:*
When constructing redirect URLs, particularly in scenarios where an attempt ID 
needs to be appended, the system does not ensure that the base URL ends with a 
slash. This omission results in the generated URL being redirected by Jetty to 
add a trailing slash, thus causing an unnecessary additional HTTP redirect.

For example, when the `shouldAppendAttemptId` flag is true, the URL is formed 
without a trailing slash before the attempt ID is appended, leading to two 
redirects: one by our logic to add the attempt ID, and another by Jetty to 
correct the missing slash. 

!image-2024-04-22-15-14-13-468.png!

*Proposed Solution:*

[https://github.com/apache/spark/blob/2d0b56c3eac611e743c41d16ea8e439bc8a504e4/core/src/main/scala/org/apache/spark/deploy/history/HistoryServer.scala#L118]

Ensure that all redirect URLs uniformly end with a trailing slash regardless of 
whether an attempt ID is appended. This can be achieved by modifying the URL 
construction logic as follows:

```scala
val redirect = if (shouldAppendAttemptId) {
  req.getRequestURI.stripSuffix("/") + "/" + attemptId.get + "/"
} else {
  req.getRequestURI.stripSuffix("/") + "/"
}
```
 

  was:
*Summary:*
The current implementation of URL redirection in Spark's history web UI does 
not consistently add trailing slashes to URLs when constructing redirection 
targets. This inconsistency leads to additional HTTP redirects by Jetty, which 
increases the load time and reduces the efficiency of the Spark UI.

*Problem Description:*
When constructing redirect URLs, particularly in scenarios where an attempt ID 
needs to be appended, the system does not ensure that the base URL ends with a 
slash. This omission results in the generated URL being redirected by Jetty to 
add a trailing slash, thus causing an unnecessary additional HTTP redirect.

For example, when the `shouldAppendAttemptId` flag is true, the URL is formed 
without a trailing slash before the attempt ID is appended, leading to two 
redirects: one by our logic to add the attempt ID, and another by Jetty to 
correct the missing slash. 

!image-2024-04-22-15-06-29-357.png!

*Proposed Solution:*

[https://github.com/apache/spark/blob/2d0b56c3eac611e743c41d16ea8e439bc8a504e4/core/src/main/scala/org/apache/spark/deploy/history/HistoryServer.scala#L118]

Ensure that all redirect URLs uniformly end with a trailing slash regardless of 
whether an attempt ID is appended. This can be achieved by modifying the URL 
construction logic as follows:

```scala
val redirect = if (shouldAppendAttemptId) {
  req.getRequestURI.stripSuffix("/") + "/" + attemptId.get + "/"
} else {
  req.getRequestURI.stripSuffix("/") + "/"
}
```
 


> Inefficient Redirect Handling Due to Missing Trailing Slashes in URL 
> Redirection
> 
>
> Key: SPARK-47934
> URL: https://issues.apache.org/jira/browse/SPARK-47934
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 3.2.4, 3.3.2, 3.5.1, 3.4.3
>Reporter: huangzhir
>Priority: Trivial
> Attachments: image-2024-04-22-15-14-13-468.png
>
>
> *Summary:*
> The current implementation of URL redirection in Spark's history web UI does 
> not consistently add trailing slashes to URLs when constructing redirection 
> targets. This inconsistency leads to additional HTTP redirects by Jetty, 
> which increases the load time and reduces the efficiency of the Spark UI.
> *Problem Description:*
> When constructing redirect URLs, particularly in scenarios where an attempt 
> ID needs to be appended, the system does not ensure that the base URL ends 
> with a slash. This omission results in the generated URL being redirected by 
> Jetty to add a trailing slash, thus causing an unnecessary additional HTTP 
> redirect.
> For example, when the `shouldAppendAttemptId` flag is true, the URL is formed 
> without a trailing slash before the attempt ID is appended, leading to two 
> redirects: one by our logic to add the attempt ID, and another by Jetty to 
> correct the missing slash. 
> !image-2024-04-22-15-14-13-468.png!
> *Proposed Solution:*
> 

[jira] [Created] (SPARK-47934) Inefficient Redirect Handling Due to Missing Trailing Slashes in URL Redirection

2024-04-22 Thread huangzhir (Jira)
huangzhir created SPARK-47934:
-

 Summary: Inefficient Redirect Handling Due to Missing Trailing 
Slashes in URL Redirection
 Key: SPARK-47934
 URL: https://issues.apache.org/jira/browse/SPARK-47934
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 3.4.3, 3.5.1, 3.3.2, 3.2.4
Reporter: huangzhir


*Summary:*
The current implementation of URL redirection in Spark's history web UI does 
not consistently add trailing slashes to URLs when constructing redirection 
targets. This inconsistency leads to additional HTTP redirects by Jetty, which 
increases the load time and reduces the efficiency of the Spark UI.

*Problem Description:*
When constructing redirect URLs, particularly in scenarios where an attempt ID 
needs to be appended, the system does not ensure that the base URL ends with a 
slash. This omission results in the generated URL being redirected by Jetty to 
add a trailing slash, thus causing an unnecessary additional HTTP redirect.

For example, when the `shouldAppendAttemptId` flag is true, the URL is formed 
without a trailing slash before the attempt ID is appended, leading to two 
redirects: one by our logic to add the attempt ID, and another by Jetty to 
correct the missing slash. 

!image-2024-04-22-15-06-29-357.png!

*Proposed Solution:*

[https://github.com/apache/spark/blob/2d0b56c3eac611e743c41d16ea8e439bc8a504e4/core/src/main/scala/org/apache/spark/deploy/history/HistoryServer.scala#L118]

Ensure that all redirect URLs uniformly end with a trailing slash regardless of 
whether an attempt ID is appended. This can be achieved by modifying the URL 
construction logic as follows:

```scala
val redirect = if (shouldAppendAttemptId) {
  req.getRequestURI.stripSuffix("/") + "/" + attemptId.get + "/"
} else {
  req.getRequestURI.stripSuffix("/") + "/"
}
```
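As a side note beyond the report itself, the proposed construction can be exercised in isolation. The sketch below is hypothetical (the object name and example paths are illustrative, not Spark code); it shows that the resulting URL always ends in a slash, so Jetty has no trailing-slash redirect left to perform:

```scala
// Sketch (not part of the Jira report): the stripSuffix-based
// construction yields a single, slash-terminated redirect target
// whether or not the incoming URI already ends in "/".
object RedirectSketch {
  def target(requestURI: String, attemptId: Option[String]): String = {
    val base = requestURI.stripSuffix("/")
    attemptId match {
      case Some(id) => base + "/" + id + "/" // attempt ID appended
      case None     => base + "/"            // no attempt ID
    }
  }

  def main(args: Array[String]): Unit = {
    println(target("/history/app-1", Some("1")))  // /history/app-1/1/
    println(target("/history/app-1/", None))      // /history/app-1/
  }
}
```

Because `stripSuffix("/")` is a no-op when the suffix is absent, both the slash-terminated and bare forms of the request URI normalize to the same target.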
 






[jira] [Commented] (SPARK-37787) Long running Spark Job(Spark ThriftServer) throw HDFS_DELEGATE_TOKEN not found in cache Exception

2022-04-16 Thread huangzhir (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-37787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17523265#comment-17523265
 ] 

huangzhir commented on SPARK-37787:
---

Another Jira issue is open for a similar problem:

https://issues.apache.org/jira/browse/SPARK-26385

> Long running Spark Job(Spark ThriftServer) throw HDFS_DELEGATE_TOKEN not 
> found in cache Exception
> -
>
> Key: SPARK-37787
> URL: https://issues.apache.org/jira/browse/SPARK-37787
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.0.0, 3.1.0, 3.2.0
> Environment: spark3 thrift server
>  
> spark-default.conf
> spark.hadoop.fs.hdfs.impl.disable.cache=true
>  
>Reporter: huangzhir
>Priority: Major
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> A *HDFS_DELEGATE_TOKEN not found in cache exception* occurs when accessing 
> the Spark ThriftServer service. The specific exception is as follows:
> [Exception Log | 
> https://raw.githubusercontent.com/huangzhir/Temp/main/image-3.png]
> !https://raw.githubusercontent.com/huangzhir/Temp/main/image-3.png!
> The HadoopDelegationTokenManager throws an exception when renewing the 
> DelegationToken, as follows:
>  
> We also found the following HadoopDelegationTokenManager log:
> INFO [Credential Renewal Thread] 
> org.apache.spark.deploy.security.HadoopDelegationTokenManager logInfo - 
> *Scheduling renewal in 1921535501304.2 h.*
> [HDFS exception log in HadoopDelegationTokenManager 
> |https://raw.githubusercontent.com/huangzhir/Temp/main/spark%20thriftserver%20Exceptin.png]
> !https://raw.githubusercontent.com/huangzhir/Temp/main/spark%20thriftserver%20Exceptin.png!






[jira] [Comment Edited] (SPARK-37787) Long running Spark Job(Spark ThriftServer) throw HDFS_DELEGATE_TOKEN not found in cache Exception

2022-01-03 Thread huangzhir (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-37787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17468329#comment-17468329
 ] 

huangzhir edited comment on SPARK-37787 at 1/4/22, 2:54 AM:


This is because HadoopFSDelegationTokenProvider.obtainDelegationTokens returns None when an 
exception is encountered, and a HadoopDelegationTokenProvider whose tokens cannot be renewed 
also returns None. If obtainDelegationTokens returns None for every 
HadoopDelegationTokenProvider, then HadoopDelegationTokenManager's obtainDelegationTokens 
calls foldLeft(Long.MaxValue)(math.min) and returns Long.MaxValue. That is why the log shows 
"Scheduling renewal in 1921535501304.2 h."
At that point the HadoopFSDelegationToken cannot be renewed in time, which results in the 
"HDFS_DELEGATE_TOKEN not found in cache" exception being thrown.
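The fold described in this comment can be sketched in isolation. This is a hypothetical illustration, not Spark's actual implementation; the object name and the 0.75 renewal-ratio figure are assumptions made for the example:

```scala
// Sketch (not Spark code): when every provider returns None for its
// next-renewal time, folding the empty sequence with math.min leaves
// the initial accumulator Long.MaxValue, which the manager then
// schedules as an absurdly distant renewal.
object RenewalFoldSketch {
  def nextRenewal(times: Seq[Option[Long]]): Long =
    times.flatten.foldLeft(Long.MaxValue)(math.min)

  def main(args: Array[String]): Unit = {
    println(nextRenewal(Seq(None, None)))          // 9223372036854775807
    println(nextRenewal(Seq(Some(60000L), None)))  // 60000
    // Long.MaxValue milliseconds is roughly 2.56e12 hours; scaled by
    // an assumed renewal ratio of 0.75, that is about 1.9e12 hours,
    // in line with the "1921535501304.2 h" seen in the log above.
  }
}
```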
 

 
 


was (Author: JIRAUSER282843):
This is because HadoopFSDelegationTokenProvider.obtainDelegationTokens returns None when an 
exception is encountered, and a HadoopDelegationTokenProvider whose tokens cannot be renewed 
also returns None. If obtainDelegationTokens returns None for every 
HadoopDelegationTokenProvider, then HadoopDelegationTokenManager's obtainDelegationTokens 
calls foldLeft(Long.MaxValue)(math.min) and returns Long.MaxValue. That is why the log shows 
"Scheduling renewal in 1921535501304.2 h."
 
 

> Long running Spark Job(Spark ThriftServer) throw HDFS_DELEGATE_TOKEN not 
> found in cache Exception
> -
>
> Key: SPARK-37787
> URL: https://issues.apache.org/jira/browse/SPARK-37787
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.0.0, 3.1.0, 3.2.0
> Environment: spark3 thrift server
>  
> spark-default.conf
> spark.hadoop.fs.hdfs.impl.disable.cache=true
>  
>Reporter: huangzhir
>Priority: Major
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> A *HDFS_DELEGATE_TOKEN not found in cache exception* occurs when accessing 
> the Spark ThriftServer service. The specific exception is as follows:
> [Exception Log | 
> https://raw.githubusercontent.com/huangzhir/Temp/main/image-3.png]
> !https://raw.githubusercontent.com/huangzhir/Temp/main/image-3.png!
> The HadoopDelegationTokenManager throws an exception when renewing the 
> DelegationToken, as follows:
>  
> We also found the following HadoopDelegationTokenManager log:
> INFO [Credential Renewal Thread] 
> org.apache.spark.deploy.security.HadoopDelegationTokenManager logInfo - 
> *Scheduling renewal in 1921535501304.2 h.*
> [HDFS exception log in HadoopDelegationTokenManager 
> |https://raw.githubusercontent.com/huangzhir/Temp/main/spark%20thriftserver%20Exceptin.png]
> !https://raw.githubusercontent.com/huangzhir/Temp/main/spark%20thriftserver%20Exceptin.png!






[jira] [Commented] (SPARK-37787) Long running Spark Job(Spark ThriftServer) throw HDFS_DELEGATE_TOKEN not found in cache Exception

2022-01-03 Thread huangzhir (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-37787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17468329#comment-17468329
 ] 

huangzhir commented on SPARK-37787:
---

This is because HadoopFSDelegationTokenProvider.obtainDelegationTokens returns None when an 
exception is encountered, and a HadoopDelegationTokenProvider whose tokens cannot be renewed 
also returns None. If obtainDelegationTokens returns None for every 
HadoopDelegationTokenProvider, then HadoopDelegationTokenManager's obtainDelegationTokens 
calls foldLeft(Long.MaxValue)(math.min) and returns Long.MaxValue. That is why the log shows 
"Scheduling renewal in 1921535501304.2 h."
 
 

> Long running Spark Job(Spark ThriftServer) throw HDFS_DELEGATE_TOKEN not 
> found in cache Exception
> -
>
> Key: SPARK-37787
> URL: https://issues.apache.org/jira/browse/SPARK-37787
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.0.0, 3.1.0, 3.2.0
> Environment: spark3 thrift server
>  
> spark-default.conf
> spark.hadoop.fs.hdfs.impl.disable.cache=true
>  
>Reporter: huangzhir
>Priority: Major
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> A *HDFS_DELEGATE_TOKEN not found in cache exception* occurs when accessing 
> the Spark ThriftServer service. The specific exception is as follows:
> [Exception Log | 
> https://raw.githubusercontent.com/huangzhir/Temp/main/image-3.png]
> !https://raw.githubusercontent.com/huangzhir/Temp/main/image-3.png!
> The HadoopDelegationTokenManager throws an exception when renewing the 
> DelegationToken, as follows:
>  
> We also found the following HadoopDelegationTokenManager log:
> INFO [Credential Renewal Thread] 
> org.apache.spark.deploy.security.HadoopDelegationTokenManager logInfo - 
> *Scheduling renewal in 1921535501304.2 h.*
> [HDFS exception log in HadoopDelegationTokenManager 
> |https://raw.githubusercontent.com/huangzhir/Temp/main/spark%20thriftserver%20Exceptin.png]
> !https://raw.githubusercontent.com/huangzhir/Temp/main/spark%20thriftserver%20Exceptin.png!






[jira] [Updated] (SPARK-37787) Long running Spark Job(Spark ThriftServer) throw HDFS_DELEGATE_TOKEN not found in cache Exception

2021-12-30 Thread huangzhir (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-37787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangzhir updated SPARK-37787:
--
Remaining Estimate: 168h  (was: 24h)
 Original Estimate: 168h  (was: 24h)

> Long running Spark Job(Spark ThriftServer) throw HDFS_DELEGATE_TOKEN not 
> found in cache Exception
> -
>
> Key: SPARK-37787
> URL: https://issues.apache.org/jira/browse/SPARK-37787
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.0.0, 3.1.0, 3.2.0
> Environment: spark3 thrift server
>  
> spark-default.conf
> spark.hadoop.fs.hdfs.impl.disable.cache=true
>  
>Reporter: huangzhir
>Priority: Major
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> A *HDFS_DELEGATE_TOKEN not found in cache exception* occurs when accessing 
> the Spark ThriftServer service. The specific exception is as follows:
> [Exception Log | 
> https://raw.githubusercontent.com/huangzhir/Temp/main/image-3.png]
> !https://raw.githubusercontent.com/huangzhir/Temp/main/image-3.png!
> The HadoopDelegationTokenManager throws an exception when renewing the 
> DelegationToken, as follows:
>  
> We also found the following HadoopDelegationTokenManager log:
> INFO [Credential Renewal Thread] 
> org.apache.spark.deploy.security.HadoopDelegationTokenManager logInfo - 
> *Scheduling renewal in 1921535501304.2 h.*
> [HDFS exception log in HadoopDelegationTokenManager 
> |https://raw.githubusercontent.com/huangzhir/Temp/main/spark%20thriftserver%20Exceptin.png]
> !https://raw.githubusercontent.com/huangzhir/Temp/main/spark%20thriftserver%20Exceptin.png!






[jira] [Created] (SPARK-37787) Long running Spark Job(Spark ThriftServer) throw HDFS_DELEGATE_TOKEN not found in cache Exception

2021-12-30 Thread huangzhir (Jira)
huangzhir created SPARK-37787:
-

 Summary: Long running Spark Job(Spark ThriftServer) throw 
HDFS_DELEGATE_TOKEN not found in cache Exception
 Key: SPARK-37787
 URL: https://issues.apache.org/jira/browse/SPARK-37787
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 3.2.0, 3.1.0, 3.0.0
 Environment: spark3 thrift server
 
spark-default.conf
spark.hadoop.fs.hdfs.impl.disable.cache=true
 
Reporter: huangzhir


A *HDFS_DELEGATE_TOKEN not found in cache exception* occurs when accessing the Spark 
ThriftServer service. The specific exception is as follows:

[Exception Log | 
https://raw.githubusercontent.com/huangzhir/Temp/main/image-3.png]

!https://raw.githubusercontent.com/huangzhir/Temp/main/image-3.png!

The HadoopDelegationTokenManager throws an exception when renewing the DelegationToken, as 
follows:

We also found the following HadoopDelegationTokenManager log:
INFO [Credential Renewal Thread] 
org.apache.spark.deploy.security.HadoopDelegationTokenManager logInfo - 
*Scheduling renewal in 1921535501304.2 h.*

[HDFS exception log in HadoopDelegationTokenManager 
|https://raw.githubusercontent.com/huangzhir/Temp/main/spark%20thriftserver%20Exceptin.png]

!https://raw.githubusercontent.com/huangzhir/Temp/main/spark%20thriftserver%20Exceptin.png!


