[ https://issues.apache.org/jira/browse/SPARK-41073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17632209#comment-17632209 ]

zhengchenyu commented on SPARK-41073:
-------------------------------------

We must add valid credentials to the JobConf.

For now, I think SQL running on the Thrift Server or in local mode can't get 
valid credentials.

I have two proposals:

Proposal A: set global credentials for Hadoop.

Proposal B: extract HadoopDelegationTokenManager from 
CoarseGrainedSchedulerBackend. (Note: I think local-mode Spark also wants 
global credentials.)

I prefer B.

But A is simpler: I have submitted SPARK-41073.proposal.A.draft.001.patch. It 
solves the problem, but not gracefully.
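As a rough illustration of what proposal A could mean, here is a minimal sketch (the object and method names below are hypothetical, not taken from the attached patch): hold the delegation tokens obtained once by a token manager in a single shared Credentials object, and merge them into each JobConf before a HadoopRDD uses it, so partitions reuse tokens instead of fetching new ones.

```scala
import org.apache.hadoop.mapred.JobConf
import org.apache.hadoop.security.{Credentials, UserGroupInformation}

// Hypothetical sketch of "global credentials for Hadoop" (proposal A).
// A token manager would call update() once per renewal; job setup code
// would call applyTo() on every JobConf before handing it to a HadoopRDD.
object GlobalCredentials {
  // Shared holder for tokens obtained by the token manager.
  @volatile private var current: Credentials = new Credentials()

  def update(creds: Credentials): Unit = { current = creds }

  // Merge the shared tokens into a job's configuration; mergeAll keeps
  // any tokens already present in the JobConf rather than overwriting.
  def applyTo(jobConf: JobConf): Unit = {
    jobConf.getCredentials.mergeAll(current)
    // Also expose them via the current UGI, for code paths that read
    // tokens from UserGroupInformation rather than from the JobConf.
    UserGroupInformation.getCurrentUser.addCredentials(current)
  }
}
```

This is only a sketch of the idea; the real fix would have to decide where `update()` is driven from (which is exactly what proposal B addresses by extracting HadoopDelegationTokenManager).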

[~vanzin] [~xkrogen] Can you give me some suggestions? 

> Spark ThriftServer generates huge numbers of DelegationTokens
> -------------------------------------------------------------
>
>                 Key: SPARK-41073
>                 URL: https://issues.apache.org/jira/browse/SPARK-41073
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.0.1
>            Reporter: zhengchenyu
>            Priority: Major
>         Attachments: SPARK-41073.proposal.A.draft.001.patch
>
>
> In our cluster, ZooKeeper nearly crashed. I found that the znodes under 
> /zkdtsm/ZKDTSMRoot/ZKDTSMTokensRoot increased quickly. 
> After some research, I found that some SQL queries running on the Spark 
> Thrift Server obtain huge numbers of DelegationTokens.
> The reason is that in these Spark SQL jobs, every Hive partition acquires a 
> different delegation token. 
> HadoopRDDs in the Thrift Server can't share the credentials from 
> CoarseGrainedSchedulerBackend::delegationTokens; we must share them.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
