[GitHub] [spark] liangyu-1 commented on pull request #42295: [SPARK-44581][YARN] Fix the bug that ShutdownHookManager gets wrong Hadoop user group information

2023-08-08 Thread via GitHub
liangyu-1 commented on PR #42295: URL: https://github.com/apache/spark/pull/42295#issuecomment-1669309268 The problem was that I had changed my GitHub Actions permissions; I have reset them and everything is OK now. Thanks a lot @yaooqinn -- This is an automated message from the Apache Git

[GitHub] [spark] liangyu-1 commented on pull request #42295: [SPARK-44581][YARN] Fix the bug that ShutdownHookManager gets wrong Hadoop user group information

2023-08-08 Thread via GitHub
liangyu-1 commented on PR #42295: URL: https://github.com/apache/spark/pull/42295#issuecomment-1669267715 Hi @yaooqinn, I encountered this problem and have no idea what to do next: Ref: SPARK-44581 SHA: e0a6db4d5d04c1a43e91e13c263be47b65441a9b TypeError: Cannot read

[GitHub] [spark] liangyu-1 commented on pull request #42295: [SPARK-44581][YARN] Fix the bug that ShutdownHookManager gets wrong Hadoop user group information

2023-08-08 Thread via GitHub
liangyu-1 commented on PR #42295: URL: https://github.com/apache/spark/pull/42295#issuecomment-1669082795 Hi @yaooqinn, I have updated my CI; please help me check whether it is OK now.

[GitHub] [spark] liangyu-1 commented on pull request #42295: [SPARK-44581][YARN] Fix the bug that ShutdownHookManager gets wrong Hadoop user group information

2023-08-07 Thread via GitHub
liangyu-1 commented on PR #42295: URL: https://github.com/apache/spark/pull/42295#issuecomment-1668845332 I moved the ApplicationMaster instantiation and assignment inside the doAs block, rebuilt the project, and tested it on my cluster; the shutdown hook thread now has the correct

[GitHub] [spark] liangyu-1 commented on pull request #42295: [SPARK-44581][YARN] Fix the bug that ShutdownHookManager gets wrong Hadoop user group information

2023-08-07 Thread via GitHub
liangyu-1 commented on PR #42295: URL: https://github.com/apache/spark/pull/42295#issuecomment-1667554705 The root cause is that the shutdown hook thread is created before we create the UGI in ApplicationMaster. When we set the config key _"hadoop.security.credential.provider.path"_, the
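The ordering problem described above (a hook thread created before the job's user context is established will never see that context) can be illustrated with a self-contained sketch. This is not Spark's or Hadoop's actual code: the `InheritableThreadLocal` stands in for Hadoop's per-thread UserGroupInformation, and the class and method names are invented for the example.

```java
import java.util.concurrent.atomic.AtomicReference;

public class ShutdownHookOrdering {
    // Stand-in for Hadoop's per-thread UserGroupInformation (illustrative only).
    static final InheritableThreadLocal<String> CURRENT_USER =
        new InheritableThreadLocal<>() {
            @Override protected String initialValue() { return "yarn"; } // default login user
        };

    /** Build a hook-like thread that records the user it observes when run. */
    static Thread hookThread(AtomicReference<String> seen) {
        return new Thread(() -> seen.set(CURRENT_USER.get()));
    }

    /** Returns {user seen by the early hook, user seen by the late hook}. */
    static String[] demo() throws InterruptedException {
        AtomicReference<String> earlySeen = new AtomicReference<>();
        AtomicReference<String> lateSeen = new AtomicReference<>();

        // Hook created BEFORE the job user is established: it has already
        // inherited the default user, mirroring the reported bug.
        Thread early = hookThread(earlySeen);

        CURRENT_USER.set("spark-job-user"); // analogous to entering ugi.doAs { ... }

        // Hook created INSIDE the doAs-equivalent scope sees the right user,
        // which is the shape of the fix: create the hook after the UGI exists.
        Thread late = hookThread(lateSeen);

        early.start(); late.start();
        early.join(); late.join();
        return new String[] { earlySeen.get(), lateSeen.get() };
    }

    public static void main(String[] args) throws InterruptedException {
        String[] r = demo();
        System.out.println("early hook saw: " + r[0]); // yarn (wrong user)
        System.out.println("late hook saw:  " + r[1]); // spark-job-user
    }
}
```

Because an `InheritableThreadLocal` is copied when a child thread is constructed, the thread's creation point decides which user it captures, which is why moving the instantiation inside the doAs block changes the outcome.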

[GitHub] [spark] liangyu-1 commented on pull request #42295: [SPARK-44581][YARN] Fix the bug that ShutdownHookManager gets wrong Hadoop user group information

2023-08-04 Thread via GitHub
liangyu-1 commented on PR #42295: URL: https://github.com/apache/spark/pull/42295#issuecomment-1665080555 > I see where the hook is added. > > But I have checked some of our online apps on versions 3.1 and 3.3, and the staging directories are deleted successfully. Is it a 3.2-specific issue? I

[GitHub] [spark] liangyu-1 commented on pull request #42295: [SPARK-44581][YARN] Fix the bug that ShutdownHookManager gets wrong Hadoop user group information

2023-08-03 Thread via GitHub
liangyu-1 commented on PR #42295: URL: https://github.com/apache/spark/pull/42295#issuecomment-1664932235 > The staging directory is cleaned automatically by Spark, why do you even need this hook? @yaooqinn Spark cleans the staging directory in this hook; in Spark 2.4
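For context on what "cleans the staging directory in this hook" means, here is a simplified, self-contained sketch of the pattern: register a JVM shutdown hook that best-effort deletes a staging directory. Spark's real cleanup goes through its ShutdownHookManager and the Hadoop FileSystem API; the class and method names below (`StagingCleanup`, `registerCleanupHook`) are invented for the illustration.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class StagingCleanup {
    /** Recursively delete a directory tree, ignoring entries already gone. */
    static void deleteRecursively(Path dir) throws IOException {
        if (!Files.exists(dir)) return;
        try (Stream<Path> walk = Files.walk(dir)) {
            // reverseOrder() yields children before their parents, and sorting
            // drains the stream, so deletion never races the traversal.
            walk.sorted(Comparator.reverseOrder()).forEach(p -> {
                try { Files.deleteIfExists(p); }
                catch (IOException e) { throw new UncheckedIOException(e); }
            });
        }
    }

    /** Register cleanup of a staging dir to run at JVM shutdown. */
    static Thread registerCleanupHook(Path stagingDir) {
        Thread hook = new Thread(() -> {
            try { deleteRecursively(stagingDir); }
            catch (IOException | UncheckedIOException e) {
                // Best effort: during shutdown there is nowhere useful to fail.
            }
        });
        Runtime.getRuntime().addShutdownHook(hook);
        return hook;
    }
}
```

The bug under discussion is orthogonal to the deletion logic itself: if the hook thread captures the wrong user context, the delete runs as the wrong user and fails with a permissions error even though the code is correct.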

[GitHub] [spark] liangyu-1 commented on pull request #42295: [SPARK-44581][YARN] Fix the bug that ShutdownHookManager gets wrong Hadoop user group information

2023-08-03 Thread via GitHub
liangyu-1 commented on PR #42295: URL: https://github.com/apache/spark/pull/42295#issuecomment-1663602009 cc @mridulm @tgravescs @HeartSaVioR