bajiaolong commented on issue #15637: URL: https://github.com/apache/dolphinscheduler/issues/15637#issuecomment-1965745050
> Does this happen in the spark task? Isn't the spark task executed by the `hadoop` user? I just tested the sub_process task, and it works well.

I also ran into this problem. What the reporter means is that whenever a workflow is run, the tenant defaults to `default`:

1. There is only one tenant in my tenant management: `appuser`.
2. Every time I start a workflow, I have to set the tenant manually, changing the preselected `default` to the `appuser` tenant configured in my system.
