Github user jerryshao commented on the issue:

    https://github.com/apache/spark/pull/19121
  
    No, I don't agree with you.
    
    SPARK_USER is set by SparkContext from the driver's current UGI, and this env variable is propagated to the executors so that each executor creates its UGI with the same user as the driver.
    
    For example, if your standalone cluster is started as user "spark" and you submit a Spark application as user "foo" from the gateway, then all the executors should access Hadoop as "foo". But with your changes, the executors will communicate with Hadoop as "spark", since the executor process is forked as user "spark" by the worker. That is not correct.
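    The resolution order being argued for here can be sketched as follows. This is a hypothetical stdlib-only illustration, not Spark's actual implementation (Spark uses Hadoop's `UserGroupInformation` API); the helper name `effectiveUser` is invented for this example:

```java
public class SparkUserResolution {
    // Hypothetical sketch: the executor should prefer the submitting
    // user's identity (SPARK_USER, propagated from the driver) and only
    // fall back to the process owner when SPARK_USER is absent.
    static String effectiveUser(String sparkUserEnv, String processOwner) {
        return (sparkUserEnv != null && !sparkUserEnv.isEmpty())
                ? sparkUserEnv
                : processOwner;
    }

    public static void main(String[] args) {
        // Scenario from above: cluster started as "spark", app submitted as "foo".
        System.out.println(effectiveUser("foo", "spark")); // submitting user wins
        System.out.println(effectiveUser(null, "spark"));  // fallback to process owner
    }
}
```

    With the proposed change, the fallback (process owner "spark") would be used even when SPARK_USER is set, which is the behavior being objected to.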

