[jira] [Commented] (SPARK-12836) spark enable both driver run executor & write to HDFS
[ https://issues.apache.org/jira/browse/SPARK-12836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197120#comment-15197120 ]

Lior Chaga commented on SPARK-12836:

I used the --no-switch_user Mesos config, and it worked. Writing to Hadoop was done as HADOOP_USER_NAME, while the Spark executors ran with the mesos-slave user's permissions.

> spark enable both driver run executor & write to HDFS
> -----------------------------------------------------
>
>                 Key: SPARK-12836
>                 URL: https://issues.apache.org/jira/browse/SPARK-12836
>             Project: Spark
>          Issue Type: Bug
>      Components: Mesos, Scheduler, Spark Core
>    Affects Versions: 1.6.0
>        Environment: HADOOP_USER_NAME=qhstats
>                     SPARK_USER=root
>           Reporter: astralidea
>             Labels: features
>
> When Spark has the env var HADOOP_USER_NAME set, CoarseMesosSchedulerBackend
> sets the Spark user from that env var. But in my cluster Spark must run as
> root, while writing to HDFS requires setting HADOOP_USER_NAME. A configuration
> is needed to run executors as root and write to HDFS as another user.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
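The workaround in the comment above can be sketched roughly as follows. This is an illustrative sketch only: the ZooKeeper address and the application name are made up, and the `qhstats` user is taken from the issue's Environment field. The `--no-switch_user` flag is a Mesos agent option that makes executors run as the user owning the agent process (root in the reporter's setup) instead of switching to the framework user, while `HADOOP_USER_NAME` controls which identity HDFS writes are attributed to under Hadoop simple authentication.

```shell
# On each Mesos agent: disable user switching so executors run as the
# OS user that owns the mesos-slave process (root, in the reporter's setup).
mesos-slave --master=zk://zk1:2181/mesos --no-switch_user

# At submit time: set HADOOP_USER_NAME so HDFS writes are attributed to a
# different user (qhstats here) than the executor's OS user. The
# spark.executorEnv.* conf propagates the variable to executor processes.
export HADOOP_USER_NAME=qhstats
spark-submit \
  --master mesos://zk://zk1:2181/mesos \
  --conf spark.executorEnv.HADOOP_USER_NAME=qhstats \
  my_job.py
```

This separates the two identities the issue asks about: the OS user the executor runs as is decided by the Mesos agent, and the HDFS user is decided by the Hadoop client library from the environment variable.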
[jira] [Commented] (SPARK-12836) spark enable both driver run executor & write to HDFS
[ https://issues.apache.org/jira/browse/SPARK-12836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15101630#comment-15101630 ]

Apache Spark commented on SPARK-12836:

User 'Astralidea' has created a pull request for this issue:
https://github.com/apache/spark/pull/10770