Github user mgummelt commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14511#discussion_r73980410
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala ---
    @@ -358,14 +358,12 @@ private[spark] class MesosClusterScheduler(
             .orElse(desc.command.environment.get("SPARK_EXECUTOR_URI"))
       }
     
    -  private def getDriverEnvironment(desc: MesosDriverDescription): Environment = {
    -    val env = {
    -      val executorOpts = desc.conf.getAll.map { case (k, v) => s"-D$k=$v" }.mkString(" ")
    -      val executorEnv = Map("SPARK_EXECUTOR_OPTS" -> executorOpts)
    --- End diff ---
    
    Thanks.
    
    That is the one way to pass configs through. The only thing I would change in your comment is "to all executors": most Spark properties aren't set on executors. They're set on drivers, and they often influence how the drivers launch the executors.
    
    That single way is now in `generateCmdOption`, which translates `desc.conf` into `--conf` parameters for the driver. Removing this code removes a second, redundant way of setting the same configs.
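
    For illustration, here is a minimal sketch of that translation, assuming a simplified `Map[String, String]` input. The name `confToCmdOptions` is hypothetical; the real `generateCmdOption` works on a `MesosDriverDescription` and builds the full driver command.

    ```scala
    object ConfToFlagsSketch {
      // Turn each Spark property into a "--conf key=value" pair for the
      // driver's launch command (the single remaining passthrough path).
      def confToCmdOptions(conf: Map[String, String]): Seq[String] =
        conf.toSeq.flatMap { case (key, value) =>
          Seq("--conf", s"$key=$value")
        }

      def main(args: Array[String]): Unit = {
        // spark.executor.memory is read by the driver, which uses it when
        // launching executors: set on the driver, not on the executors.
        val conf = Map("spark.executor.memory" -> "2g", "spark.cores.max" -> "4")
        println(confToCmdOptions(conf).mkString(" "))
        // prints: --conf spark.executor.memory=2g --conf spark.cores.max=4
      }
    }
    ```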
    
    There is one other way users can pass configs through: the deprecated `SPARK_SUBMIT_OPTS` env var. That's forwarded just as all env vars are, which happens in this method (via `desc.command.environment`).
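
    A hedged sketch of that passthrough, assuming the Mesos protobuf bindings; `buildDriverEnvironment` is a hypothetical helper name, not the actual Spark method:

    ```scala
    import org.apache.mesos.Protos.Environment

    object DriverEnvSketch {
      // Copy every entry of the submitted command's environment into the
      // Mesos Environment proto. A deprecated variable such as
      // SPARK_SUBMIT_OPTS rides along here like any other env var.
      def buildDriverEnvironment(commandEnv: Map[String, String]): Environment = {
        val builder = Environment.newBuilder()
        commandEnv.foreach { case (name, value) =>
          builder.addVariables(
            Environment.Variable.newBuilder().setName(name).setValue(value))
        }
        builder.build()
      }
    }
    ```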
    
    


