[jira] [Comment Edited] (SPARK-19606) Support constraints in spark-dispatcher
[ https://issues.apache.org/jira/browse/SPARK-19606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207722#comment-16207722 ] Pascal GILLET edited comment on SPARK-19606 at 10/17/17 2:39 PM:

* _If "spark.mesos.constraints" is passed with the job then it will wind up overriding the value specified in the "driverDefault" property._: *False*. "spark.mesos.constraints" still applies to the executors only, while the "driverDefault" value will apply to the driver.
* _If "spark.mesos.constraints" is not passed with the job, then the value specified in the "driverDefault" property will get passed to the executors - which we definitely don't want._: *True*

OK then to add the "spark.mesos.constraints.driver" property.

> Support constraints in spark-dispatcher
> ---
>
> Key: SPARK-19606
> URL: https://issues.apache.org/jira/browse/SPARK-19606
> Project: Spark
> Issue Type: New Feature
> Components: Mesos
> Affects Versions: 2.1.0
> Reporter: Philipp Hoffmann
>
> The `spark.mesos.constraints` configuration is ignored by the
> spark-dispatcher. The constraints need to be passed in the Framework
> information when registering with Mesos.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
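Assuming the "spark.mesos.constraints.driver" property agreed on above is adopted alongside the existing "spark.mesos.constraints", a cluster-mode submission through the dispatcher could look like the following sketch. The dispatcher URL, attribute names ("role"), values, class name, and jar URL are all hypothetical placeholders:

```shell
# Hypothetical example: executors are constrained to Mesos agents
# advertising role:worker, while the driver (via the new property
# proposed in this thread) is constrained to agents with role:driver.
spark-submit \
  --master mesos://dispatcher-host:7077 \
  --deploy-mode cluster \
  --conf spark.mesos.constraints="role:worker" \
  --conf spark.mesos.constraints.driver="role:driver" \
  --class org.example.MyJob \
  http://example.com/my-job.jar
```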
[jira] [Comment Edited] (SPARK-19606) Support constraints in spark-dispatcher
[ https://issues.apache.org/jira/browse/SPARK-19606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076510#comment-16076510 ] Pascal GILLET edited comment on SPARK-19606 at 9/26/17 11:37 AM:

+1, but through 'spark.mesos.dispatcher.driverDefault.spark.mesos.constraints'!

I tested the patch and it works well! As stated originally, the 'spark.mesos.constraints' property is ignored by the Spark Dispatcher. As a consequence, the Mesos slave where the Spark driver is running does not comply with the given Mesos constraints; on the other hand, the Mesos constraints are correctly applied to the Spark executors (without the need to patch anything).

*BUT* we do not necessarily want to apply the same Mesos constraints to the driver and the executors. For instance, we may need to run Spark drivers and executors on two exclusive types of Mesos slaves:
- The dispatcher is given Mesos resources only for drivers
- Once a driver is launched, it becomes a Mesos framework itself and is responsible for reserving resources for its executors
- If we schedule too many jobs on a Mesos cluster through the dispatcher, the whole cluster is allocated to the drivers and no resources remain for the executors. A driver may be launched but then wait indefinitely for executor resources, which leads to congestion and then to a deadlock
- A way to work around this problem is to *not* mix drivers and executors on the same machines, by passing different Mesos constraints for the driver and for the executors

The 'spark.mesos.constraints' property still applies to executors. As for the drivers, the 'spark.mesos.dispatcher.driverDefault.[PropertyName]' generic property seems ideal. By definition, it allows one to "_set default properties for drivers submitted through the dispatcher_". Thus, I propose to revise this patch and to use 'spark.mesos.dispatcher.driverDefault.spark.mesos.constraints' instead of 'spark.mesos.constraints'.

What do you guys think?
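For reference, constraint values such as those discussed above follow the attribute:value format accepted by 'spark.mesos.constraints' (semicolon-separated attributes, each with an optional comma-separated value list; an empty value list means the attribute merely has to be present on the agent). A minimal sketch of parsing such a string, as an illustration of the format only, not Spark's actual parser:

```python
def parse_constraints(constraint_spec):
    """Parse a Mesos constraint string such as
    'zone:us-east-1a,us-east-1b;gpu:' into a dict mapping each
    attribute to its set of acceptable values. An empty set means
    the attribute only needs to be present on the Mesos agent."""
    constraints = {}
    if not constraint_spec:
        return constraints
    for pair in constraint_spec.split(";"):
        attr, _, values = pair.partition(":")
        # No value part -> presence-only constraint (empty set).
        constraints[attr] = set(values.split(",")) if values else set()
    return constraints
```

With this format, a driver-only constraint and an executor-only constraint are simply two different strings handed to two different properties, which is what makes the driver/executor separation described above possible.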
[jira] [Comment Edited] (SPARK-19606) Support constraints in spark-dispatcher
[ https://issues.apache.org/jira/browse/SPARK-19606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076510#comment-16076510 ] Pascal GILLET edited comment on SPARK-19606 at 7/6/17 2:13 PM:

+1. We need to run Spark drivers and executors on two exclusive types of Mesos slaves through a Mesos constraint. What about the 'spark.mesos.dispatcher.driverDefault.spark.mesos.constraints' property?
[jira] [Comment Edited] (SPARK-19606) Support constraints in spark-dispatcher
[ https://issues.apache.org/jira/browse/SPARK-19606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16031146#comment-16031146 ] Laurent Hoss edited comment on SPARK-19606 at 5/31/17 1:28 PM:

+1, hopefully this PR gets committer attention.