[ 
https://issues.apache.org/jira/browse/SPARK-18278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15747151#comment-15747151
 ] 

Shuai Lin commented on SPARK-18278:
-----------------------------------


bq. If I had to choose between maintaining a fork versus cleaning up the 
scheduler to make a public API, I would choose the latter in the interest of 
clarifying the relationship between the K8s effort and the mainline project, as 
well as for making the scheduler code cleaner in general. 

Adding support for a pluggable scheduler backend in Spark is cool. AFAIK there 
are already several custom scheduler backends for Spark, and they have to live in 
forked versions of Spark due to the lack of pluggable scheduler backend support:

- [Two Sigma's Spark fork|https://github.com/twosigma/spark], which adds 
scheduler support for their [Cook scheduler|https://github.com/twosigma/Cook]
- IBM also has a custom "Spark Session Scheduler", [which they shared at last 
month's MesosCon 
Asia|https://mesosconasia2016.sched.com/event/8Tut/spark-session-scheduler-the-key-to-guaranteed-sla-of-spark-applications-for-multiple-users-on-mesos-yong-feng-ibm-canada-ltd]
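For context, one way a pluggable backend could work is to resolve the scheduler 
implementation from the master-URL prefix. The sketch below is purely 
hypothetical (the {{ClusterManager}} interface and {{SchedulerRegistry}} names 
are made up for illustration, not Spark's actual API); a real implementation 
could discover managers via java.util.ServiceLoader instead of a static list:

```java
import java.util.List;
import java.util.Optional;

// Hypothetical SPI: each backend declares which master URLs it can handle.
interface ClusterManager {
    boolean canCreate(String masterUrl);
    String name();
}

class SchedulerRegistry {
    // Statically registered here to keep the sketch self-contained; a real
    // build could load implementations from the classpath via ServiceLoader.
    private static final List<ClusterManager> MANAGERS = List.of(
        new ClusterManager() {  // e.g. a Cook-style backend
            public boolean canCreate(String masterUrl) { return masterUrl.startsWith("cook://"); }
            public String name() { return "cook"; }
        },
        new ClusterManager() {  // e.g. a Kubernetes backend
            public boolean canCreate(String masterUrl) { return masterUrl.startsWith("k8s://"); }
            public String name() { return "kubernetes"; }
        }
    );

    // Return the first manager that accepts the given master URL, if any.
    static Optional<ClusterManager> resolve(String masterUrl) {
        return MANAGERS.stream().filter(m -> m.canCreate(masterUrl)).findFirst();
    }
}
```

With such a hook, a fork would only need to ship its own {{ClusterManager}} 
implementation on the classpath instead of patching the scheduler internals.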

bq. we could include the K8s scheduler in the Apache releases as an 
experimental feature, ignore its bugs and test failures for the next few 
releases (that is, problems in the K8s-related code should never block releases)

I'm afraid that doesn't sound like good practice.


> Support native submission of spark jobs to a kubernetes cluster
> ---------------------------------------------------------------
>
>                 Key: SPARK-18278
>                 URL: https://issues.apache.org/jira/browse/SPARK-18278
>             Project: Spark
>          Issue Type: Umbrella
>          Components: Build, Deploy, Documentation, Scheduler, Spark Core
>            Reporter: Erik Erlandson
>         Attachments: SPARK-18278 - Spark on Kubernetes Design Proposal.pdf
>
>
> A new Apache Spark sub-project that enables native support for submitting 
> Spark applications to a Kubernetes cluster. The submitted application runs 
> in a driver executing on a Kubernetes pod, and executor lifecycles are also 
> managed as pods.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
