[ https://issues.apache.org/jira/browse/SPARK-24793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16665578#comment-16665578 ]

Stavros Kontopoulos edited comment on SPARK-24793 at 10/26/18 7:15 PM:
-----------------------------------------------------------------------

From a quick glance, you can't just use the k8s backend to check the status of 
the driver. Standalone and Mesos modes can support this because they use the 
REST client, which is a common API always available in Spark core. We can't 
add a k8s dependency by default at that point in the code. You would then 
either use reflection, if a k8s master is passed, to load a class from the 
backend side, or query the K8s API server by extending that REST client and 
mapping pod status to driver status to keep the UX the same.


was (Author: skonto):
From a quick glance, you can't just use the k8s backend to check the status of 
the driver. Standalone and Mesos modes can support this because they use the 
REST client, which is a common API always available in Spark core. We can't 
add a k8s dependency by default at that point in the code. You would then 
either use reflection or hit the API server with a REST API.

> Make spark-submit more useful with k8s
> --------------------------------------
>
>                 Key: SPARK-24793
>                 URL: https://issues.apache.org/jira/browse/SPARK-24793
>             Project: Spark
>          Issue Type: Improvement
>          Components: Kubernetes
>    Affects Versions: 2.3.0
>            Reporter: Anirudh Ramanathan
>            Assignee: Anirudh Ramanathan
>            Priority: Major
>
> Support controlling the lifecycle of a Spark application through spark-submit. 
> For example:
> {{
>   --kill app_name      If given, kills the driver specified.
>   --status app_name    If given, requests the status of the driver specified.
> }}
> Potentially also a --list flag to list all running Spark drivers.
> Given that our submission client can launch jobs into many different 
> namespaces, we'll also need a way to specify the namespace, potentially 
> through a --namespace flag.
> I think this is pretty useful to have instead of forcing a user to use 
> kubectl to manage the lifecycle of any k8s Spark application.
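> A hypothetical invocation of the proposed flags against a k8s cluster could 
> look like the following (the flag names come from the proposal above; the 
> master URL, namespace, and app name are placeholders):
> {{
>   spark-submit --master k8s://https://apiserver:6443 --namespace spark-jobs --status spark-pi
>   spark-submit --master k8s://https://apiserver:6443 --namespace spark-jobs --kill spark-pi
>   spark-submit --master k8s://https://apiserver:6443 --namespace spark-jobs --list
> }}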


