[ 
https://issues.apache.org/jira/browse/SPARK-23857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stavros Kontopoulos updated SPARK-23857:
----------------------------------------
    Description: 
Users may submit their jobs from a host external to the cluster, one that does 
not have the required keytab locally. Moreover, in cluster mode it makes little 
sense to reference a local resource unless it is uploaded or stored somewhere 
in the cluster. On YARN, HDFS is used for this; on Mesos, and certainly on 
DC/OS, the secret store is currently used for storing secrets and consequently 
keytabs. There is a check 
[here|https://github.com/apache/spark/blob/7cf9fab33457ccc9b2d548f15dd5700d5e8d08ef/core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala#L387]
 that makes spark-submit difficult to use in such deployment scenarios.

On DC/OS the workaround is to submit directly to the Mesos dispatcher REST API, 
passing the spark.yarn.keytab property pointing to a path within the driver's 
container where the keytab will be mounted after it is fetched from the secret 
store at container launch time. The goal is to make spark-submit flexible 
enough for Mesos in cluster mode, since DC/OS users often want to deploy this 
way.
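As a sketch of that workaround, the request body below shows a CreateSubmissionRequest as accepted by the dispatcher's REST submission endpoint; the dispatcher host, jar URL, main class, principal, and sandbox mount path are illustrative assumptions, not values from this issue:

```json
{
  "action": "CreateSubmissionRequest",
  "clientSparkVersion": "2.3.0",
  "appResource": "https://example.com/jars/my-app.jar",
  "mainClass": "com.example.MyApp",
  "appArgs": [],
  "environmentVariables": { "SPARK_ENV_LOADED": "1" },
  "sparkProperties": {
    "spark.app.name": "my-app",
    "spark.master": "mesos://dispatcher.example.com:7077",
    "spark.submit.deployMode": "cluster",
    "spark.yarn.keytab": "/mnt/mesos/sandbox/user.keytab",
    "spark.yarn.principal": "user@EXAMPLE.COM"
  }
}
```

POSTing this JSON to http://<dispatcher>:7077/v1/submissions/create sidesteps the local-file check in SparkSubmit, because the keytab path is only resolved inside the driver's container where the secret store has mounted it.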



> In Mesos cluster mode Spark submit requires the keytab to be available 
> on the local file system.
> -------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-23857
>                 URL: https://issues.apache.org/jira/browse/SPARK-23857
>             Project: Spark
>          Issue Type: Bug
>          Components: Mesos
>    Affects Versions: 2.3.0
>            Reporter: Stavros Kontopoulos
>            Priority: Minor
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
