GitHub user foxish commented on the issue:

    https://github.com/apache/spark/pull/19717
  
    @liyinan926:
    
    > Actually, given that in our implementation the executors talk to the driver through a Kubernetes headless service that is created only by the submission client, neither of the two cases above would work without careful and tricky configuration hacks.
    
    I think we should move the headless service creation into the backend code - anything essential for the backend to run shouldn't depend on the submission client/steps (rough sketch below). cc @mccheah for comment
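    
    To make that concrete, here is a rough, hypothetical sketch of the backend creating the headless service itself via the fabric8 client. The service name, label key, and port numbers are illustrative, not the ones the submission steps actually use:
    
    ```scala
    import io.fabric8.kubernetes.api.model.ServiceBuilder
    import io.fabric8.kubernetes.client.KubernetesClient
    
    // Hypothetical sketch: the scheduler backend creates the driver's
    // headless service on startup instead of relying on the submission
    // client to have done so. Label key and ports are example values.
    object DriverServiceSketch {
      def createDriverService(
          client: KubernetesClient,
          namespace: String,
          appId: String): Unit = {
        val service = new ServiceBuilder()
          .withNewMetadata()
            .withName(s"$appId-driver-svc")
            .withNamespace(namespace)
          .endMetadata()
          .withNewSpec()
            .withClusterIP("None") // headless: DNS resolves straight to the driver pod IP
            .addToSelector("spark-app-id", appId)
            .addNewPort().withName("driver-rpc").withPort(7078).endPort()
            .addNewPort().withName("blockmanager").withPort(7079).endPort()
          .endSpec()
          .build()
        client.services().inNamespace(namespace).create(service)
      }
    }
    ```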
    
    > So creating a SparkContext directly with a k8s URL should not be allowed until we find a way to make it work.
    
    Keeping that possible will let us plug in Jupyter/spark-shell (running in-cluster for now). Disabling it completely would create an unnecessary dependency on spark-submit, which IMO is undesirable. I think we do want people to be able to programmatically construct a `SparkContext` in their code that points at a k8s cluster. `kubernetes.default.svc` is the in-cluster, kube-dns-provisioned DNS address that should point to the API server; its availability is a good indicator that we're running somewhere that can address other pods, so it can be used to detect when we shouldn't let users try (and fail) client mode. A sketch of that check follows.
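    
    As a minimal, hypothetical sketch of the detection described above (the policy and error message are made up for illustration):
    
    ```scala
    import java.net.InetAddress
    
    import scala.util.Try
    
    object InClusterCheck {
      // kube-dns provisions this name inside every cluster; it points at the API server.
      val ApiServerDnsName = "kubernetes.default.svc"
    
      // If the name resolves, we are almost certainly running inside a pod
      // and can address other pods directly.
      def runningInCluster(): Boolean =
        Try(InetAddress.getByName(ApiServerDnsName)).isSuccess
    
      // Example policy: fail fast instead of letting a client-mode
      // SparkContext against a k8s master try and fail later.
      def validateClientMode(master: String): Unit = {
        if (master.startsWith("k8s://") && !runningInCluster()) {
          throw new IllegalStateException(
            s"$ApiServerDnsName does not resolve; client mode against $master " +
              "is currently only supported from inside the cluster.")
        }
      }
    }
    ```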
    


