Github user echarles commented on the issue:

    https://github.com/apache/spark/pull/21748
  
    Thx @mccheah and @liyinan926. It now works with a headless service using `spark.driver.host=spark-driver-service` and `spark.kubernetes.driver.pod.name=spark-pod`.
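
    For context, a minimal client-mode session with those settings could look like the sketch below (the master URL, container image, and app name here are placeholders I picked, not values from this PR):

    ```
    import org.apache.spark.sql.SparkSession

    // Client-mode driver running inside a pod; the host/port values
    // must match the headless service spec shown further down.
    val spark = SparkSession.builder()
      .master("k8s://https://kubernetes.default.svc")            // in-cluster API server (placeholder)
      .appName("client-mode-repl")                               // placeholder app name
      .config("spark.kubernetes.container.image", "spark:latest") // placeholder image
      .config("spark.kubernetes.driver.pod.name", "spark-pod")   // pod running this driver
      .config("spark.driver.host", "spark-driver-service")       // headless service name
      .config("spark.driver.port", "7077")                       // must match the service
      .config("spark.blockManager.port", "10000")                // must match the service
      .getOrCreate()
    ```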
    
    Two more questions:
    
    1. I still have an issue with Out-Cluster: executors are also killed on start. Is there something I should configure to make it work?
    2. With the PR I had developed on the fork (a long time ago), I didn't have to create the headless service myself. I have looked at the differences between the fork and apache, but I don't see where this behavior comes from. In short, as a user, it would be better if I didn't have to create the headless service myself, especially in a notebook environment where a web server running in a pod launches multiple REPLs (in client mode): I see a practical issue in managing the assignment of the exposed ports, since you cannot assign a port that is already in use (see the sketch after this list). Thoughts?
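
    One way a notebook server could avoid those collisions (a hypothetical sketch, not something this PR provides) is to hand each REPL session its own driver and block-manager ports:

    ```
    // Hypothetical helper, not part of this PR: derive non-colliding
    // driver/blockManager ports for each client-mode REPL sharing a pod.
    def sessionPorts(sessionId: Int): Map[String, String] = {
      val driverPort = 7077 + 2 * sessionId   // 7077, 7079, ...
      val bmPort     = 10000 + 2 * sessionId  // 10000, 10002, ...
      Map(
        "spark.driver.port"       -> driverPort.toString,
        "spark.blockManager.port" -> bmPort.toString
      )
    }
    ```

    Each port pair would still have to be exposed on the headless service, which is exactly the management burden described above.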
    
    For info, here is the spec of the headless service that is needed to run In-Cluster.
    
    ```
    apiVersion: v1
    kind: Service
    metadata:
      name: spark-driver-service
    spec:
      clusterIP: None
      ports:
      - port: 7077
        name: spark-driver-port
      - port: 10000
        name: spark-driver-blockmanager-port
      selector:
        run: spark-pod
    ```
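
    Note that the `selector` matches pods labeled `run: spark-pod`, so the driver pod must carry that label for the service to resolve to it, and the two declared ports presumably correspond to `spark.driver.port` and `spark.blockManager.port` on the driver.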


