Github user lucashu1 commented on the issue:

    https://github.com/apache/spark/pull/21092
  
    Sorry in advance if this is the wrong place to be asking this! 
    
    Does this PR mean that we'll be able to create SparkContexts using PySpark's [`SparkSession.Builder`](https://spark.apache.org/docs/preview/api/python/pyspark.sql.html#pyspark.sql.SparkSession.Builder) with `master` set to `k8s://<...>:<...>`, and have the resulting jobs run on spark-on-k8s instead of on local/standalone?
    
    E.g.:
    ```
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master('k8s://https://kubernetes:443').getOrCreate()
    ```
    
    I'm trying to use PySpark in a Jupyter notebook running inside a Kubernetes pod, and have it use spark-on-k8s instead of resorting to `local[*]` as `master`.
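    
    For context, if Python support does land here, I'd expect to point the driver at the cluster with something like the following (just a sketch — the property names are taken from the Spark 2.3 Kubernetes docs, and the image and service-account values are placeholders of mine):
    
    ```
    # Hypothetical spark-defaults.conf for a PySpark driver running inside a k8s pod.
    # Property names per the Spark 2.3 Kubernetes docs; values are placeholders.
    spark.master                                             k8s://https://kubernetes:443
    spark.kubernetes.namespace                               default
    spark.kubernetes.container.image                         my-registry/spark-py:latest
    spark.kubernetes.authenticate.driver.serviceAccountName  spark
    spark.executor.instances                                 2
    ```
    
    The same properties could presumably be passed via `SparkSession.builder.config(...)` instead of a conf file.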
    
    So far, whenever I try to use `k8s://<...>` as `master`, I get the following error:
    
    > Error: Python applications are currently not supported for Kubernetes.
    
    Thanks!

