Hello Apache Spark community,

I'm currently trying to run Spark Connect Server on Kubernetes in Cluster
Mode and facing some challenges. Any guidance or hints would be greatly
appreciated.

## Environment:
Apache Spark version: 3.4.1
Kubernetes version: 1.23
Command executed:
 /opt/spark/sbin/start-connect-server.sh \
   --packages org.apache.spark:spark-connect_2.13:3.4.1,org.apache.iceberg:iceberg-spark-runtime-3.4_2.13:1.3.1...
Note that I'm running it with the environment variable SPARK_NO_DAEMONIZE=1.
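For reference, my current understanding is that the server has to be pointed at the Kubernetes master explicitly, roughly as in the sketch below; the API server URL, namespace, container image, and service account are placeholders I made up for illustration, not values from my actual setup:

  # Placeholder values only -- not my real cluster settings.
  /opt/spark/sbin/start-connect-server.sh \
    --master k8s://https://<kubernetes-api-server>:6443 \
    --packages org.apache.spark:spark-connect_2.13:3.4.1 \
    --conf spark.kubernetes.namespace=<namespace> \
    --conf spark.kubernetes.container.image=<spark-image> \
    --conf spark.kubernetes.authenticate.driver.serviceAccountName=<service-account>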

## Issue:
When I connect from an external Python client and run scripts, the server
executes them in Local Mode instead of the expected Cluster Mode.
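
For illustration, the client side looks roughly like the snippet below; the sc:// endpoint is a placeholder for my actual Spark Connect address, and, if I understand correctly, spark.conf.get("spark.master") is one way to inspect which master the server is actually using:

  # Connect to the Spark Connect Server from an external Python client.
  # "sc://spark-connect-host:15002" is a placeholder endpoint, not my real one.
  from pyspark.sql import SparkSession

  spark = SparkSession.builder.remote("sc://spark-connect-host:15002").getOrCreate()

  # Inspect the effective master; I would expect a k8s://... value here,
  # but the jobs behave as if the server is running in Local Mode.
  print(spark.conf.get("spark.master"))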

## Expected Behavior:
When connecting from a Python client to the Spark Connect Server, I expect
the submitted jobs to run in Cluster Mode on the Kubernetes cluster.

If anyone has insights or advice, or has faced a similar issue, I'd be
grateful for your feedback.
Thank you in advance.
