[ https://issues.apache.org/jira/browse/SPARK-24429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Wenchen Fan resolved SPARK-24429.
---------------------------------
    Resolution: Not A Problem

> Add support for spark.driver.extraJavaOptions in cluster mode for Spark on K8s
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-24429
>                 URL: https://issues.apache.org/jira/browse/SPARK-24429
>             Project: Spark
>          Issue Type: Improvement
>          Components: Kubernetes
>    Affects Versions: 2.4.0
>            Reporter: Stavros Kontopoulos
>            Priority: Major
>
> Right now in cluster mode only the extraJavaOptions targeting the executors are set.
> According to the implementation and the docs:
> "In client mode, this config must not be set through the {{SparkConf}} directly in your application, because the driver JVM has already started at that point. Instead, please set this through the {{--driver-java-options}} command line option or in your default properties file."
> A typical driver launch in cluster mode eventually uses client mode to run spark-submit, and looks like:
> "/usr/lib/jvm/java-1.8-openjdk/bin/java -cp /opt/spark/conf/:/opt/spark/jars/* -Xmx1g org.apache.spark.deploy.SparkSubmit --deploy-mode client --conf spark.driver.bindAddress=9.0.7.116 --properties-file /opt/spark/conf/spark.properties --class org.apache.spark.examples.SparkPi spark-internal 10000"
> In addition, entrypoint.sh does not manage the driver's java opts at all.
> We propose to: set an env var that passes the extra java opts to the driver (as is already done for the executor), rename the env vars in the container (the one for the executor is a bit misleading), and use --driver-java-options to pass the required options.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
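For context, the proposal in the description could be sketched roughly as below. This is a hypothetical illustration, not the actual entrypoint.sh change: the env var name SPARK_DRIVER_JAVA_OPTS is an assumption (mirroring the executor-side variable), and the spark-submit arguments are taken from the example launch above. The idea is simply to append --driver-java-options only when the driver-side env var is set.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the proposed entrypoint.sh logic: forward a
# driver-side env var into --driver-java-options for the client-mode
# spark-submit. SPARK_DRIVER_JAVA_OPTS is an assumed name, not Spark's.
build_submit_cmd() {
  local cmd=(/opt/spark/bin/spark-submit --deploy-mode client)
  # Append the option only when the driver opts env var is non-empty.
  if [ -n "${SPARK_DRIVER_JAVA_OPTS:-}" ]; then
    cmd+=(--driver-java-options "$SPARK_DRIVER_JAVA_OPTS")
  fi
  cmd+=(--class org.apache.spark.examples.SparkPi spark-internal 10000)
  # Print one argument per line so callers can inspect the command.
  printf '%s\n' "${cmd[@]}"
}

# Example: pass a GC flag to the driver JVM.
SPARK_DRIVER_JAVA_OPTS="-XX:+PrintGCDetails" build_submit_cmd
```

Building the command as a bash array keeps options containing spaces (several -D flags in one variable, for example) as a single argument to --driver-java-options.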