[ 
https://issues.apache.org/jira/browse/SPARK-33485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Jiao updated SPARK-33485:
------------------------------
    Description: 
My Spark application, which accesses kerberized HDFS, runs in a Kubernetes 
cluster, but the application log shows "Setting 
spark.hadoop.yarn.resourcemanager.principal to xxx" (where xxx is my 
Kerberos principal):

... 

+ SPARK_CLASSPATH='/opt/hadoop/conf::/opt/spark/jars/*'
+ case "$1" in
+ shift 1
+ CMD=("$SPARK_HOME/bin/spark-submit" --conf 
"spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client "$@")
+ exec /usr/bin/tini -s -- /opt/spark/bin/spark-submit --conf 
spark.driver.bindAddress=10.244.1.67 --deploy-mode client --properties-file 
/opt/spark/conf/spark.properties --class WordCount 
local:///opt/spark/jars/WordCount-1.0-SNAPSHOT.jar
Setting spark.hadoop.yarn.resourcemanager.principal to joan

... 
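For context, the attached job is a simple WordCount. A minimal sketch of
such a job reading from HDFS is below; the input path is hypothetical and
stands in for whatever the attached project actually reads:

import org.apache.spark.sql.SparkSession

object WordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("WordCount").getOrCreate()
    val counts = spark.sparkContext
      .textFile("hdfs:///user/joan/input.txt") // hypothetical kerberized-HDFS path
      .flatMap(_.split("\\s+"))                // split lines into words
      .map(word => (word, 1))
      .reduceByKey(_ + _)                      // count occurrences per word
    counts.collect().foreach { case (word, n) => println(s"$word: $n") }
    spark.stop()
  }
}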

 

I don't understand why YARN authentication is needed here. Can anyone help? Thanks!
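(For what it's worth, this message appears to come from spark-submit itself
rather than from any actual YARN authentication: when Kerberos is enabled
under a non-YARN cluster manager such as Kubernetes, Spark points
yarn.resourcemanager.principal at the current user so that Hadoop's
delegation-token code, which assumes a YARN ResourceManager renewer, keeps
working. The following is a paraphrased sketch of that logic as of Spark
3.0, not verbatim Spark source:)

import org.apache.hadoop.security.UserGroupInformation
import org.apache.hadoop.yarn.conf.YarnConfiguration
import org.apache.spark.SparkConf

object RMPrincipalTrick {
  // Outside YARN, Hadoop's delegation-token fetching still expects a YARN
  // ResourceManager renewer, so Spark sets the RM principal to the current
  // user. The println below is the line that shows up in the driver log.
  def setRMPrincipal(sparkConf: SparkConf): Unit = {
    val shortUserName = UserGroupInformation.getCurrentUser.getShortUserName
    val key = s"spark.hadoop.${YarnConfiguration.RM_PRINCIPAL}"
    println(s"Setting $key to $shortUserName")
    sparkConf.set(key, shortUserName)
  }
}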

The log and my Spark project are attached below for reference.

 

  was:
My Spark application, which accesses kerberized HDFS, runs in a Kubernetes 
cluster, but the application log shows "Setting 
spark.hadoop.yarn.resourcemanager.principal to xxx" (where xxx is my 
Kerberos principal). I don't understand why YARN authentication is needed 
here. Can anyone help? Thanks!

The log and my Spark project are attached below for reference.


> Running a Spark application in Kubernetes, but the application log shows YARN 
> authentication 
> --------------------------------------------------------------------------------------------
>
>                 Key: SPARK-33485
>                 URL: https://issues.apache.org/jira/browse/SPARK-33485
>             Project: Spark
>          Issue Type: Bug
>          Components: Kubernetes
>    Affects Versions: 3.0.0
>            Reporter: Yuan Jiao
>            Priority: Major
>         Attachments: application.log, project.rar
>
>
> My Spark application, which accesses kerberized HDFS, runs in a Kubernetes 
> cluster, but the application log shows "Setting 
> spark.hadoop.yarn.resourcemanager.principal to xxx" (where xxx is my 
> Kerberos principal):
> ... 
> + SPARK_CLASSPATH='/opt/hadoop/conf::/opt/spark/jars/*'
> + case "$1" in
> + shift 1
> + CMD=("$SPARK_HOME/bin/spark-submit" --conf 
> "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client 
> "$@")
> + exec /usr/bin/tini -s -- /opt/spark/bin/spark-submit --conf 
> spark.driver.bindAddress=10.244.1.67 --deploy-mode client --properties-file 
> /opt/spark/conf/spark.properties --class WordCount 
> local:///opt/spark/jars/WordCount-1.0-SNAPSHOT.jar
> Setting spark.hadoop.yarn.resourcemanager.principal to joan
> ... 
>  
> I don't understand why YARN authentication is needed here. Can anyone help? Thanks!
> The log and my Spark project are attached below for reference.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
