[ https://issues.apache.org/jira/browse/SPARK-27872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16851026#comment-16851026 ]
Erik Erlandson commented on SPARK-27872:
----------------------------------------

[~skonto], executors were never given a service account (other than "default"), mostly on the principle of least permissions; however, I see no problem with giving them the same service account as the driver if it is required for some purpose. Definitely feel free to submit a PR for review.

> Driver and executors use a different service account breaking pull secrets
> ---------------------------------------------------------------------------
>
>                 Key: SPARK-27872
>                 URL: https://issues.apache.org/jira/browse/SPARK-27872
>             Project: Spark
>          Issue Type: Bug
>          Components: Kubernetes
>    Affects Versions: 3.0.0, 2.4.3
>            Reporter: Stavros Kontopoulos
>            Priority: Major
>
> The driver and executors use different service accounts when the driver has one configured that is not "default":
> [https://gist.github.com/skonto/9beb5afa2ec4659ba563cbb0a8b9c4dd]
> This makes the executor pods fail when the user links the driver's service account to an image pull secret:
> [https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account]
> Executors do not use the driver's service account, so they cannot obtain the secret needed to pull the image.
> I am not sure what the assumption behind using the default account for executors is; probably that this account is intentionally limited (executors don't create resources). The inconsistency could be worked around with the pod template feature in Spark 3.0.0, but it breaks pull secrets, and in general I think it is a bug.
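
A minimal sketch of the setup described above; the service-account name "spark-sa", the secret name "regcred", and the angle-bracket placeholders are illustrative, not taken from the report:

{code:bash}
# Illustrative names only: "spark-sa" and "regcred" are not from the report.

# The image pull secret is attached to the driver's service account:
kubectl create secret docker-registry regcred \
  --docker-server=<registry> --docker-username=<user> --docker-password=<password>
kubectl patch serviceaccount spark-sa \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'

# The driver is told to use that service account:
spark-submit \
  --master k8s://https://<api-server> \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.kubernetes.container.image=<private-registry>/spark:2.4.3 \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark-sa \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.4.3.jar

# Executor pods, however, are created under the namespace's "default" service
# account, which has no imagePullSecrets, so their pulls from the private
# registry fail (ImagePullBackOff).
{code}

The pod template workaround mentioned in the description might look roughly like the following; this is a rough, unverified sketch that assumes Spark 3.0.0 preserves a serviceAccountName set in the executor pod template (plausible, since per this report Spark does not set one for executors itself):

{code:bash}
# Rough workaround sketch using the Spark 3.0.0 pod template feature;
# assumes the executor template's serviceAccountName is not overridden.
cat > executor-template.yaml <<'EOF'
apiVersion: v1
kind: Pod
spec:
  serviceAccountName: spark-sa
EOF

spark-submit \
  ... \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark-sa \
  --conf spark.kubernetes.executor.podTemplateFile=executor-template.yaml \
  ...
{code}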