[ https://issues.apache.org/jira/browse/SPARK-3542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135689#comment-14135689 ]
Patrick Wendell commented on SPARK-3542:
----------------------------------------

Hey James. Are you using the standalone scheduler, Mesos, or YARN? In YARN mode, users shouldn't set spark.authenticate.secret manually; the cluster does it for you. In the other modes, the original design was that you'd set this on the driver and the workers separately, but we don't have good documentation for this and it isn't well tested. For instance, if we see spark.authenticate.secret set on the driver, we should not transfer it to the executor in the same way as other Spark options. Let me know what you are doing, and I can update the JIRA a bit. I'm guessing the key need here is just a well-documented way of distributing the secret to the standalone/Mesos master and worker.

> Akka protocol authentication in plaintext
> -----------------------------------------
>
>                 Key: SPARK-3542
>                 URL: https://issues.apache.org/jira/browse/SPARK-3542
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.1.0
>            Reporter: James Livingston
>
> It is already noted in the SecurityManager API docs, but when using the Akka
> communication protocol, SSL is not currently supported and credentials can be
> (and often are) passed in plaintext.
>
> Using one of the examples, you can add the following and see "password" sent
> in plaintext via the akka.tcp protocol:
>
>   conf.set("spark.authenticate", "true")
>   conf.set("spark.authenticate.secret", "password")
>
> It's obviously known, but worth having a JIRA to track.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
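The distribution approach Patrick describes for standalone/Mesos amounts to putting the same secret in the configuration on every node. A minimal sketch, assuming spark-defaults.conf is copied to the driver and each worker; note that with the Akka transport the secret still crosses the wire in plaintext, which is exactly the bug reported here:

```
# conf/spark-defaults.conf -- same file on the driver and every worker.
# Illustrative sketch only: until SSL is supported for the Akka transport,
# the secret is still visible in plaintext on akka.tcp connections.
spark.authenticate        true
spark.authenticate.secret password
```

Equivalently, the two conf.set(...) calls from the description set the same properties programmatically on the driver; the open question in this JIRA is how to get the matching value onto the standalone/Mesos master and workers without documentation gaps.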