[ https://issues.apache.org/jira/browse/SPARK-6017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337855#comment-14337855 ]
Reynold Xin commented on SPARK-6017:
------------------------------------

Seems reasonable. There is no need to implement a whole new RPC layer: the
network module already implements a lower-level RPC interface. Once SPARK-5124
is done, we can create an alternative implementation based on the network
module's. I created SPARK-6028 to track it.

> Provide transparent secure communication channel on Yarn
> --------------------------------------------------------
>
>                 Key: SPARK-6017
>                 URL: https://issues.apache.org/jira/browse/SPARK-6017
>             Project: Spark
>          Issue Type: Umbrella
>          Components: YARN
>            Reporter: Marcelo Vanzin
>         Attachments: secure_spark_on_yarn.pdf
>
> A quick description:
>
> Currently the driver and executors communicate over an insecure channel, so
> anyone listening on the network can see what's going on. That prevents Spark
> from adding some features securely (e.g. SPARK-5342, SPARK-5682) without
> resorting to internal Hadoop APIs.
>
> Spark 1.3.0 will add SSL support, but properly configuring SSL is not a
> trivial task for operators, let alone users.
>
> In light of this, we should add a more transparent secure transport layer.
> I've written a short spec identifying the areas in Spark that need work to
> achieve this, and I'll attach the document to this issue shortly.
>
> Note that I'm restricting this to Yarn for now because, as far as I know,
> it's the only cluster manager that provides the security features needed to
> bootstrap the secure Spark transport. The design itself doesn't really rely
> on Yarn per se, just on a secure way to distribute the initial secret (which
> the Yarn/HDFS combo provides).

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
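The attached spec is not reproduced here, but the bootstrap idea the issue rests on — both endpoints derive trust from an initial shared secret distributed out of band (here, by the Yarn/HDFS combo) — can be illustrated with a minimal HMAC challenge-response sketch. This is not Spark's actual protocol or API; all names below are hypothetical, and the assumption is simply that driver and executor already hold the same secret:

```python
import hashlib
import hmac
import os


def make_challenge() -> bytes:
    """Server side: generate a fresh random nonce for each handshake."""
    return os.urandom(16)


def respond(secret: bytes, challenge: bytes) -> bytes:
    """Client side: prove possession of the secret without sending it."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()


def verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
    """Server side: recompute the expected response and compare in
    constant time, so a listener who saw the challenge and response
    still learns nothing about the secret."""
    expected = respond(secret, challenge)
    return hmac.compare_digest(expected, response)


# Demo: the secret stands in for the one Yarn would distribute securely.
secret = os.urandom(32)
challenge = make_challenge()
resp = respond(secret, challenge)

assert verify(secret, challenge, resp)          # legitimate executor passes
assert not verify(os.urandom(32), challenge, resp)  # wrong secret fails
```

A real transport would follow a successful handshake by deriving session keys for encryption, which is what makes the channel opaque to network listeners; the sketch only covers the authentication step that the shared secret bootstraps.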