> On 3 Aug 2015, at 10:05, MrJew <kouz...@gmail.com> wrote:
> 
> Hello,
> Similar to other cluster systems e.g Zookeeper,


Actually, Zookeeper supports SASL authentication of your Kerberos tokens. 

https://cwiki.apache.org/confluence/display/ZOOKEEPER/Zookeeper+and+SASL
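
(For context, a minimal sketch of the server-side setup, with the paths and principal name as placeholders: you enable the SASL auth provider and point ZooKeeper at a JAAS config holding its Kerberos keytab.)

    # zoo.cfg
    authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider

    # jaas.conf, passed via -Djava.security.auth.login.config=/path/to/jaas.conf
    Server {
      com.sun.security.auth.module.Krb5LoginModule required
      useKeyTab=true
      keyTab="/etc/zookeeper/conf/zookeeper.keytab"
      storeKey=true
      principal="zookeeper/host.example.com@EXAMPLE.COM";
    };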

> Hazelcast. Spark has the
> problem that, while it is protected from the outside world, anyone with
> access to the host can run a Spark node without any authentication.
> We are currently using Spark 1.3.1. Is there a way to enable authentication
> so that only users who have the secret can run a node? The current solution
> involves configuring the job via an environment variable, but anyone running
> the 'ps' command can see it.
> 
> Regards,
> George

This is where YARN and its Kerberos support have the edge over standalone: set 
up Kerberos properly in your Hadoop cluster and you get HDFS locked down, your 
Spark applications running as a different user from other applications, and 
web access managed via the RM proxy. There's a terrifying amount of complexity 
going on to achieve that.
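
(As a sketch of what submission looks like against a kerberized YARN cluster on 
1.3.x; the class and jar names are placeholders. You authenticate to Kerberos 
first, and YARN runs the application as that user:)

    kinit alice@EXAMPLE.COM
    spark-submit --master yarn-cluster \
      --class com.example.MyJob \
      myjob.jar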

If you want to lock down a standalone cluster, then you'll have to isolate the 
cluster and rely on SSH tunnelling to let only your trusted users in. Some 
organisations do that for their Hadoop clusters anyway.
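
(A sketch of the tunnelling approach, with hypothetical hostnames; the 
standalone master web UI listens on port 8080 by default:)

    # forward the master web UI to the local machine through a gateway host
    ssh -N -L 8080:spark-master.internal:8080 user@gateway.example.com

    # or open a SOCKS proxy and point the browser at it for all the cluster UIs
    ssh -N -D 1080 user@gateway.example.com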



(ASF-sponsored advert: I am giving a talk, "Hadoop and Kerberos: the Madness 
Beyond the Gate", at ApacheCon Big Data EU: 
https://apachebigdata2015.sched.org/event/a10da43d16686f049ee6e25640ee3e8b )

