Hadoop's default configuration aims for user friendliness to increase 
adoption, and security features are meant to be enabled one by one.  This 
approach is particularly problematic for security because the system can be 
compromised before all of the security features are turned on.
Larry's proposal will add some safety by reminding the system admin when 
security is disabled.  However, reducing the number of security configuration 
knobs is likely required for the banner idea to work without writing too much 
guessing logic to determine whether the UI is secured.  Penetration testing 
can provide better insight into what hasn't been secured and help improve the 
next release.  Thankfully, most Hadoop vendors have done this work 
periodically to help the community secure Hadoop.
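
To give a sense of the guessing logic involved, here is a minimal sketch of 
such a check (the class is hypothetical and only looks at a few well-known 
knobs; a real check would need to cover many more):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class InsecureUiCheck {
  // Heuristic only: a cluster can still be wide open even if all three
  // checks below pass.
  public static boolean looksInsecure(Configuration conf) {
    // Authentication mode "simple" means no Kerberos.
    boolean simpleAuth = !UserGroupInformation.isSecurityEnabled();
    // Service-level authorization off means RPC ACLs are not enforced.
    boolean noAuthz = !conf.getBoolean("hadoop.security.authorization", false);
    // Web UIs served over plain HTTP.
    boolean noHttps = "HTTP_ONLY".equalsIgnoreCase(
        conf.get("dfs.http.policy", "HTTP_ONLY"));
    return simpleAuth || noAuthz || noHttps;
  }

  public static void main(String[] args) {
    if (looksInsecure(new Configuration())) {
      System.err.println("WARNING: UNSECURED UI ACCESS - OPEN TO COMPROMISE");
    }
  }
}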

Plenty of companies advertise that if you want security, you should use 
Kerberos.  That statement is not entirely true.  Kerberos makes the system 
harder for external parties to crack, but it shouldn't be the only method 
used to secure Hadoop.  When the Kerberos realm is larger than the Hadoop 
cluster, anyone within that realm can access the Hadoop cluster freely and 
without restriction.  In large-scale enterprises, or for cloud vendors that 
sublet their resources, this might not be acceptable.
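
Even with Kerberos turned on, the service-level ACLs shipped in 
hadoop-policy.xml default to "*", so any authenticated principal in the realm 
may connect.  A small illustration (the class name is made up; the property 
and its default are the standard ones):

import org.apache.hadoop.conf.Configuration;

public class RealmWideAccessExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.addResource("hadoop-policy.xml");
    // Default is "*": every principal in the realm, not just cluster users.
    String clientAcl = conf.get("security.client.protocol.acl", "*");
    System.out.println("HDFS client protocol ACL: " + clientAcl);
    // Tightening it means listing users/groups explicitly,
    // e.g. "hdfs hadoopadmins" (illustrative values).
  }
}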
 
From my point of view, a secure Hadoop release must default all settings to 
localhost only and allow users to add more hosts through an authorized 
whitelist of servers.  This will keep the security perimeter in check.  All 
wildcard ACLs will need to be removed or default to the current user/current 
host only.  The proxy user/host ACL list must be enforced on HTTP channels.  
This is basically realigning the default configuration to a single-node 
cluster or a firewalled configuration.
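
As a rough sketch of what those defaults could look like using existing 
properties (the property values below are illustrative, not a worked-out 
proposal, and the class is hypothetical):

import org.apache.hadoop.conf.Configuration;

public class LockedDownDefaults {
  public static Configuration apply(Configuration conf) {
    // Bind services to loopback; adding real hostnames becomes an explicit
    // act by the admin.
    conf.set("fs.defaultFS", "hdfs://localhost:8020");
    conf.set("dfs.namenode.http-address", "localhost:9870");
    // No wildcard service ACLs: enforce authorization and restrict the client
    // protocol to the current user by default.
    conf.setBoolean("hadoop.security.authorization", true);
    conf.set("security.client.protocol.acl", System.getProperty("user.name"));
    // Proxy user impersonation limited to an explicit host/group whitelist
    // instead of "*", with the same list enforced on HTTP channels.
    conf.set("hadoop.proxyuser.oozie.hosts", "localhost");
    conf.set("hadoop.proxyuser.oozie.groups", "etl-admins");
    return conf;
  }
}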

Regards,
Eric

On 7/5/18, 8:24 AM, "larry mccay" <larry.mc...@gmail.com> wrote:

    Hi Steve -
    
    This is a long overdue DISCUSS thread!
    
    Perhaps the UIs can very visibly state (in red) "WARNING: UNSECURED UI
    ACCESS - OPEN TO COMPROMISE" - maybe even force a click through the warning
    to get to the page like SSL exceptions in the browser do?
    Similar tactic for UI access without SSL?
    A new AuthenticationFilter can be added to the filter chains that blocks
    API calls unless explicitly configured to be open and obviously log a similar
    message?
    
    thanks,
    
    --larry
    
    
    
    
    On Wed, Jul 4, 2018 at 11:58 AM, Steve Loughran <ste...@hortonworks.com>
    wrote:
    
    > Bitcoins are profitable enough to justify writing malware to run on Hadoop
    > clusters & schedule mining jobs: there have been a couple of incidents of
    > this in the wild, generally going in through no security, well known
    > passwords, open ports.
    >
    > Vendors of Hadoop-related products get to deal with their lockdown
    > themselves, which they often do by installing kerberos from the outset,
    > making users make up their own password for admin accounts, etc.
    >
    > The ASF releases though: we just provide something insecure out the box
    > and some docs saying "use kerberos if you want security"
    >
    > What can we do here?
    >
    > Some things to think about
    >
    > * docs explaining IN CAPITAL LETTERS why you need to lock down your
    > cluster to a private subnet or use Kerberos
    > * Anything which can be done to make Kerberos easier (?). I see there are
    > some outstanding patches for HADOOP-12649 which need review, but what else?
    >
    > Could we have Hadoop determine when it's coming up on an open network and
    > start warning? And how?
    >
    > At the very least, single node hadoop should be locked down. You shouldn't
    > have to bring up kerberos to run it like that. And for more sophisticated
    > multinode deployments, should the scripts refuse to work without kerberos
    > unless you pass in some argument like "--Dinsecure-clusters-permitted"
    >
    > Any other ideas?
    >
    >
    
