On Wed, Apr 16, 2014 at 09:45:55PM -0600, Yves Dorfsman wrote:
> On 2014-04-16 21:14, Matt Okeson-Harlow wrote:
> >
> >If it is taking a long time to push pubkeys out, is this possibly due to the
> >number of forks?  Before 1.3 I believe the default was 5.
> >
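
I'm not an Ansible user (we're a Puppet shop, as I note further down), but as
I understand it the parallelism is the 'forks' setting in ansible.cfg, so it's
cheap to rule that out first. Untested sketch; the value of 50 is just a guess
to tune for your management host:

   # ansible.cfg on the host that runs the plays
   [defaults]
   # number of parallel connections; raise it if key pushes are slow
   forks = 50
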
> >Are keys pushed out as part of a master 'Do all the things' update, or are
> >they tagged so that something like 'ansible-playbook site.yml -t pubkeys'
> >pushes everything out?  If so, a cron job and/or onboarding checklist should
> >be able to take care of it.  If new/temporary servers are not getting added
> >to your ansible (or any config management) configuration... how are they
> >being configured?
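
Seconding the cron idea: if the pubkey tasks are tagged like that, a periodic
tagged run is cheap insurance. Untested sketch; the schedule, the paths and the
'pubkeys' tag are just placeholders following Matt's example:

   # /etc/cron.d/push-pubkeys on the management host
   # run only the key-distribution tasks against everything, hourly
   17 * * * *  root  cd /etc/ansible && ansible-playbook site.yml -t pubkeys
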
> 
> The problem isn't with ansible, but with the model:
> 
> We have a very dynamic environment: any developer can create a VM in
> AWS, or indeed an entire environment. That's not a bad thing, that's
> a good thing; it's part of the advantage of using VMs.
> 
> Ansible is only run against a given set of VMs (an environment) when
> somebody does a deploy or re-installs software. A small VM running a
> tiny service that doesn't need upgrading can be left alone for a long
> time, with no ansible run against it. When one of the new guys needs
> to access it, their key won't be on it, or worse, if somebody quits,
> their key won't be cleaned up.
> 
> I've gone around and around in circles with this. I think our best
> bet, for our environment, is using sshd's AuthorizedKeysCommand and
> curl'ing a list of public keys from a secure S3 repo (in a
> different account that only very few sysadmins have access to). The
> only risks I can think of are:
> 
> - The key for the S3 account getting compromised.
>   This is basically the same risk as one of the keys to the general
> AWS account being compromised, so we're not worse off here.

(NB: We're a Puppet shop presently, hope this isn't completely off base.)
If they're _public_ keys, can you store only the public keys on S3, so
it's less of an issue if compromised? The keyserver's private key would,
of course, remain in place. Append the keyserver's public key to
newserver:/etc/ssh/ssh_known_hosts, and to make it harder to spoof the
DNS, set in newserver:/etc/ssh/ssh_config:

   CheckHostIP yes

To confirm the satellites speak to the genuine keyserver, also set:

   UserKnownHostsFile /etc/ssh/ssh_known_hosts

We also centrally control and distribute the various authorized_keys
files.
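
For completeness, one way to capture the keyserver's host key when a new
server is provisioned (the hostname is a placeholder, and this assumes the
scan runs over a network you already trust, otherwise copy the key
out-of-band):

   # append the keyserver's host key to the system-wide known_hosts
   ssh-keyscan -t rsa keyserver.example.com >> /etc/ssh/ssh_known_hosts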

> - The "web server" serving the public keys being compromised.
>   With S3 this is very unlikely, if anything it is less likely than
> the current scenario where any dev laptop is out on the internet and
> could be compromised, or using our own LDAP server, with the
> potential of not securing it properly. I doubt the S3 servers are
> accessible from the internet, I'm assuming there is an entire team
> making sure this is not going to happen.
> 
> - DNS being hijacked (then somebody impersonates S3, and pushes
> their keys on us).
>   These are AWS instances, getting DNS from an Amazon DNS server
> over Amazon's private IP network. Again very unlikely, and less
> likely than our current scenario.
> 
> If I missed some likely vector of attack, I'd love to hear about it.
> 
> -- 
> Yves.
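
One concrete note on the AuthorizedKeysCommand idea, since that's the part
I'd want spelled out: something like the sketch below is how I read your
proposal. Untested; the script path, the bucket name, and using the aws CLI
instead of raw curl against a signed URL are all my assumptions. sshd passes
the username as the script's first argument, and a non-zero exit just means
"no keys from here", so keys in the local authorized_keys file still work.

   # /etc/ssh/sshd_config on the satellites
   AuthorizedKeysCommand /usr/local/bin/fetch-authorized-keys
   AuthorizedKeysCommandUser nobody

   # /usr/local/bin/fetch-authorized-keys
   #!/bin/sh
   # $1 is the user logging in; print their public keys, one per line,
   # fetched from the locked-down S3 bucket in the other AWS account.
   exec aws s3 cp "s3://example-keys-bucket/pubkeys/${1}" - 2>/dev/null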

