I don't think it's unfair to compare against k8s in this case. You have to
follow the same kinds of steps as an admin provisioning a k8s compute node as
you do an OpenStack compute node. The main difference, I think, is that k8s
makes use of the infrastructure the operator put in place and exposes it to
the user in a friendlier way, while currently we ask the user to manually
piece together a secure path themselves using back channels that the
operator secured (consoles).

As for console scraping: standard as the practice may be, it isn't very well
adopted. Most folks I've seen just ignore the SSH stuff entirely and live
with the man-in-the-middle risk. So, while it's a standard, it's an
infrequently used one, IMO.
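
For reference, the manual flow Clint describes below looks something like
this. It's a rough sketch: it assumes a cloud-init based image, which prints
the instance's host keys to the console between standard marker lines, and
the helper name and CLI plumbing are mine, not anything OpenStack ships.

    import subprocess

    BEGIN = "-----BEGIN SSH HOST KEY KEYS-----"
    END = "-----END SSH HOST KEY KEYS-----"

    def scrape_host_keys(server, address):
        # Pull the instance's console log via the openstack CLI.
        log = subprocess.run(
            ["openstack", "console", "log", "show", server],
            capture_output=True, text=True, check=True,
        ).stdout
        keys, inside = [], False
        for line in log.splitlines():
            if BEGIN in line:
                inside = True
            elif END in line:
                inside = False
            elif inside and line.strip():
                # Console lines look like "ssh-ed25519 AAAA... root@vm";
                # known_hosts wants "address keytype key".
                keytype, key = line.split()[:2]
                keys.append(f"{address} {keytype} {key}")
        return keys

    # e.g. append these to ~/.ssh/known_hosts before first connecting
    for entry in scrape_host_keys("my-vm", "203.0.113.5"):
        print(entry)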

There's a temporal issue too. Standing up a new compute node happens rarely.
Standing up a new VM should be relatively frequent. As an operator, I'd be OK
taking on the one-time cost of setting up the compute nodes if I didn't have
to worry so much about users doing bad things with SSH.

Thanks,
Kevin
________________________________________
From: Clint Byrum [[email protected]]
Sent: Monday, October 09, 2017 1:42 PM
To: openstack-dev
Subject: Re: [openstack-dev] Supporting SSH host certificates

And k8s has the benefit of already having been installed with certs that
had to get there somehow.. through a trust bootstrap.. usually SSH. ;)

Excerpts from Fox, Kevin M's message of 2017-10-09 17:37:17 +0000:
> Yeah, there is a way to do it today. It really sucks for most users,
> though. Due to the complexity of the task, most users have gotten into
> the terrible habit of ignoring the "this host's ssh key changed"
> warning and just blindly accepting the change. I kind of hate to say it
> this way, but because of the way things are done today, OpenStack is
> training folks to ignore man-in-the-middle attacks. This is not good.
> We shouldn't just shrug it off and say folks should be more careful. We
> should try and make the edge less sharp so they are less likely to stab
> themselves and, later, give OpenStack a bad name because OpenStack was
> involved.
>

I agree that we could do better.

I think there _is_ a standardized method, which is to print the host
public keys to the console and scrape them out on first access.
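
With a stock cloud-init image, that block looks roughly like this in the
console log (key material elided):

    -----BEGIN SSH HOST KEY KEYS-----
    ecdsa-sha2-nistp256 AAAA... root@myvm
    ssh-ed25519 AAAA... root@myvm
    -----END SSH HOST KEY KEYS-----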

> (Yeah, I get that it is not exactly OpenStack's fault that people use
> it in an unsafe manner. But still, if OpenStack can do something about
> it, it would be better for everyone involved.)
>

We could do better, though. We could have an API for that.
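
Purely to illustrate, a hypothetical API along those lines might let a
client do the following. Nothing like this exists in Nova today; the
endpoint, header handling, and response shape are all invented:

    import requests

    def fetch_host_keys(compute_url, token, server_id):
        # Invented endpoint; Nova has no such resource today.
        resp = requests.get(
            f"{compute_url}/servers/{server_id}/ssh-host-keys",
            headers={"X-Auth-Token": token},
        )
        resp.raise_for_status()
        # Imagined response body:
        # {"ssh_host_keys": ["ssh-ed25519 AAAA...", ...]}
        return resp.json()["ssh_host_keys"]

A client (or the openstack CLI) could then write those straight into
known_hosts, closing the trust-on-first-use gap without the console
round trip.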

> This is one thing I think k8s is doing really well. kubectl exec <pod>
> uses the chain of trust built up from the user all the way to the pod.
> There isn't anything manual the user has to do to secure the path.
> OpenStack could really benefit from something similar for client-to-VM
> access.
>

This is an unfair comparison. k8s is running in the user space, and as
such rides on the bootstrap trust of whatever was used to install it.

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: [email protected]?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
