On 10/03/17 12:27, Monty Taylor wrote:
On 03/10/2017 10:59 AM, Clint Byrum wrote:
I'm curious what you (Josh) or Zane would change too.
Containers/apps/kubes/etc. have to run on computers with storage and
networks. OpenStack provides a pretty rich set of features for giving
users computers with storage on networks, and operators a way to manage
those. So I fail to see how that is svn to "cloud native"'s git. It
seems more foundational and complementary.

I agree with this strongly.

I get frustrated really quickly when people tell me that, as a user, I
_should_ want something different than what I actually do want.

It turns out, I _do_ want computers - or at least things that behave
like computers - for some percentage of my workloads. I'm not the only
one out there who wants that.

There are other humans out there who do not want computers. They don't
like computers at all. They want generic execution contexts into which
they can put their apps.

It's just as silly to tell those people that they _should_ want
computers as it is to tell me that I _should_ want a generic execution
context.

I totally agree with you. There is room for all kinds of applications on OpenStack, and the great thing about an open community is that they can all have a voice. In theory.

One of the wonderful things about the situation we're in now is that if
you stack a k8s on top of an OpenStack then you empower the USER to
decide what their workload is and which types of features it needs - rather
than forcing a "cloud native" vision dreamed up by "thought leaders"
down everyone's throats.

I'd go even further than that - many workloads are likely a mix of _both_, and we need to empower users to be able to use the right tools for the right parts of the job and _integrate_ them together. That's where OpenStack can add huge value to k8s and the like.

You may be familiar with the Kuryr project, which integrates Kubernetes deployments made by Magnum with Neutron networking so that other Nova servers can talk directly to the containers and other fun stuff. IMHO it's exactly the kind of thing OpenStack should be doing to make users' lives better, and give a compelling reason to install k8s on top of OpenStack instead of on bare metal.

So here's a fun thing I learned at the PTG: according to the Magnum folks, the main thing preventing them from fully adopting Kuryr is that the k8s application servers, provisioned with Nova, need to make API calls to Neutron to set up the ports as containers move around. And there's no secure way to give Keystone authentication credentials to an application server to do what it needs - and, especially, to do *only* what it needs.

http://lists.openstack.org/pipermail/openstack-dev/2016-October/105304.html
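
To make the shape of the problem more concrete, here's a rough illustration (emphatically not Kuryr's actual code - the cloud name, naming convention and openstacksdk calls are just my own sketch) of the kind of Neutron operations each k8s node ends up needing to perform. Every one of them currently implies a full set of Keystone credentials sitting on the application server:

    import openstack

    # Illustrative sketch only - not Kuryr's real implementation. The
    # cloud name and port naming are invented; the point is that every
    # call below needs Keystone credentials on the k8s node itself.
    conn = openstack.connect(cloud='k8s-app-cloud')

    def plug_pod(network_id, pod_name):
        # Create a Neutron port for a freshly scheduled pod...
        return conn.network.create_port(
            network_id=network_id,
            name='pod-%s' % pod_name,
        )

    def unplug_pod(port):
        # ...and clean it up again when the pod moves or dies.
        conn.network.delete_port(port)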

Keystone devs did agree back in Austin that when they rejigger the default policy files it will be done in such a way as to make the authorisation component of this feasible (by requiring a specific reader/writer role, not just membership of the project, to access APIs), but that change hasn't happened yet AFAIK. I suspect that it isn't their top priority. Kevin has been campaigning for *years* to get Nova to provide a secure way to inject credentials into a server in the same way that this is built in to EC2, GCE and (I assume but haven't checked) Azure. And they turned him down flat every time, saying that this was not Nova's problem.
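
Just to illustrate the reader/writer role idea (this is only my sketch of the concept, not anyone's actual default policy), the difference in oslo.policy terms is between a rule that any project-scoped token satisfies and one that also demands a dedicated role - which is what would let you mint a credential for an app server that can only do what it needs:

    # Hedged sketch only - not the real default policy for any project.
    from oslo_policy import policy

    rules = policy.Rules.from_dict({
        # Roughly today: any token scoped to the project will do.
        'port_write_today': 'project_id:%(project_id)s',
        # The direction agreed in Austin: also demand a dedicated role,
        # so a narrowly-scoped credential can be handed to an app server.
        'port_write_proposed': 'role:writer and project_id:%(project_id)s',
    })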

Sorry, but if OpenStack isn't a good, secure platform for running Kubernetes on then that is a HAIR ON FIRE-level *existential* problem in 2017.

We can't place too much blame on individual projects though, because I believe the main reason this doesn't Just Work already is that there has been an unspoken consensus that we needed to listen to users like you but not to users like Kevin, and the elected leaders of our community have done nothing to either disavow or officially adopt that consensus. We _urgently_ need to decide if that's what we actually want and make sure it is prominently documented so that both users and developers know what's what.

FWIW I'm certain you must have hit this same issue in infra - probably you were able to use pre-signed Swift URLs when uploading log files to avoid needing credentials on servers allocated by nodepool? That's great, but not every API in OpenStack has a pre-signed URL facility, and nor should they have to.
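
For comparison, the reason the Swift case works so nicely is that a trusted host can mint a time-limited signed URL offline and the worker only ever sees that URL, never a Keystone token - roughly like this (the account, container and key are invented for the example):

    # Sketch: mint a time-limited URL on a trusted host, hand only the
    # URL to the worker. The path and temp-URL key are invented.
    import requests
    from swiftclient.utils import generate_temp_url

    path = '/v1/AUTH_ci/logs/job-1234/console.log'
    sig_path = generate_temp_url(path, 3600, 'my-temp-url-key', 'PUT')
    upload_url = 'https://swift.example.com' + sig_path

    # On the worker node: a plain HTTP PUT, no Keystone token in sight.
    with open('console.log', 'rb') as f:
        requests.put(upload_url, data=f)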

(BTW I proposed a workaround for Magnum/Kuryr at the PTG by using a pre-signed Zaqar URL with a subscription triggering a Mistral workflow, and I've started working on a POC.)

It also turns out that the market agrees. Google App Engine was not
successful, until Google added IaaS. Azure was not successful until
Microsoft added IaaS. Amazon, which has a container service and a
serverless service, is all built around the ecosystem that is centered on
... that's right ... an IaaS.

I think this is the point where you're supposed to insert the disclaimer that, in any market, past performance is not a guarantee of future results ;)

So rather than us trying to chase a thing we're not (we're not a
container or thin-app orchestration tool) - being comfortable with our
actual identity (IaaS provider of computers) and working with other
people who do other things ends up providing _users_ with a real win.

k8s is an application that can run in Nova servers, and things we can do to make OpenStack a better host for k8s will also make it a better host for other kinds of computer-wanting applications, and vice-versa. There's no conflict there.

Considering computers as somehow inherently 'better' or 'worse' than
some of the 'cloud-native' concepts is hogwash. Different users have
different needs. As Clint points out - kubernetes needs to run
_somewhere_. CloudFoundry needs to run _somewhere_. So those are at
least two other potential users who are not me and my collection of
things I want to run that want to run in computers.

I think we might be starting to talk about different ideas. The whole VMs vs. containers fight _is_ hogwash. You're right to call it out as such. We hear far too much about it, and it's totally unfair to the folks who work on the VM side. But that isn't what this discussion is about.

Google has done everyone a minor disservice by appropriating the term "cloud-native" and using it in a context such that it's effectively been redefined to mean "containers instead of VMs". I've personally stopped using the term because it's more likely to generate confusion than clarity.

What "cloud-native" used to mean to me was an application that knows it's running in the cloud, and uses the cloud's APIs. As opposed to applications that could just as easily be running in a VPS or on bare metal, but happen to be running in a VM provisioned by Nova.

So where in the past an "application" was just a software package to run on some computer and the racking, patching, configuring of switches, scheduling of backups &c. was all done manually; now we have APIs for all of those things. Cloud-native applications actually make use of those APIs. Why should k8s and CloudFoundry have to act like they're running on old-school physical infrastructure? Why shouldn't running on OpenStack make them _better_? Why shouldn't they be able to scale themselves out? Replace broken servers? And so on.

http://www.zerobanana.com/archive/2017/01/23#what-are-clouds
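
Just to make that concrete, "using the cloud's APIs" can be as simple as an application (or something orchestrating it) noticing a dead member and asking Nova for a replacement. A toy sketch with openstacksdk, with all of the names and IDs invented for the example:

    import openstack

    # Toy sketch of an application healing itself with the cloud's APIs.
    # The cloud name and the image/flavor/network IDs are invented.
    IMAGE_ID = 'IMAGE-UUID'
    FLAVOR_ID = 'FLAVOR-ID'
    NETWORK_ID = 'NETWORK-UUID'

    conn = openstack.connect(cloud='my-app-cloud')

    for server in conn.compute.servers():
        if server.status == 'ERROR':
            # Throw the broken computer away and ask Nova for a new one.
            conn.compute.delete_server(server)
            conn.compute.create_server(
                name=server.name,
                image_id=IMAGE_ID,
                flavor_id=FLAVOR_ID,
                networks=[{'uuid': NETWORK_ID}],
            )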

And, briefly veering back on-topic, if we want those deployments to be reproducible (we do) then that also inevitably means new packaging formats to take into account the extra information about the infrastructure and external services that is never going to be captured e.g. inside a container image. (BTW comparing the app catalog to Juju would be much more productive than comparing it to Docker Hub.)

cheers,
Zane.
