[openstack-dev] [oaktree] Follow up to Multi-cloud Management in OpenStack Summit session

2017-11-28 Thread Monty Taylor

Hey everybody!

https://etherpad.openstack.org/p/sydney-forum-multi-cloud-management

I've CC'd everyone who listed interest directly, just in case you're not 
already on the openstack-dev list. If you aren't, and you are in fact 
interested in this topic, please subscribe and make sure to watch for 
[oaktree] subject headings.


We had a great session in Sydney about the needs of managing resources 
across multiple clouds. During the session I pointed out the work that 
had been started in the Oaktree project [0][1] and offered that if the 
people who were interested in the topic thought we'd make progress best 
by basing the work on oaktree, that we should bootstrap a new core team 
and kick off some weekly meetings. This is, therefore, the kickoff email 
to get that off the ground.


All of the below is my thinking and a description of where we're at
right now. It should all be considered up for debate, except for two things:


- gRPC API
- backend implementation based on shade

Those are the two defining characteristics of the project. For those
who weren't in the room, the justifications for those two characteristics are:


gRPC API
--------

There are several reasons for choosing gRPC.

* Make it clear this is not a competing REST API.

OpenStack has a REST API already. This is more like a 'federation' API
that knows how to talk to one or more clouds (similar to the Kubernetes
federation API).


* Streaming and async built in

One of the most costly things in using the OpenStack API is polling. 
gRPC is based on HTTP/2 and thus supports streaming and other exciting 
things. This means an oaktree running in or on a cloud can do its 
polling loops over the local network and the client can just either wait 
on a streaming call until the resource is ready, or can fire an async 
call and deal with it later on a notification channel.
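
To make that concrete, here's a rough sketch of what consuming a
server-streaming call could look like from Python. The service and
message names here are made up for illustration - they are not the
actual oaktree proto definitions:

  # Illustration only: assumes protoc-generated modules and a
  # hypothetical server-streaming CreateServer RPC.
  import grpc

  import oaktree_pb2        # hypothetical generated code
  import oaktree_pb2_grpc   # hypothetical generated code

  channel = grpc.insecure_channel('oaktree.example.com:50051')
  stub = oaktree_pb2_grpc.OaktreeStub(channel)

  # oaktree does the polling server-side; the client just reads
  # progress messages off the stream until the resource is ready.
  for update in stub.CreateServer(oaktree_pb2.ServerRequest(name='web-1')):
      print(update.status)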


* Network efficiency

Protobuf over HTTP/2 is a super-streamlined binary protocol, which 
should actually be really nice for our friends in Telco land who are 
using OpenStack for Edge-related tasks in 1000s of sites. All those 
roundtrips add up at scale.


* Multi-language out of the box

gRPC allows us to directly generate consistent consumption libs for a 
bunch of languages - or people can grab the proto files and integrate 
those into their own build if they prefer.
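
For instance, generating the Python bindings from a proto file with
grpcio-tools is a one-liner (the paths here assume a checkout of the
oaktreemodel repo; other languages use the equivalent protoc plugin):

  # Sketch: generate Python gRPC code from the proto definitions.
  from grpc_tools import protoc

  protoc.main([
      'grpc_tools.protoc',
      '-Ioaktreemodel',
      '--python_out=.',
      '--grpc_python_out=.',
      'image.proto',
  ])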


* The cool kids are doing it

To be fair, Jay Pipes and I tried to push OpenStack to use Protobuf 
instead of JSON for service-to-service communication back in 2010 - so 
it's not ACTUALLY a new idea... but with Google pushing it and support 
from the CNCF, gRPC is actually catching on broadly. If we're writing a 
new thing, let's lean forward into it.


Backend implementation in shade
-------------------------------

If the service is defined by gRPC protos, why not implement the service 
itself in Go or C++?


* Business logic to deal with cloud differences

Adding a federation API isn't going to magically make all of those 
clouds work the same. We've got that fairly well sorted out in shade and 
would need to reimplement basically all of shade in any other language.
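
To give a flavour of what that business logic buys you, a minimal shade
example (assuming a 'mycloud' entry in your clouds.yaml; the names are
illustrative):

  # Minimal sketch: shade hides per-cloud differences (image formats,
  # floating-ip vs provider networks, polling) behind one call.
  import shade

  cloud = shade.openstack_cloud(cloud='mycloud')
  server = cloud.create_server(
      'test-server',
      image='Ubuntu 16.04',
      flavor=cloud.get_flavor_by_ram(2048),
      wait=True,      # shade does the status polling for you
      auto_ip=True)   # and whatever 'give me a reachable IP' means here
  print(server.public_v4)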


* shade is battle tested at scale

shade is what Infra's nodepool uses. In terms of high-scale API 
consumption, we've learned a TON of lessons. Much of the design inside 
of shade is the result of real-world scaling issues. It's Open Source, 
so we could obviously copy all of that elsewhere - but why? It exists 
and it works, and oaktree itself should be a scale-out shared-nothing 
kind of service anyway.


The hard bit here isn't making API calls to three clouds; the hard bit 
is doing that against three *different* clouds and presenting the 
results sanely and consistently to the original user.


Proposed Structure
==================

PTL
---

As the originator of the project, I'll take on the initial PTL role. 
When the next PTL elections roll around, we should do a real election.


Initial Core Team
-----------------

oaktree is still small enough that I don't think we need to be super 
protective - so I think if you're interested in working on it and you 
think you'll have the bandwidth to pay attention, let me know and I'll 
add you to the team.


General rules of thumb I try to follow on top of normal OpenStack 
reviewing guidelines:


* Review should mostly be about suitability of design/approach. Style 
issues should be handled by pep8/hacking (with one exception, see 
below). Functional issues should be handled with tests. Let the machines 
be machines and humans be humans.


* Use followup patches to fix minor things rather than causing an 
existing patch to get re-spun and need to be re-reviewed.


The one style exception ... I'm a big believer in not using visual 
indentation - but I can't seem to get pep8 or hacking to complain about 
its use. This isn't just about style - visual indentation causes more 
lines to be touched during a refactor than are necessary, making the 
impact of a change harder to see.


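Roughly, the distinction (an illustrative snippet):

  good:

      foo = call(
          argument_one,
          argument_two)

  bad:

      foo = call(argument_one,
                 argument_two)

With the 'bad' version, renaming foo or call forces the continuation
lines to be re-indented, which is exactly the refactor churn described
above.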


Re: [openstack-dev] [oaktree] Follow up to Multi-cloud Management in OpenStack Summit session

2017-11-28 Thread Jeremy Stanley
On 2017-11-28 15:20:10 -0600 (-0600), Monty Taylor wrote:
[...]
> To be fair, Jay Pipes and I tried to push OpenStack to use
> Protobuf instead of JSON for service-to-service communication back
> in 2010 - so it's not ACTUALLY a new idea... but with Google
> pushing it and support from the CNCF, gRPC is actually catching on
> broadly. If we're writing a new thing, let's lean forward into it.
[...]

Not to be "that guy" but... why not ASN.1? Sure the new shiny wore
off decades ago, but it's more broadly supported even than PB and
had all that extra time to get the corner cases cleared of cobwebs.
I know this particular train has already sailed, but PB just always
struck me as though it were either designed by someone who had no
idea the same problems had already been solved long ago, or who
didn't (want to be bothered to) read the X.690 spec.

/me goes back to shaking a cane at the kids on his lawn

-- 
Jeremy Stanley




Re: [openstack-dev] [oaktree] Follow up to Multi-cloud Management in OpenStack Summit session

2017-11-28 Thread Joshua Harlow

So just curious.

I didn't think shade had any federation logic in it; so I assume it will 
start getting some?


Has there been any prelim. design around what the APIs of this would be 
and how they would work and how they would return data from X other 
clouds in a uniform manner? (I'd really be interested in how a high 
level project is going to combine various resources from other clouds in 
a way that doesn't look like crap).


Will this thing also have its own database (or something like a DB)?

I can imagine if there is a `create_many_servers` call in oaktree that 
it will need to have some sort of lock taken by the process doing this 
set of XYZ calls (in the right order) so that some other 
`create_many_servers` call doesn't come in and screw up everything the 
prior one did... Or maybe cross-cloud consistency issues aren't a 
concern... What are the thoughts here?


What happens in the above if a third user Y is creating resources in one 
of those clouds outside the view of oaktree... ya da ya da... What 
happens if they are both targeting the same tenant...


Perhaps it's a decent idea to start some kind of etherpad to write 
these questions down (and at least think about them a wee bit)?


Monty Taylor wrote:

[...]

Re: [openstack-dev] [oaktree] Follow up to Multi-cloud Management in OpenStack Summit session

2017-11-28 Thread Monty Taylor

On 11/28/2017 06:05 PM, Joshua Harlow wrote:
> So just curious.
>
> I didn't think shade had any federation logic in it; so I assume it will
> start getting some?

It's possible that we're missing each other on the definition of the 
word 'federation' ... but shade's entire purpose in life is to allow 
sane use of multiple clouds from the same application.


> Has there been any prelim. design around what the APIs of this would be
> and how they would work and how they would return data from X other
> clouds in a uniform manner? (I'd really be interested in how a high
> level project is going to combine various resources from other clouds in
> a way that doesn't look like crap).

(tl;dr - yes)

Ah - I grok what you're saying now. Great question!

There are (at least) four sides to this.

* Creating a resource in a specific location (boot a VM in OVH BHS1)
* Fetching resources from a specific location (show me the image in 
vexxhost)


* Creating a resource everywhere (upload an image to all cloud regions)
* Fetching resources from all locations (show me all my VMs)

The first two are fully handled, as you might imagine, although the 
mechanism is slightly different in shade and oaktree (I'll get back to 
that in a sec)


Creating everywhere isn't terribly complex - when I need to do that 
today it's a simple loop:


  for cloud in shade.openstack_clouds():
      cloud.create_image('my-image', filename='my-image.qcow2')

But we can (and should and will) add some syntactic sugar to make that 
easier. Like (*waving hands*)


  all_clouds = shade.everywhere()
  all_clouds.create_image('my-image', filename='my-image.qcow2')

It's actually more complex than that, because Rackspace wants a VHD and 
OVH wants a RAW but can take a qcow2 as well... but this is an email, so 
for now let's assume that we can handle the general 'create everywhere' 
with a smidge of meta programming, some explicit overrides for the 
resources that need extra special things - and probably something like 
concurrent.futures.ThreadPoolExecutor.
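
A rough sketch of that fan-out (hand-waving past the per-cloud format
overrides and error handling):

  from concurrent import futures

  import shade

  clouds = shade.openstack_clouds()
  with futures.ThreadPoolExecutor(max_workers=len(clouds)) as pool:
      pending = {}
      for cloud in clouds:
          # Scatter: one image upload per cloud-region, in parallel
          future = pool.submit(
              cloud.create_image, 'my-image', filename='my-image.qcow2')
          pending[future] = cloud
      # Gather: report each cloud's result as it finishes
      for future in futures.as_completed(pending):
          print('%s: %s' % (pending[future].name, future.result().id))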


The real fun, as you hint at, comes when we want to read from everywhere.

To prep for this (and inspired specifically by this use-case), shade now 
adds a "location" field to every resource it returns. That location 
field contains cloud, region, domain and project information - so that 
in a list of server objects from across 14 regions of 6 clouds, all the 
info about who and what they are is right there in the object.
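
In shade terms, reading from everywhere then looks roughly like this
(the exact keys inside the location field are best double-checked
against shade itself):

  import collections

  import shade

  # Sketch: list servers from every configured cloud-region and group
  # them by the location info shade attaches to each resource.
  by_region = collections.defaultdict(list)
  for cloud in shade.openstack_clouds():
      for server in cloud.list_servers():
          loc = server['location']
          key = (loc['cloud'], loc['region_name'])
          by_region[key].append(server['name'])

  for (cloud_name, region), names in sorted(by_region.items()):
      print('%s/%s: %s' % (cloud_name, region, ', '.join(names)))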


When we shift to the oaktree gRPC interface, we carry over the Location 
concept:



http://git.openstack.org/cgit/openstack/oaktreemodel/tree/oaktreemodel/common.proto#n31

which we keep on all of the resources:


http://git.openstack.org/cgit/openstack/oaktreemodel/tree/oaktreemodel/image.proto#n49

So listing all the things should work the same way as the above 
list-from-everywhere method.


The difference I mentioned earlier in how shade and oaktree present the 
location interface is that in shade there is an OpenStackCloud object 
per cloud-region, and as a user you select which cloud you operate on 
by instantiating an OpenStackCloud pointed at the right thing. We need 
to add the AllTheClouds meta object for the shade interface.
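
Concretely, the shade side of that selection is something like this
(the cloud and region names are just examples and would need to exist
in your clouds.yaml):

  import shade

  # Pick the cloud-region up front by building an OpenStackCloud for it
  bhs1 = shade.openstack_cloud(cloud='ovh', region_name='BHS1')
  print(bhs1.list_servers())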


In oaktree, there is the one oaktree instance and it contains 
information about all of your cloud-regions, so Locations and Filters 
become parameters on operations.


> Will this thing also have its own database (or something like a DB)?

It's an open question. Certainly not at the moment or in the near future 
- there's no need for one, as the constituent OpenStack clouds are the 
actual source of truth; the thing we need is caching rather than data 
that is canonical itself.


This will almost certainly change as we work on the auth story, but the 
specifics of that are ones that need to be sorted out collectively - 
preferably with operators involved.


> I can imagine if there is a `create_many_servers` call in oaktree that
> it will need to have some sort of lock taken by the process doing this
> set of XYZ calls (in the right order) so that some other
> `create_many_servers` call doesn't come in and screw up everything the
> prior one did... Or maybe cross-cloud consistency issues aren't a
> concern... What are the thoughts here?

That we have already, actually, and you've even landed code in it. :) 
shade executes all of its remote operations through a TaskManager. The 
default one that you get if you're just running some ansible is a 
pass-through. However, in nodepool we have a multi-threaded 
rate-limiting TaskManager that ensures that we're only ever doing one 
operation at a time for a given cloud-region, and that we're keeping 
ourselves inside of a configurable rate limit (learned the hard way from 
crashing a few public clouds).
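
For anyone who hasn't read that code, the core of the idea is nothing
fancier than a lock plus a minimum interval between calls - a toy
sketch (not the actual TaskManager API):

  import threading
  import time

  class RateLimiter(object):
      # Toy illustration of per-cloud-region serialization plus rate
      # limiting; the real TaskManager does quite a bit more than this.
      def __init__(self, rate):
          self._lock = threading.Lock()   # one operation at a time
          self._interval = 1.0 / rate     # minimum seconds between calls
          self._last = 0.0

      def run(self, func, *args, **kwargs):
          with self._lock:
              wait = self._interval - (time.time() - self._last)
              if wait > 0:
                  time.sleep(wait)
              try:
                  return func(*args, **kwargs)
              finally:
                  self._last = time.time()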


It's worth noting that shade is not transactional (although there are a 
few places where, if shade created a resource on the user's behalf that 
the user doesn't know about, it will delete it on error). [...]

Re: [openstack-dev] [oaktree] Follow up to Multi-cloud Management in OpenStack Summit session

2017-11-28 Thread Joshua Harlow

Monty Taylor wrote:

On 11/28/2017 06:05 PM, Joshua Harlow wrote:
 > So just curious.
 >
 > I didn't think shade had any federation logic in it; so I assume it will
 > start getting some?

It's possible that we're missing each other on the definition of the
word 'federation' ... but shade's entire purpose in life is to allow
sane use of multiple clouds from the same application.


Ya, I think you got it - shade is like what I would call the 
rubber-hits-the-road part of federation; so it will be interesting to 
see how such rubber can be used to build what I would call the 
higher-level federation (without screwing it up, lol).




 > Has there been any prelim. design around what the APIs of this would be
 > and how they would work and how they would return data from X other
 > clouds in a uniform manner? (I'd really be interested in how a high
 > level project is going to combine various resources from other clouds in
 > a way that doesn't look like crap).

(tl;dr - yes)

Ah - I grok what you're saying now. Great question!

There are (at least) four sides to this.

* Creating a resource in a specific location (boot a VM in OVH BHS1)
* Fetching resources from a specific location (show me the image in
vexxhost)

* Creating a resource everywhere (upload an image to all cloud regions)
* Fetching resources from all locations (show me all my VMs)

The first two are fully handled, as you might imagine, although the
mechanism is slightly different in shade and oaktree (I'll get back to
that in a sec)

Creating everywhere isn't terribly complex - when I need to do that
today it's a simple loop:

  for cloud in shade.openstack_clouds():
      cloud.create_image('my-image', filename='my-image.qcow2')


Ya, scatter/gather (with some kind of new grpc streaming response..)



But we can (and should and will) add some syntactic sugar to make that
easier. Like (*waving hands*)

  all_clouds = shade.everywhere()
  all_clouds.create_image('my-image', filename='my-image.qcow2')


Might as well just start to call it scatter/gather, lol



[...]

 > Will this thing also have its own database (or something like a DB)?

It's an open question. Certainly not at the moment or in the near future
- there's no need for one, as the constituent OpenStack clouds are the
actual source of truth; the thing we need is caching rather than data
that is canonical itself.


That's fine - it prob only becomes a problem if there is a need for some 
kind of cross-cloud consistency requirement (which ideally this whole 
thing would strongly avoid).




[...]

Re: [openstack-dev] [oaktree] Follow up to Multi-cloud Management in OpenStack Summit session

2017-11-28 Thread Joshua Harlow

Small side-question,

Why would this just be limited to openstack clouds?

Would it be?

Monty Taylor wrote:

[...]

Re: [openstack-dev] [oaktree] Follow up to Multi-cloud Management in OpenStack Summit session

2017-11-29 Thread Monty Taylor

On 11/28/2017 07:14 PM, Joshua Harlow wrote:

> Small side-question,
>
> Why would this just be limited to openstack clouds?
>
> Would it be?


That's a great question. I think, at least for now, attempting to 
support non-OpenStack clouds would be too much and would cause us to 
have a thing that tries to solve all the problems and ends up solving 
none of them.


The problem is that, as much as there are deployer differences between 
OpenStack clouds, papering over them isn't THAT bad from an interface 
perspective, since the fundamental concepts are all the same.


Once you add non-OpenStack clouds, you have to deal with the extreme 
impedance mismatch between core concepts, and the use of similar names 
for different things.


For instance - an OpenStack Availability Zone and an AWS AZ are **not** 
the same thing. So you'd either need to use a different word mapped to 
each one (which would confuse either OpenStack or AWS users) or you'd 
have an oaktree concept mean different things depending on which cloud 
happened to be there.


All that said - I don't think there's anything architecturally that 
would prevent such work from happening - I just think it's fraught with 
peril and unlikely to be super successful, and that we should focus on 
making sure OpenStack users can consume multi-cloud and 
multi-cloud-region sanely. Then, once we're happy with that and have 
served the needs of our OpenStack users, if someone comes up with a plan 
that adds support for non-OpenStack backend drivers for oaktree in a way 
that does not make life worse for the OpenStack users - then why not.



Monty Taylor wrote:

[...]

Re: [openstack-dev] [oaktree] Follow up to Multi-cloud Management in OpenStack Summit session

2017-11-29 Thread Jay Pipes

On 11/29/2017 11:26 AM, Monty Taylor wrote:
> For instance - an OpenStack Availability Zone and an AWS AZ are **not**
> the same thing.


This deserves repeating every week on the OpenStack mailing list.

-jay
