I can give a high-level overview of what I feel is a reasonably common use case.

I have infrastructure in two primary locations: AWS, and MAAS at the local
datacenter. The nodes at the datacenter have a direct fiber route via a
virtual private gateway (VPG) in us-west-2, and the instances in
AWS/us-west-2 have a direct route via the VPG to the private MAAS networks
at the datacenter. There is no charge for data transfer between the
datacenter and us-west-2 over the fiber VPG route, so it behooves me to
use it and have the AWS instances and MAAS instances talk to each other
via private addresses.

At the application level, the component/config goes something like this:

The MAAS nodes at the datacenter have mgmt-net, cluster-net, and
access-net interfaces defined, all of which get IPs from their respective
address spaces from the datacenter MAAS.

I need my elasticsearch charm to configure elasticsearch such that
elasticsearch <-> elasticsearch traffic stays on cluster-net, and web
server (AWS instance) -> elasticsearch traffic crosses the correct space
for the AWS instance and the access-net space for the MAAS instance (I'm
thinking this is where bindings and '--via' might come in handy).
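
To make that concrete, here is a rough sketch of what I imagine the
elasticsearch charm doing, assuming a hypothetical 'cluster' endpoint in
metadata.yaml that gets bound to cluster-net at deploy time (the endpoint
name and helper are illustrative only, not an existing API):

    from charmhelpers.core import hookenv

    def cluster_bind_address():
        # network-get output for an endpoint is filtered by the space
        # the endpoint is bound to, so on a MAAS node this should yield
        # the cluster-net address for elasticsearch to listen on.
        info = hookenv.network_get('cluster')
        return info['bind-addresses'][0]['addresses'][0]['address']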

(I know the openstack charms have to do similar network mediation, for
which they use bindings; I must just be looking at it backwards, or not
looking closely enough at network bindings, which I think are the key
here.)

For example, my web server charm in AWS will be deployed to a NAT
space/subnet, and will only get a private IP from the AWS subnet. It
needs to give that IP to elasticsearch (deployed in MAAS) and to a
loadbalancer (deployed to a different model and space in the same AWS
VPC). This all seems like it should happen without issue, because the web
server charm only has a single IP address to hand out; what I'm after
here is a consistent way to retrieve this information at the charm level.
But I think what you are telling me is that if I use the functionality
correctly, then I won't have to do any mediating at the charm/network-get
level.
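
For reference, this is roughly how I'd expect the provides side of the
web server's relation to look (a sketch only; the 'website' endpoint
name is hypothetical, and I'm leaning on juju to put the right address
in ingress-addresses):

    from charmhelpers.core import hookenv

    def publish_address():
        # juju decides whether the remote unit should see the public or
        # the private address and fills ingress-addresses accordingly;
        # the charm just passes that value along on the relation.
        info = hookenv.network_get('website')
        hookenv.relation_set(hostname=info['ingress-addresses'][0])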

Looks like I need to take a deeper dive into network bindings at the
charm level and see how that functionality fits into the bigger picture.

Thanks



> I'd like to understand the use case you have in mind a little better. The
> premise of the network-get output is that charms should not think about
> public vs private addresses in terms of what to put into relation data -
> the other remote unit should not be exposed to things in those terms.
>
> There's some doc here to explain things in more detail
>
> https://jujucharms.com/docs/master/developer-network-primitives
>
> The TL;DR is that charms need to care about:
> - what address do I bind to (listen on)
> - what address do external actors use to connect to me (ingress)
>
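
(Interjecting for my own notes: listen on the bind address, advertise the
ingress address. A rough sketch, with a hypothetical 'db' endpoint name:)

    from charmhelpers.core import hookenv

    info = hookenv.network_get('db')
    listen_on = info['bind-addresses'][0]['addresses'][0]['address']
    advertise = info['ingress-addresses'][0]
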
> Depending on how the charm has been deployed, and more specifically
> whether it is in a cross model relation, the ingress address might be
> either the public or private address. Juju will decide based on a number
> of factors (whether models are deployed to same region, vpc, other
> provider specific aspects) and populate the network-get data accordingly.
> NOTE: for now Juju will always pick the public address (if there is one)
> for the ingress value for cross model relations - the algorithm to short
> circuit to a cloud local address is not yet finished.
>
> The content of the bind-addresses block is space aware in that these are
> filtered based on the space with which the specified endpoint is
> associated. The network-get output though should not include any space
> information explicitly - this is a concern which a charm should not care
> about.
>
>
> On 12/10/17 13:35, James Beedy wrote:
> > Hello all,
> >
> > In case you haven't noticed, we now have a network_get() function
> > available in charmhelpers.core.hookenv (in master, not stable).
> >
> > Just wanted to have a little discussion about how we are going to be
> > parsing network_get().
> >
> > I first want to address the output of network_get() for an instance
> > deployed to the default vpc, no spaces constraint, and related to another
> > instance in another model also default vpc, no spaces constraint.
> >
> > {'ingress-addresses': ['107.22.129.65'],
> >  'bind-addresses': [
> >      {'addresses': [{'cidr': '172.31.48.0/20',
> >                      'address': '172.31.51.59'}],
> >       'interfacename': 'eth0',
> >       'macaddress': '12:ba:53:58:9c:52'},
> >      {'addresses': [{'cidr': '252.48.0.0/12',
> >                      'address': '252.51.59.1'}],
> >       'interfacename': 'fan-252',
> >       'macaddress': '1e:a2:1e:96:ec:a2'}]}
> >
> >
> > The use case I have in mind here is such that I want to provide the
> > private network interface address via relation data in the provides.py
> > of my interface to the relating application.
> >
> > This will be able to happen by calling
> > hookenv.network_get('<interface-name>') in the layer that provides the
> > interface in my charm, and passing the output to get the private
> > interface ip data, to then set that in the provides side of the
> > relation.
> >
> > Tracking?
> >
> > The problem:
> >
> > The problem is that it's not so straightforward to just get the
> > private address from the output of network_get().
> >
> > As you can see above, I could filter for network interface name, but
> > that's about the least best way one could go about this.
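
(To illustrate the interface-name filtering I mean - a sketch of the
brittle version, where 'eth0' is a guess that only holds on machines
that happen to name their interfaces that way:)

    from charmhelpers.core import hookenv

    def private_address(endpoint):
        info = hookenv.network_get(endpoint)
        for iface in info['bind-addresses']:
            if iface['interfacename'] == 'eth0':  # brittle guess
                return iface['addresses'][0]['address']
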
> >
> > Initially, I assumed the network_get() output would look different if
> > you had specified a spaces constraint when deploying your application,
> > but the output was similar to no spaces, e.g. spaces aren't listed in
> > the output of network_get().
> >
> >
> > All in all, what I'm after is a consistent way to grep either the
> > space an interface is bound to, or to get the public vs private
> > address from the output of network_get(). I think this is true for
> > just about every provider (ones that use spaces at least).
> >
> > Instead of the dict above, I was thinking we might namespace the
> > interfaces inside of what type of interface they are, to make it
> > easier to decipher when parsing the network_get() output.
> >
> > My idea is a schema like the following:
> >
> > {
> >     'private-networks': {
> >         'my-admin-space': {
> >             'addresses': [
> >                 {
> >                     'cidr': '172.31.48.0/20',
> >                     'address': '172.31.51.59'
> >                 }
> >             ],
> >             'interfacename': 'eth0',
> >             'macaddress': '12:ba:53:58:9c:52'
> >         }
> >     },
> >     'public-networks': {
> >         'default': {
> >             'addresses': [
> >                 {
> >                     'cidr': 'publicipaddress/32',
> >                     'address': 'publicipaddress'
> >                 }
> >             ]
> >         }
> >     },
> >     'fan-networks': {
> >         'fan-252': {
> >             ....
> >         }
> >     }
> > }
> >
> > Where all interfaces bound to spaces are considered private addresses,
> > and with the assumption that if you don't specify a space constraint,
> > your private network interface is bound to the "default" space.
> >
> > The key thing here is the schema structure grouping the interfaces
> > bound to spaces inside a private-networks level in the dict, and the
> > introduction of the fact that if you don't specify a space, you get an
> > address bound to an artificial "default" space.
> >
> > I feel this would make things easier to consume and interface with,
> > from a developer standpoint.
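
(If it helps, with that schema the consuming code would collapse to a
plain lookup - a sketch, using the example space name from above:)

    from charmhelpers.core import hookenv

    def private_address(endpoint, space='my-admin-space'):
        info = hookenv.network_get(endpoint)
        return info['private-networks'][space]['addresses'][0]['address']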
> >
> > Is this making sense? How do others feel?