Re: default network space

2017-10-19 Thread Ian Booth


On 19/10/17 16:33, Ian Booth wrote:
> 
> 
> On 19/10/17 15:22, John Meinel wrote:
>> So at the moment, I don't think Juju supports what you're looking for,
>> which is cross model relations without public addresses. We've certainly
>> discussed supporting all private for cross model. The main issue is that we
>> often drive parts of the firewalls (security groups) but without
>> understanding all the routing, it is hard to be sure whether things will
>> actually work.
>>
> 
> The space to which an endpoint is bound affects the behaviour here. Having 
> said
> that, there may be a bug in Juju's cross model relations code.
> 

Actually, there may be an issue with current behaviour, but not what I first
thought.

In network-get, the resulting ingress address uses the public address (if one
exists) only when an endpoint is not bound to a space. If it is bound to a space,
the ingress addresses are set to the machine-local addresses. This is wrong because
there's absolutely no guarantee an arbitrary external workload will be able to
connect to such an address - defaulting to the public address is the best choice
for most deployments.

I think network-get needs to change so that, in the absence of information to
the contrary and regardless of whether an endpoint is bound to a space, the public
address is advertised for ingress in a cross model relation.
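
To make that concrete, here is a rough sketch of the difference for a space-bound
endpoint, reusing the keys and sample addresses from James' network_get() output
later in this thread (the addresses are illustrative only, not authoritative):

# Current behaviour for a space-bound endpoint in a cross model relation:
# ingress falls back to the machine-local address.
current = {
    'bind-addresses': [{
        'addresses': [{'cidr': '172.31.48.0/20', 'address': '172.31.51.59'}],
        'interfacename': 'eth0',
        'macaddress': '12:ba:53:58:9c:52',
    }],
    'ingress-addresses': ['172.31.51.59'],
}

# Proposed default: same bind addresses, but the public address is advertised
# for ingress unless there is information to the contrary.
proposed = dict(current)
proposed['ingress-addresses'] = ['107.22.129.65']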

This proposed change implies we would need a way for the user to specify, at
relation time, a different ingress address for the consuming end. But that's not
necessarily easy to determine, as it requires knowledge of how both sides
(including the offering side) have been deployed, and it may change per relation.
We don't intend to provide a solution for this bit of the problem in Juju 2.3.


> So in the context of this doc
> https://jujucharms.com/docs/master/developer-network-primitives
> 
> For relation data set up by Juju when a unit enters scope of a cross model 
> relation:
> 
> Juju will use the public address for advertising ingress. We have (future) 
> plans
> to support cross model relations where, in the absence of spaces, Juju can
> determine that traffic between endpoints is able to go via cloud local
> addresses, but as stated, with all the potential routing complexity involved, 
> we
> would limit this to quite restricted scenarios where it's guaranteed to work. 
> eg
> on AWS that might be same vpc/tenant/credentials or something. But we're not
> there yet and won't be for the cross model relations release in Juju 2.3.
> 
> The relation data is of course what is available to the remote unit(s) to 
> query.
> The data set up by Juju is the default, and can be overridden by a charm in a
> relation-changed hook for example.
> 
> For network-get output:
> 
> Where there is no space binding...
> 
> ... Juju will use the public address or cloud local address as above.
> 
> Where the endpoint is bound to a space...
> 
> ... Juju will populate the ingress address info in network-get to be the local
> machine addresses in that space.
> 
> So charm could call network-get and do a relation-set to put the correct
> ingress-address value in the relation data bag.
> 
> But I think the bug here is that when a unit enters scope, the default values
> Juju puts in relation data should be calculated the same as for network-get.
> Right now, the ingress address used is not space aware - if it's a cross model
> relation, Juju always uses the public address regardless of whether the 
> endpoint
> is bound to a space. If this behaviour were to be changed to match what
> network-get does, the relation data would be set up correctly(?) and there'd 
> be
> no need for the charm to override anything.
> 
>> I do believe the intended resolution is to use juju relate --via X, and
>> then X can be a space that isn't public. I'm pretty sure we don't have
>> everything wired up for that yet, and we want to make sure we can get the
>> current steps working well.
>>
> 
> juju relate --via X works at the moment by setting the egress-subnets value in
> the relation data bucket. This supports the case where the person deploying
> knows traffic from a model will egress via specific subnets, eg for a NATed
> firewall scenario. Juju itself uses this value to set firewall rules on the
> other model. There's currently no plans to support explicitly specifying what
> ingress addresses to use for either end of a cross model relation.
> 
>> The very first thing I noticed in your first email was that charms should
>> *not* be aware of spaces. The abstractions for charms are around their
>> bindings (explicit or via binding their endpoints). The goal of spaces is
>> to provide human operators a way to tell charms about their environment.
>> But you shouldn't ever have to change the name of your space to match the
>> name a charm expects.
>>
>> So if you do 'network-get BINDING -r relation' that should give you the
>> context you need to coordinate your network settings with the other
>> application. The intent is 

Re: default network space

2017-10-19 Thread Ian Booth


On 19/10/17 15:22, John Meinel wrote:
> So at the moment, I don't think Juju supports what you're looking for,
> which is cross model relations without public addresses. We've certainly
> discussed supporting all private for cross model. The main issue is that we
> often drive parts of the firewalls (security groups) but without
> understanding all the routing, it is hard to be sure whether things will
> actually work.
> 

The space to which an endpoint is bound affects the behaviour here. Having said
that, there may be a bug in Juju's cross model relations code.

So in the context of this doc
https://jujucharms.com/docs/master/developer-network-primitives

For relation data set up by Juju when a unit enters scope of a cross model 
relation:

Juju will use the public address for advertising ingress. We have (future) plans
to support cross model relations where, in the absence of spaces, Juju can
determine that traffic between endpoints is able to go via cloud local addresses.
But as stated, with all the potential routing complexity involved, we would limit
this to quite restricted scenarios where it's guaranteed to work, e.g. on AWS that
might be the same VPC/tenant/credentials or something. We're not there yet and
won't be for the cross model relations release in Juju 2.3.

The relation data is of course what is available to the remote unit(s) to query.
The data set up by Juju is the default, and can be overridden by a charm in a
relation-changed hook for example.

For network-get output:

Where there is no space binding...

... Juju will use the public address or cloud local address as above.

Where the endpoint is bound to a space...

... Juju will populate the ingress address info in network-get to be the local
machine addresses in that space.

So a charm could call network-get and then use relation-set to put the correct
ingress-address value in the relation data bag.
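
As a minimal sketch of that override (assuming the charmhelpers hookenv calls
mentioned elsewhere in this thread; the 'db' binding name is only an example):

from charmhelpers.core import hookenv

def advertise_ingress():
    # 'db' is a hypothetical endpoint binding name.
    info = hookenv.network_get('db')
    ingress = info.get('ingress-addresses') or []
    if ingress:
        # Override the default value Juju wrote when the unit entered scope.
        hookenv.relation_set(relation_settings={'ingress-address': ingress[0]})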

But I think the bug here is that when a unit enters scope, the default values
Juju puts in relation data should be calculated the same as for network-get.
Right now, the ingress address used is not space aware - if it's a cross model
relation, Juju always uses the public address regardless of whether the endpoint
is bound to a space. If this behaviour were to be changed to match what
network-get does, the relation data would be set up correctly(?) and there'd be
no need for the charm to override anything.

> I do believe the intended resolution is to use juju relate --via X, and
> then X can be a space that isn't public. I'm pretty sure we don't have
> everything wired up for that yet, and we want to make sure we can get the
> current steps working well.
> 

juju relate --via X works at the moment by setting the egress-subnets value in
the relation data bucket. This supports the case where the person deploying
knows traffic from a model will egress via specific subnets, e.g. for a NATed
firewall scenario. Juju itself uses this value to set firewall rules on the
other model. There are currently no plans to support explicitly specifying what
ingress addresses to use for either end of a cross model relation.
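
For illustration only, a charm on the other side could read that value out of
the relation data bucket along these lines (this assumes egress-subnets is stored
as a comma-separated list of CIDRs, which is an assumption, not a documented
contract):

from charmhelpers.core import hookenv

def remote_egress_subnets():
    # 'egress-subnets' is the relation data key named above.
    raw = hookenv.relation_get('egress-subnets') or ''
    return [cidr.strip() for cidr in raw.split(',') if cidr.strip()]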

> The very first thing I noticed in your first email was that charms should
> *not* be aware of spaces. The abstractions for charms are around their
> bindings (explicit or via binding their endpoints). The goal of spaces is
> to provide human operators a way to tell charms about their environment.
> But you shouldn't ever have to change the name of your space to match the
> name a charm expects.
> 
> So if you do 'network-get BINDING -r relation' that should give you the
> context you need to coordinate your network settings with the other
> application. The intent is that we give you the right data so that it works
> whether you are in a cross model relation or just related to a local app.
> 
> John
> =:->
> 
> 
> On Oct 13, 2017 19:59, "James Beedy"  wrote:
> 
> I can give a high level of what I feel is a reasonably common use case.
> 
> I have infrastructure in two primary locations; AWS, and MAAS (at the local
> datacenter). The nodes at the datacenter have a direct fiber route via
> virtual private gateway in us-west-2, and the instances in AWS/us-west-2
> have a direct route  via the VPG to the private MAAS networks at the
> datacenter. There is no charge for data transfer from the datacenter in and
> out of us-west-2 via the fiber VPG hot route, so it behooves me to use this
> and have the AWS instances and MAAS instances talk to each other via
> private address.
> 
> At the application level, the component/config goes something like this:
> 
> The MAAS nodes at the data center have mgmt-net, cluster-net, and
> access-net, interfaces defined, all of which get ips from their respective
> address spaces from the datacenter MAAS.
> 
> I need my elasticsearch charm to configure elasticsearch such that
> elasticsearch <-> elasticsearch talk on cluster-net, web server (AWS
> instance) -> elasticsearch to talk across the 

Re: default network space

2017-10-18 Thread John Meinel
So at the moment, I don't think Juju supports what you're looking for,
which is cross model relations without public addresses. We've certainly
discussed supporting all-private addressing for cross model relations. The main
issue is that we often drive parts of the firewalls (security groups), but
without understanding all the routing it is hard to be sure whether things will
actually work.

I do believe the intended resolution is to use juju relate --via X, and
then X can be a space that isn't public. I'm pretty sure we don't have
everything wired up for that yet, and we want to make sure we can get the
current steps working well.

The very first thing I noticed in your first email was that charms should
*not* be aware of spaces. The abstractions for charms are around their
bindings (explicit or via binding their endpoints). The goal of spaces is
to provide human operators a way to tell charms about their environment.
But you shouldn't ever have to change the name of your space to match the
name a charm expects.

So if you do 'network-get BINDING -r relation' that should give you the
context you need to coordinate your network settings with the other
application. The intent is that we give you the right data so that it works
whether you are in a cross model relation or just related to a local app.
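
As a sketch of that from inside a hook, shelling out to the tool quoted above
(the --format flag is assumed here purely to make the output easy to parse):

import json
import subprocess

def network_info(binding, relation_id):
    # 'network-get BINDING -r relation' as described above.
    out = subprocess.check_output(
        ['network-get', binding, '-r', relation_id, '--format', 'json'])
    return json.loads(out)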

John
=:->


On Oct 13, 2017 19:59, "James Beedy"  wrote:

I can give a high level of what I feel is a reasonably common use case.

I have infrastructure in two primary locations; AWS, and MAAS (at the local
datacenter). The nodes at the datacenter have a direct fiber route via
virtual private gateway in us-west-2, and the instances in AWS/us-west-2
have a direct route  via the VPG to the private MAAS networks at the
datacenter. There is no charge for data transfer from the datacenter in and
out of us-west-2 via the fiber VPG hot route, so it behooves me to use this
and have the AWS instances and MAAS instances talk to each other via
private address.

At the application level, the component/config goes something like this:

The MAAS nodes at the data center have mgmt-net, cluster-net, and
access-net, interfaces defined, all of which get ips from their respective
address spaces from the datacenter MAAS.

I need my elasticsearch charm to configure elasticsearch such that
elasticsearch <-> elasticsearch talk on cluster-net, web server (AWS
instance) -> elasticsearch to talk across the correct space for the AWS
instance, and the access-net space for the MAAS instance (I'm thinking this
is where bindings and '--via' might come in handy).

(I know the openstack charms have to make similar network mitigation, for
which they use the bindings, I must just be looking at it backwards, or not
looking into network bindings which are the key here I think)

For example, my web server charm in AWS will be deployed to a NAT
space/subnet, and will only get a private ip from the AWS subnet. It needs
to give the ip to elasticsearch (deployed in MAAS), and to a loadbalancer
(deploy to different model and space in the same AWS VPC) - this all seems
like there should be no issues with getting it to happen because the web
server charm only has a single ip address to be handing out, but what I'm
after here is a consistent way to be able to retrieve this information at
the charm level - but I think what you are telling me is that if I use the
functionality correctly, then I won't have to do any mitigating at the
charm/network-get level.

Looks like I need to take a deeper dive into the network bindings at the
charm level and see how that functionality fits into the bigger picture to
make the whole picture make sense.

Thanks



> I'd like to understand the use case you have in mind a little better. The
> premise of the network-get output is that charms should not think about
> public
> vs private addresses in terms of what to put into relation data - the other
> remote unit should not be exposed to things in those terms.
>
> There's some doc here to explain things in more detail
>
> https://jujucharms.com/docs/master/developer-network-primitives
>
> The TL;DR: is that charms need to care about:
> - what address do I bind to (listen on)
> - what address do external actors use to connect to me (ingress)
>
> Depending on how the charm has been deployed, and more specifically
> whether it
> is in a cross model relation, the ingress address might be either the
> public or
> private address. Juju will decide based on a number of factors (whether
> models
> are deployed to same region, vpc, other provider specific aspects) and
> populate
> the network-get data accordingly. NOTE: for now Juju will always pick the
> public
> address (if there is one) for the ingress value for cross model relations
> - the
> algorithm to short circuit to a cloud local address is not yet finished.
>
> The content of the bind-addresses block is space aware in that these are
> filtered based on the space with which the specified endpoint is
> associated. The
> 

Re: default network space

2017-10-13 Thread James Beedy
I can give a high-level view of what I feel is a reasonably common use case.

I have infrastructure in two primary locations: AWS and MAAS (at the local
datacenter). The nodes at the datacenter have a direct fiber route via a
virtual private gateway (VPG) in us-west-2, and the instances in AWS/us-west-2
have a direct route via the VPG to the private MAAS networks at the
datacenter. There is no charge for data transfer between the datacenter and
us-west-2 via the fiber VPG route, so it behooves me to use it and have the
AWS instances and MAAS instances talk to each other via private addresses.

At the application level, the component/config goes something like this:

The MAAS nodes at the datacenter have mgmt-net, cluster-net, and access-net
interfaces defined, all of which get IPs from their respective address spaces
in the datacenter MAAS.

I need my elasticsearch charm to configure elasticsearch such that
elasticsearch <-> elasticsearch traffic goes over cluster-net, and web server
(AWS instance) -> elasticsearch traffic goes across the correct space for the
AWS instance and the access-net space for the MAAS instance (I'm thinking this
is where bindings and '--via' might come in handy).

(I know the openstack charms have to do similar network juggling, for which
they use bindings; I must just be looking at it backwards, or not looking
closely enough at network bindings, which I think are the key here.)

For example, my web server charm in AWS will be deployed to a NAT
space/subnet and will only get a private IP from the AWS subnet. It needs to
give that IP to elasticsearch (deployed in MAAS) and to a loadbalancer
(deployed to a different model and space in the same AWS VPC). There should be
no issue making that happen, since the web server charm only has a single IP
address to hand out; what I'm after here is a consistent way to retrieve this
information at the charm level. But I think what you are telling me is that if
I use the functionality correctly, I won't have to do any mitigating at the
charm/network-get level.

Looks like I need to take a deeper dive into network bindings at the charm
level and see how that functionality fits in, to make the bigger picture make
sense.

Thanks



> I'd like to understand the use case you have in mind a little better. The
> premise of the network-get output is that charms should not think about
> public
> vs private addresses in terms of what to put into relation data - the other
> remote unit should not be exposed to things in those terms.
>
> There's some doc here to explain things in more detail
>
> https://jujucharms.com/docs/master/developer-network-primitives
>
> The TL;DR: is that charms need to care about:
> - what address do I bind to (listen on)
> - what address do external actors use to connect to me (ingress)
>
> Depending on how the charm has been deployed, and more specifically
> whether it
> is in a cross model relation, the ingress address might be either the
> public or
> private address. Juju will decide based on a number of factors (whether
> models
> are deployed to same region, vpc, other provider specific aspects) and
> populate
> the network-get data accordingly. NOTE: for now Juju will always pick the
> public
> address (if there is one) for the ingress value for cross model relations
> - the
> algorithm to short circuit to a cloud local address is not yet finished.
>
> The content of the bind-addresses block is space aware in that these are
> filtered based on the space with which the specified endpoint is
> associated. The
> network-get output though should not include any space information
> explicitly -
> this is a concern which a charm should not care about.
>
>
> On 12/10/17 13:35, James Beedy wrote:
> > Hello all,
> >
> > In case you haven't noticed, we now have a network_get() function
> available
> > in charmhelpers.core.hookenv (in master, not stable).
> >
> > Just wanted to have a little discussion about how we are going to be
> > parsing network_get().
> >
> > I first want to address the output of network_get() for an instance
> > deployed to the default vpc, no spaces constraint, and related to another
> > instance in another model also default vpc, no spaces constraint.
> >
> > {'ingress-addresses': ['107.22.129.65'], 'bind-addresses': [{'addresses':
> > [{'cidr': '172.31.48.0/20', 'address': '172.31.51.59'}],
> 'interfacename':
> > 'eth0', 'macaddress': '12:ba:53:58:9c:52'}, {'addresses': [{'cidr': '
> > 252.48.0.0/12', 'address': '252.51.59.1'}], 'interfacename': 'fan-252',
> > 'macaddress': '1e:a2:1e:96:ec:a2'}]}
> >
> >
> > The use case I have in mind here is such that I want to provide the
> private
> > network interface address via relation data in the provides.py of my
> > interface to the relating appliication.
> >
> > This will be able to happen by calling
> > hookenv.network_get('') in the layer that provides the
> > interface in my charm, and passing the 

Re: default network space

2017-10-12 Thread Ian Booth
Copying in the Juju list also

On 12/10/17 22:18, Ian Booth wrote:
> I'd like to understand the use case you have in mind a little better. The
> premise of the network-get output is that charms should not think about public
> vs private addresses in terms of what to put into relation data - the other
> remote unit should not be exposed to things in those terms.
> 
> There's some doc here to explain things in more detail
> 
> https://jujucharms.com/docs/master/developer-network-primitives
> 
> The TL;DR: is that charms need to care about:
> - what address do I bind to (listen on)
> - what address do external actors use to connect to me (ingress)
> 
> Depending on how the charm has been deployed, and more specifically whether it
> is in a cross model relation, the ingress address might be either the public 
> or
> private address. Juju will decide based on a number of factors (whether models
> are deployed to same region, vpc, other provider specific aspects) and 
> populate
> the network-get data accordingly. NOTE: for now Juju will always pick the 
> public
> address (if there is one) for the ingress value for cross model relations - 
> the
> algorithm to short circuit to a cloud local address is not yet finished.
> 
> The content of the bind-addresses block is space aware in that these are
> filtered based on the space with which the specified endpoint is associated. 
> The
> network-get output though should not include any space information explicitly 
> -
> this is a concern which a charm should not care about.
> 
> 
> On 12/10/17 13:35, James Beedy wrote:
>> Hello all,
>>
>> In case you haven't noticed, we now have a network_get() function available
>> in charmhelpers.core.hookenv (in master, not stable).
>>
>> Just wanted to have a little discussion about how we are going to be
>> parsing network_get().
>>
>> I first want to address the output of network_get() for an instance
>> deployed to the default vpc, no spaces constraint, and related to another
>> instance in another model also default vpc, no spaces constraint.
>>
>> {'ingress-addresses': ['107.22.129.65'], 'bind-addresses': [{'addresses':
>> [{'cidr': '172.31.48.0/20', 'address': '172.31.51.59'}], 'interfacename':
>> 'eth0', 'macaddress': '12:ba:53:58:9c:52'}, {'addresses': [{'cidr': '
>> 252.48.0.0/12', 'address': '252.51.59.1'}], 'interfacename': 'fan-252',
>> 'macaddress': '1e:a2:1e:96:ec:a2'}]}
>>
>>
>> The use case I have in mind here is such that I want to provide the private
>> network interface address via relation data in the provides.py of my
>> interface to the relating appliication.
>>
>> This will be able to happen by calling
>> hookenv.network_get('') in the layer that provides the
>> interface in my charm, and passing the output to get the private interface
>> ip data, to then set that in the provides side of the relation.
>>
>> Tracking?
>>
>> The problem:
>>
>> The problem is such that its not so straight forward to just get the
>> private address from the output of network_get().
>>
>> As you can see above, I could filter for network interface name, but thats
>> about the least best way one could go about this.
>>
>> Initially, I assumed the network_get() output would look different if you
>> had specified a spaces constraint when deploying your application, but the
>> output was similar to no spaces, e.g. spaces aren't listed in the output of
>> network_get().
>>
>>
>> All in all, what I'm after is a consistent way to grep either the space an
>> interface is bound to, or to get the public vs private address from the
>> output of network_get(), I think this is true for every provider just about
>> (ones that use spaces at least).
>>
>> Instead of the dict above, I was thinking we might namespace the interfaces
>> inside of what type of interface they are to make it easier to decipher
>> when parsing the network_get().
>>
>> My idea is a schema like the following:
>>
>> {
>>     'private-networks': {
>>         'my-admin-space': {
>>             'addresses': [
>>                 {
>>                     'cidr': '172.31.48.0/20',
>>                     'address': '172.31.51.59'
>>                 }
>>             ],
>>             'interfacename': 'eth0',
>>             'macaddress': '12:ba:53:58:9c:52'
>>         }
>>     },
>>     'public-networks': {
>>         'default': {
>>             'addresses': [
>>                 {
>>                     'cidr': 'publicipaddress/32',
>>                     'address': 'publicipaddress'
>>                 }
>>             ]
>>         }
>>     },
>>     'fan-networks': {
>>         'fan-252': {
>>             # ...
>>         }
>>     }
>> }
>>
>> Where all interfaces bound to spaces are considered private addresses, and
>> with the assumption that if you don't specify a space constraint, your
>> private network interface is bound to the "default" space.
>>
>> The key thing here is the schema structure grouping the interfaces bound to
>> spaces inside a private-networks level in the dict, and the introduction of
>> the fact that if you don't specify a space, you get an address bound to an
>> artificial "default" space.
>>
>> I feel this would make things easier to consume, and interface to from a
>> developer standpoint.
>>
>> Is this making sense? How 

Re: default network space

2017-10-12 Thread Ian Booth
I'd like to understand the use case you have in mind a little better. The
premise of the network-get output is that charms should not think about public
vs private addresses in terms of what to put into relation data - the other
remote unit should not be exposed to things in those terms.

There's some doc here to explain things in more detail

https://jujucharms.com/docs/master/developer-network-primitives

The TL;DR: is that charms need to care about:
- what address do I bind to (listen on)
- what address do external actors use to connect to me (ingress)
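
As a rough, non-authoritative sketch, pulling those two things out of a
network_get() result shaped like the example quoted further down could look like:

def bind_and_ingress(info):
    # Addresses the workload should listen on, per bound interface.
    bind = [addr['address']
            for iface in info.get('bind-addresses', [])
            for addr in iface.get('addresses', [])]
    # Addresses external actors should use to connect (ingress).
    ingress = info.get('ingress-addresses', [])
    return bind, ingress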

Depending on how the charm has been deployed, and more specifically whether it
is in a cross model relation, the ingress address might be either the public or
private address. Juju will decide based on a number of factors (whether models
are deployed to same region, vpc, other provider specific aspects) and populate
the network-get data accordingly. NOTE: for now Juju will always pick the public
address (if there is one) for the ingress value for cross model relations - the
algorithm to short circuit to a cloud local address is not yet finished.

The content of the bind-addresses block is space aware in that these are
filtered based on the space with which the specified endpoint is associated. The
network-get output though should not include any space information explicitly -
this is a concern which a charm should not care about.


On 12/10/17 13:35, James Beedy wrote:
> Hello all,
> 
> In case you haven't noticed, we now have a network_get() function available
> in charmhelpers.core.hookenv (in master, not stable).
> 
> Just wanted to have a little discussion about how we are going to be
> parsing network_get().
> 
> I first want to address the output of network_get() for an instance
> deployed to the default vpc, no spaces constraint, and related to another
> instance in another model also default vpc, no spaces constraint.
> 
> {'ingress-addresses': ['107.22.129.65'], 'bind-addresses': [{'addresses':
> [{'cidr': '172.31.48.0/20', 'address': '172.31.51.59'}], 'interfacename':
> 'eth0', 'macaddress': '12:ba:53:58:9c:52'}, {'addresses': [{'cidr': '
> 252.48.0.0/12', 'address': '252.51.59.1'}], 'interfacename': 'fan-252',
> 'macaddress': '1e:a2:1e:96:ec:a2'}]}
> 
> 
> The use case I have in mind here is such that I want to provide the private
> network interface address via relation data in the provides.py of my
> interface to the relating appliication.
> 
> This will be able to happen by calling
> hookenv.network_get('') in the layer that provides the
> interface in my charm, and passing the output to get the private interface
> ip data, to then set that in the provides side of the relation.
> 
> Tracking?
> 
> The problem:
> 
> The problem is such that its not so straight forward to just get the
> private address from the output of network_get().
> 
> As you can see above, I could filter for network interface name, but thats
> about the least best way one could go about this.
> 
> Initially, I assumed the network_get() output would look different if you
> had specified a spaces constraint when deploying your application, but the
> output was similar to no spaces, e.g. spaces aren't listed in the output of
> network_get().
> 
> 
> All in all, what I'm after is a consistent way to grep either the space an
> interface is bound to, or to get the public vs private address from the
> output of network_get(), I think this is true for every provider just about
> (ones that use spaces at least).
> 
> Instead of the dict above, I was thinking we might namespace the interfaces
> inside of what type of interface they are to make it easier to decipher
> when parsing the network_get().
> 
> My idea is a schema like the following:
> 
> {
>     'private-networks': {
>         'my-admin-space': {
>             'addresses': [
>                 {
>                     'cidr': '172.31.48.0/20',
>                     'address': '172.31.51.59'
>                 }
>             ],
>             'interfacename': 'eth0',
>             'macaddress': '12:ba:53:58:9c:52'
>         }
>     },
>     'public-networks': {
>         'default': {
>             'addresses': [
>                 {
>                     'cidr': 'publicipaddress/32',
>                     'address': 'publicipaddress'
>                 }
>             ]
>         }
>     },
>     'fan-networks': {
>         'fan-252': {
>             # ...
>         }
>     }
> }
> 
> Where all interfaces bound to spaces are considered private addresses, and
> with the assumption that if you don't specify a space constraint, your
> private network interface is bound to the "default" space.
> 
> The key thing here is the schema structure grouping the interfaces bound to
> spaces inside a private-networks level in the dict, and the introduction of
> the fact that if you don't specify a space, you get an address bound to an
> artificial "default" space.
> 
> I feel this would make things easier to consume, and interface to from a
> developer standpoint.
> 
> Is this making sense? How do others feel?
> 
> 
> 

default network space

2017-10-11 Thread James Beedy
Hello all,

In case you haven't noticed, we now have a network_get() function available
in charmhelpers.core.hookenv (in master, not stable).

Just wanted to have a little discussion about how we are going to be
parsing network_get().

I first want to address the output of network_get() for an instance deployed
to the default vpc with no spaces constraint, related to another instance in
another model that is also in the default vpc with no spaces constraint.

{'ingress-addresses': ['107.22.129.65'],
 'bind-addresses': [{'addresses': [{'cidr': '172.31.48.0/20',
                                    'address': '172.31.51.59'}],
                     'interfacename': 'eth0',
                     'macaddress': '12:ba:53:58:9c:52'},
                    {'addresses': [{'cidr': '252.48.0.0/12',
                                    'address': '252.51.59.1'}],
                     'interfacename': 'fan-252',
                     'macaddress': '1e:a2:1e:96:ec:a2'}]}


The use case I have in mind here is that I want to provide the private
network interface address, via relation data in the provides.py of my
interface, to the relating application.

This can happen by calling hookenv.network_get('') in the layer that provides
the interface in my charm, then parsing the output to get the private
interface IP data and setting that on the provides side of the relation.

Tracking?

The problem:

The problem is that it's not so straightforward to just get the
private address from the output of network_get().

As you can see above, I could filter on the network interface name, but that's
about the worst way one could go about it.
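
For illustration, that interface-name filtering would look something like this
against the output above (a sketch only; 'eth0' is just the name from the example):

def private_address(info, ifname='eth0'):
    # Fragile: relies on a provider-specific interface name rather than on
    # spaces or on any private/public distinction.
    for iface in info.get('bind-addresses', []):
        if iface.get('interfacename') == ifname:
            return iface['addresses'][0]['address']
    return None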

Initially, I assumed the network_get() output would look different if you
had specified a spaces constraint when deploying your application, but the
output was the same as with no spaces, i.e. spaces aren't listed in the output
of network_get().


All in all, what I'm after is a consistent way to grep either the space an
interface is bound to, or the public vs private address, from the output of
network_get(). I think this is true for just about every provider (at least
the ones that use spaces).

Instead of the dict above, I was thinking we might namespace the interfaces
by what type of interface they are, to make it easier to decipher when parsing
the network_get() output.

My idea is a schema like the following:

{
    'private-networks': {
        'my-admin-space': {
            'addresses': [
                {
                    'cidr': '172.31.48.0/20',
                    'address': '172.31.51.59'
                }
            ],
            'interfacename': 'eth0',
            'macaddress': '12:ba:53:58:9c:52'
        }
    },
    'public-networks': {
        'default': {
            'addresses': [
                {
                    'cidr': 'publicipaddress/32',
                    'address': 'publicipaddress'
                }
            ]
        }
    },
    'fan-networks': {
        'fan-252': {
            # ...
        }
    }
}

Here, all interfaces bound to spaces are considered private addresses, with
the assumption that if you don't specify a space constraint, your private
network interface is bound to the "default" space.

The key thing here is the schema structure grouping the interfaces bound to
spaces inside a private-networks level in the dict, and the introduction of
the fact that if you don't specify a space, you get an address bound to an
artificial "default" space.

I feel this would make things easier to consume and interface with from a
developer standpoint.
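
For example, under this proposed schema the lookups collapse to simple key
paths (illustrative only, using the example names from the sketch above):

def addresses(out):
    # 'out' is a network_get() result shaped like the proposed schema above;
    # 'my-admin-space' is the example space name from that sketch.
    private = out['private-networks']['my-admin-space']['addresses'][0]['address']
    public = out['public-networks']['default']['addresses'][0]['address']
    return private, public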

Is this making sense? How do others feel?