[Yahoo-eng-team] [Bug 1703510] [NEW] ML2-Linuxbridge does not keep routes on physical interface

2017-07-10 Thread Aihua Edward Li
Public bug reported:

When using an ML2 Linuxbridge flat network, the bridge is dynamically created 
when the first VM using the flat network is provisioned, and dynamically 
deleted when the last VM using the flat network is deleted.
During the switch-over, the configuration on the physical interface is supposed 
to move from the physical device to the bridge device.
In the current ML2 Linuxbridge implementation, however, only the IP addresses 
and the gateway are carried over; the routes associated with the physical 
interface are not moved and are lost.
When OpenStack components on the host reach the OpenStack controller through 
one of those routes, connectivity is lost after the first VM is provisioned.

To reproduce the issue, add an arbitrary route on the underlying physical 
interface, e.g. eth0:
1. Run "route add -net 10.75.0.0/16 gw 20.20.20.1" before starting the 
ML2 Linuxbridge agent.
2. Start the ML2 Linuxbridge agent and provision a new VM on the flat network.
3. Run "ip route list" and observe that the 10.75.0.0/16 route has disappeared.

This happens from the kilo release through the master branch.

The relevant code on master branch is

neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py

def update_interface_ip_details(self, destination, source):
    # Returns True if there were IPs or a gateway moved
    updated = False
    for ip_version in (constants.IP_VERSION_4, constants.IP_VERSION_6):
        ips, gateway = self.get_interface_details(source, ip_version)
        if ips or gateway:
            self._update_interface_ip_details(destination, source, ips,
                                              gateway)
            updated = True

    return updated

Only the IPs and the gateway are considered; the routes associated with the
interface are not handled at all.
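
One possible direction for a fix, sketched below purely as an illustration (the
move_routes helper and the direct iproute2 calls are assumptions for this
example, not neutron code, which would normally go through
neutron.agent.linux.ip_lib): capture the non-default routes still attached to
the physical device and re-create them on the bridge.

import subprocess


def move_routes(source, destination):
    # Hypothetical sketch only.
    output = subprocess.check_output(
        ['ip', 'route', 'show', 'dev', source]).decode()
    for line in output.splitlines():
        route = line.strip()
        # Skip empty lines and the default route; the gateway is already
        # migrated by the existing IP/gateway handling.
        if not route or route.startswith('default'):
            continue
        # Re-create the same route, now pointing at the bridge device.
        subprocess.check_call(
            ['ip', 'route', 'replace'] + route.split() + ['dev', destination])

Such a helper would have to read the routes before the addresses are removed
from the physical device, since the kernel drops a device's routes at that
point.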

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1703510

Title:
  ML2-Linuxbridge does not keep routes on physical interface

Status in neutron:
  New

Bug description:
  When using an ML2 Linuxbridge flat network, the bridge is dynamically created 
when the first VM using the flat network is provisioned, and dynamically 
deleted when the last VM using the flat network is deleted.
  During the switch-over, the configuration on the physical interface is 
supposed to move from the physical device to the bridge device.
  In the current ML2 Linuxbridge implementation, however, only the IP addresses 
and the gateway are carried over; the routes associated with the physical 
interface are not moved and are lost.
  When OpenStack components on the host reach the OpenStack controller through 
one of those routes, connectivity is lost after the first VM is provisioned.

  To reproduce the issue, add an arbitrary route on the underlying physical 
interface, e.g. eth0:
  1. Run "route add -net 10.75.0.0/16 gw 20.20.20.1" before starting the 
ML2 Linuxbridge agent.
  2. Start the ML2 Linuxbridge agent and provision a new VM on the flat network.
  3. Run "ip route list" and observe that the 10.75.0.0/16 route has 
disappeared.

  This happens from the kilo release through the master branch.

  The relevant code on master branch is

  neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py

  def update_interface_ip_details(self, destination, source):
      # Returns True if there were IPs or a gateway moved
      updated = False
      for ip_version in (constants.IP_VERSION_4, constants.IP_VERSION_6):
          ips, gateway = self.get_interface_details(source, ip_version)
          if ips or gateway:
              self._update_interface_ip_details(destination, source, ips,
                                                gateway)
              updated = True

      return updated

  Only the IPs and the gateway are considered; the routes associated with the
  interface are not handled at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1703510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605713] Re: neutron subnet.shared attributes out of sync with parent network and no available CLI to update subnet.shared

2017-01-19 Thread Aihua Edward Li
Hi, Srilanth,

O.K. I just traced the git log and found that Subnet.shared was removed
in stable/liberty.

Let's close it.

 class Subnet(model_base.BASEV2, HasId, HasTenant):
@@ -200,12 +191,12 @@ class Subnet(model_base.BASEV2, HasId, HasTenant):
 dns_nameservers = orm.relationship(DNSNameServer,
backref='subnet',
cascade='all, delete, delete-orphan',
+   order_by=DNSNameServer.order,
lazy='joined')
 routes = orm.relationship(SubnetRoute,
   backref='subnet',
   cascade='all, delete, delete-orphan',
   lazy='joined')
-shared = sa.Column(sa.Boolean)
 ipv6_ra_mode = sa.Column(sa.Enum(constants.IPV6_SLAAC,
  constants.DHCPV6_STATEFUL,
  constants.DHCPV6_STATELESS,
@@ -214,6 +205,7 @@ class Subnet(model_base.BASEV2, HasId, HasTenant):
   constants.DHCPV6_STATEFUL,
   constants.DHCPV6_STATELESS,
   name='ipv6_address_modes'), nullable=True)


** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605713

Title:
  neutron subnet.shared attributes out of sync with parent network and
  no available CLI to update subnet.shared

Status in neutron:
  Invalid

Bug description:
  1. Create a network with shared=0.
  2. Create a subnet for the network.
  3. Update the network's shared attribute.
  4. Examine the shared column for the subnet in the neutron database: it remains 0 
while the shared column in the networks table has been updated to 1.

  e.g.

  neutron net-create --provider:physical_network phsynet1 
--provider:network_type flat net-1
  neutron subnet-create net-1 192.168.30.0/24
  neutron net-update --shared net-1

  Now examine the database directly for the subnet 192.168.30.0: its
  shared attribute remains 0.

  There is no CLI available to update it.

  Versions tested:
  neutron stable/kilo
  neutron client: 2.6.0
  Linux distro: Ubuntu 12.04, Ubuntu 14.04

  Expected: one of the following solutions:
  1) remove the subnet.shared attribute and use the parent network's shared attribute, or
  2) provide a mechanism to update subnet.shared.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1605713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609258] Re: neutron ml2 local vlan assignment is nondeterministic

2016-10-03 Thread Aihua Edward Li
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1609258

Title:
  neutron ml2 local vlan assignment is nondeterministic

Status in neutron:
  Invalid

Bug description:
  When the neutron OVS agent assigns a local VLAN to a logical network, the 
assignment is non-deterministic.
  For example, in our deployment we typically have one or two networks 
associated with one hypervisor. We expect the local VLAN assignment to use tag 
1 or 2, but as ports get deleted and VLANs reclaimed, the assigned tag keeps 
increasing unexpectedly.
  While there is no functional impact on the data path, for a large-scale 
deployment with thousands of hypervisors we would expect the assignment to be 
deterministic and consistent across all hypervisors.

  Pre-condition:

  A neutron network is already configured and associated with the hypervisor 
under test.
  Make sure the hypervisor is clean, with no VMs landed on it.

  # service neutron-plugin-openvswitch-agent restart
  # ovs-vsctl show | grep tag
  

  Steps to reproduce the issue.
  1. boot one VM on the given hypervisor
  $ nova boot --flavor g1-standard-2 --image  cr_sanity 
--availability-zone nova:h5wy282

  2. on hypervisor
  # ovs-vsctl show | grep tag
  tag: 1

  ovs-agent log:
  INFO [neutron.plugins.openvswitch.agent.ovs_neutron_agent] 528 [-OVS-AGENT-] 
Assigning 1 as local vlan for net-id=ffd84cfd-e7d9-435c-a7c4-61600974d0af

  3. delete the first VM.
  $ nova delete 

  INFO [neutron.plugins.openvswitch.agent.ovs_neutron_agent] 629 [-OVS-
  AGENT-] Reclaiming vlan = 1 from net-id = ffd84cfd-e7d9-435c-
  a7c4-61600974d0af

  4. boot another VM on the same hypervisor
  $ nova boot --flavor g1-standard-2 --image  cr_sanity 
--availability-zone nova:h5wy282

  5. on hypervisor
  # ovs-vsctl show | grep tag
  tag: 2

  INFO [neutron.plugins.openvswitch.agent.ovs_neutron_agent] 528 [-OVS-
  AGENT-] Assigning 2 as local vlan for net-id=ffd84cfd-e7d9-435c-
  a7c4-61600974d0af

  Expected:
  VLAN 1 was reclaimed and should be assigned again when the VM is spawned the 
second time.

  Actual:
  The second time the VM is created, VLAN 2 is assigned.

  OpenStack version: stable/kilo
  Linux: Ubuntu 12.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1609258/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605713] Re: neutron subnet.shared attributes out of sync with parent network and no available CLI to update subnet.shared

2016-08-07 Thread Aihua Edward Li
Please follow the steps exactly. The subnet's shared attribute follows the
network's initial setting, but does not get updated when you update the
network's shared attribute.


** Changed in: neutron
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605713

Title:
  neutron subnet.shared attributes out of sync with parent network and
  no available CLI to update subnet.shared

Status in neutron:
  New

Bug description:
  1. Create a network with shared=0.
  2. Create a subnet for the network.
  3. Update the network's shared attribute.
  4. Examine the shared column for the subnet in the neutron database: it remains 0 
while the shared column in the networks table has been updated to 1.

  e.g.

  neutron net-create --provider:physical_network phsynet1 
--provider:network_type flat net-1
  neutron subnet-create net-1 192.168.30.0/24
  neutron net-update --shared net-1

  Now examine the database directly for the subnet 192.168.30.0: its
  shared attribute remains 0.

  There is no CLI available to update it.

  Versions tested:
  neutron stable/kilo
  neutron client: 2.6.0
  Linux distro: Ubuntu 12.04, Ubuntu 14.04

  Expected: one of the following solutions:
  1) remove the subnet.shared attribute and use the parent network's shared attribute, or
  2) provide a mechanism to update subnet.shared.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1605713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609258] [NEW] neutron ml2 local vlan assignment is nondeterministic

2016-08-03 Thread Aihua Edward Li
Public bug reported:

When the neutron OVS agent assigns a local VLAN to a logical network, the 
assignment is non-deterministic.
For example, in our deployment we typically have one or two networks associated 
with one hypervisor. We expect the local VLAN assignment to use tag 1 or 2, 
but as ports get deleted and VLANs reclaimed, the assigned tag keeps 
increasing unexpectedly.
While there is no functional impact on the data path, for a large-scale 
deployment with thousands of hypervisors we would expect the assignment to be 
deterministic and consistent across all hypervisors.
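
A minimal sketch of the kind of allocation we have in mind (purely
illustrative; the class and method names below are assumptions, not the OVS
agent's code): always hand out the lowest free tag, so hypervisors carrying the
same one or two networks converge on the same small tag numbers.

class DeterministicVlanManager(object):
    # Illustrative allocator only, not the agent's implementation.
    def __init__(self, first=1, last=4094):
        self._free = set(range(first, last + 1))
        self._by_net = {}

    def allocate(self, net_id):
        # Reuse an existing assignment, otherwise take the lowest free tag.
        if net_id in self._by_net:
            return self._by_net[net_id]
        vlan = min(self._free)
        self._free.remove(vlan)
        self._by_net[net_id] = vlan
        return vlan

    def reclaim(self, net_id):
        # Return the tag to the pool so the next allocation gets it back.
        self._free.add(self._by_net.pop(net_id))

With such an allocator, the reproduction steps below would hand out tag 1 both
times, instead of 1 and then 2.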


Pre-condition: 

A neutron network is already configured and associated with the hypervisor 
under test.
Make sure the hypervisor is clean, with no VMs landed on it.
service neutron-plugin-openvswitch-agent restart
ovs-vsctl show | grep tag


Steps to reproduce the issue.
1. boot one VM on the given hypervisor
nova boot --flavor g1-standard-2 --image  cr_sanity 
--availability-zone nova:h5wy282

2. on hypervisor
ovs-vsctl show | grep tag
tag: 1

ovs-agent log:
INFO [neutron.plugins.openvswitch.agent.ovs_neutron_agent] 528 [-OVS-AGENT-] 
Assigning 1 as local vlan for net-id=ffd84cfd-e7d9-435c-a7c4-61600974d0af

3. delete the first VM.
nova delete 

INFO [neutron.plugins.openvswitch.agent.ovs_neutron_agent] 629 [-OVS-
AGENT-] Reclaiming vlan = 1 from net-id = ffd84cfd-e7d9-435c-
a7c4-61600974d0af


4. boot another VM on the same hypervisor
nova boot --flavor g1-standard-2 --image  cr_sanity 
--availability-zone nova:h5wy282

5. on hypervisor
ovs-vsctl show | grep tag
tag: 2

INFO [neutron.plugins.openvswitch.agent.ovs_neutron_agent] 528 [-OVS-
AGENT-] Assigning 2 as local vlan for net-id=ffd84cfd-e7d9-435c-
a7c4-61600974d0af

Expected:
VLAN 1 was reclaimed and should be assigned again when the VM is spawned the second time.

Actual:
The second time the VM is created, VLAN 2 is assigned.

OpenStack version: stable/kilo
Linux: Ubuntu 12.04

** Affects: neutron
 Importance: Undecided
 Assignee: Aihua Edward Li (aihuaedwardli)
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1609258

Title:
  neutron ml2 local vlan assignment is nondeterministic

Status in neutron:
  New

Bug description:
  When the neutron OVS agent assigns a local VLAN to a logical network, the 
assignment is non-deterministic.
  For example, in our deployment we typically have one or two networks 
associated with one hypervisor. We expect the local VLAN assignment to use tag 
1 or 2, but as ports get deleted and VLANs reclaimed, the assigned tag keeps 
increasing unexpectedly.
  While there is no functional impact on the data path, for a large-scale 
deployment with thousands of hypervisors we would expect the assignment to be 
deterministic and consistent across all hypervisors.

  
  Pre-condition: 

  A neutron network is already configured and associated with the hypervisor 
under test.
  Make sure the hypervisor is clean, with no VMs landed on it.
  service neutron-plugin-openvswitch-agent restart
  ovs-vsctl show | grep tag
  

  Steps to reproduce the issue.
  1. boot one VM on the given hypervisor
  nova boot --flavor g1-standard-2 --image  cr_sanity 
--availability-zone nova:h5wy282

  2. on hypervisor
  ovs-vsctl show | grep tag
  tag: 1

  ovs-agent log:
  INFO [neutron.plugins.openvswitch.agent.ovs_neutron_agent] 528 [-OVS-AGENT-] 
Assigning 1 as local vlan for net-id=ffd84cfd-e7d9-435c-a7c4-61600974d0af

  3. delete the first VM.
  nova delete 

  INFO [neutron.plugins.openvswitch.agent.ovs_neutron_agent] 629 [-OVS-
  AGENT-] Reclaiming vlan = 1 from net-id = ffd84cfd-e7d9-435c-
  a7c4-61600974d0af

  
  4. boot another VM on the same hypervisor
  nova boot --flavor g1-standard-2 --image  cr_sanity 
--availability-zone nova:h5wy282

  5. on hypervisor
  ovs-vsctl show | grep tag
  tag: 2

  INFO [neutron.plugins.openvswitch.agent.ovs_neutron_agent] 528 [-OVS-
  AGENT-] Assigning 2 as local vlan for net-id=ffd84cfd-e7d9-435c-
  a7c4-61600974d0af

  Expected:
  VLAN 1 was reclaimed and should be assigned again when the VM is spawned the 
second time.

  Actual:
  The second time the VM is created, VLAN 2 is assigned.

  OpenStack version: stable/kilo
  Linux: Ubuntu 12.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1609258/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605713] [NEW] neutron subnet.shared attributes out of sync with parent network and no available CLI to update subnet.shared

2016-07-22 Thread Aihua Edward Li
Public bug reported:

1. Create a network with shared=0.
2. Create a subnet for the network.
3. Update the network's shared attribute.
4. Examine the shared column for the subnet in the neutron database: it remains 0 
while the shared column in the networks table has been updated to 1.

e.g.

neutron net-create --provider:physical_network phsynet1 --provider:network_type 
flat net-1
neutron subnet-create net-1 192.168.30.0/24
neutron net-update --shared net-1

Now examine the database directly for the subnet 192.168.30.0: its
shared attribute remains 0.
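
For reference, a minimal sketch of that direct check with SQLAlchemy (the
connection URL below is a placeholder for your own database settings; in kilo
the subnets table still carries a shared column):

from sqlalchemy import create_engine, text

# Placeholder credentials; use the connection settings from neutron.conf.
engine = create_engine('mysql://user:password@controller/neutron')
with engine.connect() as conn:
    row = conn.execute(
        text("SELECT id, cidr, shared FROM subnets WHERE cidr = :cidr"),
        {"cidr": "192.168.30.0/24"}).fetchone()
    # shared stays 0 even after 'neutron net-update --shared net-1'
    print(row)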

There is no CLI available to update it.

Versions tested:
neutron stable/kilo
neutron client: 2.6.0
Linux distro: Ubuntu 12.04, Ubuntu 14.04

Expected: one of the following solutions:
1) remove the subnet.shared attribute and use the parent network's shared attribute, or
2) provide a mechanism to update subnet.shared.

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  1. create a network with shared=0.
  2. create a subnet for the network
  3. update the network with shared attributes.
  4. examine the shared in neutron subnet db, the shared remains 0 while the 
shared in network table is updated to 1.
  
  e.g.
  
  neutron net-create --provider:physical_network phsynet1 
--provider:network_type flat net-1
- neutron subnet-create 192.168.30.0/24
+ neutron subnet-create net-1 192.168.30.0/24
  neutron net-update --shared net-1
  
  now examine the database directly for the subnet 192.168.30.0, its
  shared attribute remains as 0.
  
  There is no CLI available to update it.
  
  versions tested:
  neutron stable/kilo
  neutron client: 2.6.0
  linux distro: Linux 12.04, Linux 14.04
  
  Expected, one of the following solutions:
  1). remove subnet.shared attribute and use its parent shared attribute.
  2). have a mechanism to update subnet.shared

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605713

Title:
  neutron subnet.shared attributes out of sync with parent network and
  no available CLI to update subnet.shared

Status in neutron:
  New

Bug description:
  1. Create a network with shared=0.
  2. Create a subnet for the network.
  3. Update the network's shared attribute.
  4. Examine the shared column for the subnet in the neutron database: it remains 0 
while the shared column in the networks table has been updated to 1.

  e.g.

  neutron net-create --provider:physical_network phsynet1 
--provider:network_type flat net-1
  neutron subnet-create net-1 192.168.30.0/24
  neutron net-update --shared net-1

  Now examine the database directly for the subnet 192.168.30.0: its
  shared attribute remains 0.

  There is no CLI available to update it.

  Versions tested:
  neutron stable/kilo
  neutron client: 2.6.0
  Linux distro: Ubuntu 12.04, Ubuntu 14.04

  Expected: one of the following solutions:
  1) remove the subnet.shared attribute and use the parent network's shared attribute, or
  2) provide a mechanism to update subnet.shared.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1605713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597596] [NEW] network not always cleaned up when spawning VMs

2016-06-29 Thread Aihua Edward Li
Public bug reported:

Here is the scenario:
1) Nova scheduler/conductor selects nova-compute A to spin up a VM.
2) Nova compute A tries to spin up the VM, but the process fails and generates 
a RescheduledException.
3) In the reschedule path, the network resources are properly cleaned up only 
when retry is None; when retry is not None, the network is not cleaned up and 
the port information still stays with the VM (see the sketch after this list).
4) The Nova conductor is notified about the failure and selects nova-compute B 
to spin up the VM.
5) Nova compute B spins up the VM successfully. However, in the 
instance_info_cache, the network_info shows two ports allocated for the VM: one 
from the original network A associated with nova-compute A and one from 
network B associated with nova-compute B.
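
A self-contained toy illustrating the cleanup gap described in step 3 (the
names RescheduledException, cleanup_networks and build_and_run here are
illustrative only, not nova's actual code):

class RescheduledException(Exception):
    pass


def build_and_run(spawn, cleanup_networks, filter_properties):
    try:
        return spawn()
    except RescheduledException:
        retry = filter_properties.get('retry')
        if retry is None:
            # Reported behaviour: ports are released only when no retry is
            # scheduled, so on a reschedule the old port stays with the VM.
            cleanup_networks()
        # Suggested behaviour: call cleanup_networks() unconditionally here,
        # before the instance is handed back to the conductor.
        raise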

To simulate the case, raise a fake exception in
_do_build_and_run_instance in nova-compute A:

diff --git a/nova/compute/manager.py b/nova/compute/manager.py
index ac6d92c..8ce8409 100644
--- a/nova/compute/manager.py
+++ b/nova/compute/manager.py
@@ -1746,6 +1746,7 @@ class ComputeManager(manager.Manager):
 filter_properties)
 LOG.info(_LI('Took %0.2f seconds to build instance.'),
  timer.elapsed(), instance=instance)
+raise exception.RescheduledException( instance_uuid=instance.uuid, 
reason="simulated-fault")
 return build_results.ACTIVE
 except exception.RescheduledException as e:
 retry = filter_properties.get('retry')

environments: 
*) nova master branch
*) ubuntu 12.04
*) kvm
*) bridged network.

** Affects: nova
 Importance: Undecided
 Assignee: Aihua Edward Li (aihuaedwardli)
 Status: New

** Summary changed:

- network not alwasy cleaned up when spawning VMs
+ network not always cleaned up when spawning VMs

** Changed in: nova
 Assignee: (unassigned) => Aihua Edward Li (aihuaedwardli)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1597596

Title:
  network not always cleaned up when spawning VMs

Status in OpenStack Compute (nova):
  New

Bug description:
  Here is the scenario:
  1) Nova scheduler/conductor selects nova-compute A to spin up a VM.
  2) Nova compute A tries to spin up the VM, but the process fails and 
generates a RescheduledException.
  3) In the reschedule path, the network resources are properly cleaned up only 
when retry is None; when retry is not None, the network is not cleaned up and 
the port information still stays with the VM.
  4) The Nova conductor is notified about the failure and selects nova-compute B 
to spin up the VM.
  5) Nova compute B spins up the VM successfully. However, in the 
instance_info_cache, the network_info shows two ports allocated for the VM: one 
from the original network A associated with nova-compute A and one from 
network B associated with nova-compute B.

  To simulate the case, raise a fake exception in
  _do_build_and_run_instance in nova-compute A:

  diff --git a/nova/compute/manager.py b/nova/compute/manager.py
  index ac6d92c..8ce8409 100644
  --- a/nova/compute/manager.py
  +++ b/nova/compute/manager.py
  @@ -1746,6 +1746,7 @@ class ComputeManager(manager.Manager):
   filter_properties)
   LOG.info(_LI('Took %0.2f seconds to build instance.'),
timer.elapsed(), instance=instance)
  +raise exception.RescheduledException( 
instance_uuid=instance.uuid, reason="simulated-fault")
   return build_results.ACTIVE
   except exception.RescheduledException as e:
   retry = filter_properties.get('retry')

  environments: 
  *) nova master branch
  *) ubuntu 12.04
  *) kvm
  *) bridged network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1597596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503462] [NEW] REST API extension to return IP Margin

2015-10-06 Thread Aihua Edward Li
Public bug reported:

It is desirable to have an API that returns the number of available IP
addresses for each network. This information can be used to help nova
schedule instance creation on networks with enough margin. It can also be
used by monitoring tools to provide capacity monitoring, resulting in better
resource planning.

The proposed API is a GET on /v2.0/, and the returned data is in
the form of

{ "network-1-uuid": count1, "network-2-uuid": count2 }

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503462

Title:
  REST API extension to return IP Margin

Status in neutron:
  New

Bug description:
  It is desirable to have an API that returns the number of available IP
  addresses for each network. This information can be used to help nova
  schedule instance creation on networks with enough margin. It can also be
  used by monitoring tools to provide capacity monitoring, resulting in
  better resource planning.

  The proposed API is a GET on /v2.0/, and the returned data is in
  the form of

  { "network-1-uuid": count1, "network-2-uuid": count2 }

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1503462/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp