Public bug reported:
During the verification of this RFE [1] I've encountered improper
behavior.
When a subnet update decreases the number of IPs in the range while a
FloatingIP is already in use, the update succeeds and the
excluded FloatingIP continues to function.
From the customer point
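The scenario can be sketched with the legacy neutron CLI; this is a hedged sketch, and every ID, name, and address below is a placeholder, not taken from the report:

```shell
# Create a floating IP from the external network's pool and associate
# it with a port (all IDs and names are placeholders).
neutron floatingip-create <EXTERNAL_NET>
neutron floatingip-associate <FLOATINGIP_ID> <PORT_ID>

# Shrink the subnet's allocation pool so the floating IP falls outside
# the new range; per the report this update succeeds and the excluded
# floating IP keeps working.
neutron subnet-update <SUBNET_ID> \
    --allocation-pool start=10.0.0.2,end=10.0.0.10
```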
Public bug reported:
Updating a subnet's allocation-pool is not available via Horizon.
Please see the following RFE: https://bugs.launchpad.net/neutron/+bug/572
** Affects: neutron
Importance: Undecided
Status: New
** Changed in: horizon
Status: In Progress => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1394051
Title:
Can't display port list on a shared
Public bug reported:
During scale tests (80 instances) a few instances were created and their
status is ACTIVE (nova list/show) even though their TAP devices are DOWN
and they don't have an IP address.
Version - Icehouse with RHEL7
GRE+ML2 - All-In-One+ Compute node
openstack-nova-cert-2014.1-7.el7ost
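A quick way to see the mismatch (a sketch; the instance ID and tap device name are placeholders, the tap name being derived from the Neutron port ID on the compute node):

```shell
# Nova reports the instance as ACTIVE...
nova show <INSTANCE_ID> | grep -w status

# ...while the corresponding tap device on the compute node is DOWN.
ip link show <TAP_DEVICE> | grep -o 'state [A-Z]*'
```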
Public bug reported:
ovs_neutron_plugin doesn't implement multiple RPC workers; only ML2
supports multiple RPC workers.
Version - Icehouse with RHEL7
All-In-One+ Compute node
openstack-nova-cert-2014.1-7.el7ost.noarch
openstack-neutron-openvswitch-2014.1-35.el7ost.noarch
openstack-nova-compute-
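For comparison, with the ML2 plugin the number of RPC workers is set in neutron.conf; the value below is only an example, and per this report the setting is not honored by ovs_neutron_plugin:

```ini
[DEFAULT]
# Number of separate RPC worker processes (supported by ML2,
# not implemented by ovs_neutron_plugin per this report).
rpc_workers = 4
```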
Public bug reported:
The setup: Controller, Compute and 2 Network nodes
KILO - VRRP on RHEL7.1
Trying to delete all "alive" namespaces (alive - a router with an interface
attached to the network).
The command succeeded but a lot of error messages appear.
neutron-netns-cleanup --config-file=/etc/n
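For context, a sketch of the cleanup flow; the config-file path below is the conventional default and is an assumption, since the report's own path is truncated above:

```shell
# List the "alive" router/DHCP namespaces the cleanup will touch.
ip netns list | grep -E 'qrouter-|qdhcp-'

# --force removes namespaces even when interfaces are still attached;
# the config path is the conventional default (an assumption here).
neutron-netns-cleanup --config-file=/etc/neutron/neutron.conf --force
```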
Public bug reported:
Version: Newton
Tested with 3 controllers with dhcp agent on each controller.
Returns all available networks instead of only the internal networks that are
managed by this agent.
I've created a new network with a new subnet with --dhcp-disable and still
I got this network in the list
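The agent-scoped listing in question can be exercised like this (a sketch; the agent ID is a placeholder):

```shell
# Find the DHCP agents on the three controllers.
neutron agent-list | grep "DHCP agent"

# Per the report this returns every network (including ones created
# with DHCP disabled) instead of only the networks hosted by this agent.
neutron net-list-on-dhcp-agent <DHCP_AGENT_ID>
```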
Public bug reported:
The qos negative test fails [1] on top of the Newton release with the
following error:
"Request Failed: internal server error while processing your request.
Neutron server returns request_ids:
['req-c8fadf15-dce8-4c2f-943a-3cedc67f']"
Instead of "Bad Request" error.
[1]
neut
Public bug reported:
Tested on Liberty
Steps to reproduce:
Enable nova-neutron notifications:
On nova.conf:
vif_plugging_is_fatal = True
vif_plugging_timeout = 300
On neutron.conf:
notify_nova_on_port_data_changes = True
notify_nova_on_port_status_changes = True
1) Delete an instance
2)Deleti