This causes resource leakage in Tempest: HA networks aren't cleaned up, and
when using a limited VNI range or VLAN range, tests will start failing once
the limit is reached.
Suggestion:
Introduce a cleaner in the tenant (not router!) cleanup code:
1. search for the HA network of this tenant (neutron
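A hedged sketch of what that lookup could look like, assuming the
"HA network tenant <tenant-id>" naming convention Neutron uses for these networks:
# neutron net-list | grep "HA network tenant <tenant-id>"
# neutron net-delete <ha-network-id>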
IMO this is actually a Neutron bug and Tempest reveals a race.
We should add a waiter for the update to verify it's really a race.
** Also affects: neutron
Importance: Undecided
Status: New
--
Public bug reported:
Legacy routers do.
HA routers don't
** Affects: horizon
Importance: Undecided
Status: New
** Tags: l3-ha
** Tags added: l3-ha
--
Public bug reported:
When connecting a VM to more than one network interface, defaults of the
second subnet will override user-defined settings of the first (usually
primary) interface.
Reproduce:
1. create a VM with 2 network interfaces where:
eth0 - subnet with a GW, and a custom DNS
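A hedged reproduction sketch (the truncated eth1 details assumed to be a plain
subnet without custom settings; names, CIDRs and the DNS address illustrative):
# neutron net-create net1
# neutron subnet-create net1 10.0.1.0/24 --gateway 10.0.1.1 --dns-nameserver 10.35.28.28
# neutron net-create net2
# neutron subnet-create net2 10.0.2.0/24
# nova boot --flavor m1.small --image cirros --nic net-id=<net1-id> --nic net-id=<net2-id> vm1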
Public bug reported:
In neutron.conf, max_l3_agents_per_router and min_l3_agents_per_router should
be validated
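For example, a configuration like the following is accepted today even though
min > max makes no sense (values illustrative):
[DEFAULT]
l3_ha = True
max_l3_agents_per_router = 2
min_l3_agents_per_router = 3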
** Affects: neutron
Importance: Undecided
Status: New
** Tags: l3-ha low-hanging-fruit
** Tags added: low-hanging-fruit
--
Public bug reported:
Ports created when neutron-openvswitch-agent is down are in status DOWN
with binding:vif_type=binding_failed, which is as it should be. When the
agent is restarted it should be able to recreate the ports according to
the DB, but instead it logs a WARNING and creates the port
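A hedged way to inspect the resulting port state:
# neutron port-show <port-id> -F status -F binding:vif_type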
Public bug reported:
Trying to filter agent list to display only alive/dead agents returns
the full list
[root@RHEL7Server ~(keystone_admin)]# neutron --version
2.3.9
[root@RHEL7Server ~(keystone_admin)]# neutron --debug agent-list --alive False
DEBUG: keystoneclient.session REQ: curl -i -X
Public bug reported:
Failure to create an HA router is reported in neutron.log but doesn't show
up in the error message:
[root@RHEL7Server ~(keystone_admin)]# neutron router-create my-first-ha-router
Internal Server Error (HTTP 500) (Request-ID:
req-9185be70-a028-438a-afd3-89ce3932a128)
from
Public bug reported:
Setting router admin_state_up=False disables the router and thus
connectivity to all floating IPs routed through it, yet the status of
the floating IP remains ACTIVE.
How to reproduce:
1. create a VM attached external network via router
2. attach floating IP to VM
3. update
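A hedged sketch of the relevant steps (field syntax illustrative):
# neutron router-update <router-id> --admin_state_up False
# neutron floatingip-show <floatingip-id> -F status
The last command still reports ACTIVE even though traffic no longer flows.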
The bug was fixed in the Juno release. I don't think it needs backporting to
Juno.
** Changed in: neutron/juno
Status: New => Invalid
--
I still see this problem even when I'm using the patch.
I fear this might have even gotten worse, since once I pass the 63 limit (and
delete all) even a single VM fails to boot.
** Changed in: neutron
Status: Invalid => Confirmed
--
I think the status was changed to Opinion by accident. It should be
Confirmed
** Changed in: nova
Status: Opinion => Confirmed
--
** Also affects: oslo.messaging
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1372049
Title:
Launching multiple VMs fails over 63 instances
Public bug reported:
Assuming l3-agents have one NIC (e.g. eth0) assigned to tenant-network (tunnel)
traffic and another (e.g. eth1) assigned to the external network:
disconnecting eth0 would prevent keepalived reports and trigger one of the
slaves to become master. However, since the error is outside
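A hedged way to check which agent is currently master, assuming the standard
qrouter namespace layout: the node whose namespace still holds the VIP
addresses is the master.
# ip netns exec qrouter-<router-id> ip addr show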
Public bug reported:
RHEL-7.0
Icehouse
All-In-One
Booting 63 VMs at once (with num-instances attribute) works fine.
The setup is able to support up to 100 VMs booted in bulks of ~50.
Booting 100 VMs at once, without Neutron network, so no network for the
VMs, works fine.
Booting 64 (and more) VMs
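For reference, the bulk boot presumably looks something like this (flavor and
image illustrative):
# nova boot --flavor m1.tiny --image cirros --nic net-id=<net-id> --num-instances 64 bulk-vm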
** Attachment added: nova, neutron debug logs
https://bugs.launchpad.net/neutron/+bug/1372049/+attachment/4210365/+files/64_vms_boot.tar
** Also affects: nova
Importance: Undecided
Status: New
--
Public bug reported:
Currently all instances have the same priority (hard-coded 50).
Admin should be able to assign priority to l3-agents so that the master will be
chosen accordingly (suppose that you have an agent with less bandwidth than
others; you would like it to have the least amount
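For context, the generated keepalived config is roughly of this shape (sketch;
a per-agent setting could feed the priority value instead of the constant):
vrrp_instance VR_1 {
    state BACKUP
    priority 50
}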
Public bug reported:
When cloud admin is shutting down l3-agent via API (without evacuating routers
first) it stands to reason he would like traffic to stop routing via this agent
(either for maintenance or maybe because a security breach was found...).
Currently, even when an agent is down,
Public bug reported:
Any transition (master, backup, fault) should be logged by the relevant l3-agent:
fault - ERROR level
backup - INFO (not DEBUG)
master - INFO
** Affects: neutron
Importance: Undecided
Status: New
** Tags: l3-ha
--
Public bug reported:
When trying to create a port on a network that has no more addresses, Neutron
returns this error:
No more IP addresses available on network a4e997dc-ba2e--9394-cfd89f670886.
However, when trying to create a VM in a network that has no more addresses,
the VM is created with
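A hedged sketch of the asymmetry (flavor and image illustrative):
# neutron port-create <net-id>     (fails fast with the error above)
# nova boot --flavor m1.tiny --image cirros --nic net-id=<net-id> vm1     (request is accepted)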
Public bug reported:
Since pool is an optional argument for floating-ip-create, nova should try to
auto-discover a single available pool instead of looking for an arbitrary name
in the conf file.
The same way it searches for a single available network on VM creation (and
fails if there are
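A hedged sketch of the suggested behavior:
# nova floating-ip-pool-list     (shows a single pool)
# nova floating-ip-create        (could default to that pool instead of the conf value)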
Public bug reported:
This patch verifies that Floating IP status is updated correctly according to
the blueprint:
https://review.openstack.org/#/c/102700/
https://blueprints.launchpad.net/neutron/+spec/fip-op-status
VMware Minesweeper fails consistently. The Jenkins gates pass OK.
Please check
**
Public bug reported:
When working with jumbo frames you can set the MTU in nova.conf for the instances'
tap devices, and you can set it in neutron/plugins.ini for br-int. This allows
you to work with jumbo frames inside a network.
However, if you want to work with jumbo frames across networks,
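For reference, the nova.conf knob mentioned above, plus a related common knob
of that era to push the MTU into guests over DHCP (paths and values illustrative):
in nova.conf:
    network_device_mtu = 9000
in dhcp_agent.ini:
    dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
with /etc/neutron/dnsmasq-neutron.conf containing:
    dhcp-option-force=26,9000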
Public bug reported:
A VM can only be connected (on creation or later) to a network, not to a specific
subnet on that network.
Same goes for port - you cannot create a port on a specific subnet in a network.
This is inconsistent with router-interface-add which targets a specific
subnet instead of
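The inconsistency in one place (hedged sketch; flavor and image illustrative):
# neutron router-interface-add <router-id> <subnet-id>     (subnet-scoped)
# neutron port-create <net-id>                             (network-scoped only)
# nova boot --flavor m1.tiny --image cirros --nic net-id=<net-id> vm1     (network-scoped only)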
** No longer affects: tempest
--
https://bugs.launchpad.net/bugs/1269407
Title:
Instance Termination delays in updating port list
Public bug reported:
Havana on RHEL6.5
Description
===
Firewall rules cannot be updated in a firewall policy after the firewall
policy creation (at least when the policy was already created with a rule).
It looks like firewall-policy-update looks only at the first char of the
policy
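For reference, the update presumably follows the whitespace-delimited list
syntax documented for firewall-policy-create (hedged; names illustrative):
# neutron firewall-policy-update <policy> --firewall-rules "<rule-1> <rule-2>"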
Public bug reported:
Havana on rhel6.5
Description
===
The FIREWALL argument of neutron firewall-update should be the first argument
in order for the command to succeed, although the help page mentions that the
FIREWALL argument should be the last argument.
Scenario 1
==
#
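A hedged illustration of the ordering difference (--name used purely as an
example field):
# neutron firewall-update <firewall> --name fw-renamed     (succeeds)
# neutron firewall-update --name fw-renamed <firewall>     (fails, despite matching the help page)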
Public bug reported:
Havana on RHEL6.5
Description
===
I moved my only firewall to admin_state_up=False, but it still enforces the
policy rules; only when I delete it does it stop enforcing the policy rules.
I also expected the status to change to INACTIVE right when I changed the
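A hedged sketch of the check (field syntax illustrative):
# neutron firewall-update <firewall> --admin_state_up False
# neutron firewall-show <firewall> -F status -F admin_state_up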
Public bug reported:
Havana on RHEL6.5
Description
===
The firewall-policy-insert-rule command returns JSON output, which is the
output of firewall-policy-show for the same policy, instead of an ASCII
Field/Value table like all other show commands return.
[root@puma10 ~(keystone_admin)]# neutron
Public bug reported:
The command 'nova interface-attach server' returns an error message, but ports
are being created and the VM is attached to all networks in the tenant,
including a new port on the network it was already attached to.
How to reproduce:
1. create a VM
2. create at least one more network in
** Attachment added: Topology print-screen
https://bugs.launchpad.net/nova/+bug/1272896/+attachment/3957537/+files/Screenshot%20from%202014-01-26%2014%3A45%3A07.png
** Description changed:
command 'nova interface-attach server' returns an error message, but ports
are being created and the VM
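For reference, the attach invocation is presumably of this form (hedged):
# nova interface-attach --net-id <net-id> <server>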
Public bug reported:
Neutron allows creation of multiple networks with the same CIDR in the
same tenant.
How to reproduce:
1. create 2 networks in the same tenant
2. for each create a subnet with cidr 10.0.0.0/24
Expected Result:
second subnet should raise an error
Actual Result:
subnet is
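A hedged reproduction sketch (names illustrative):
# neutron net-create net-a
# neutron subnet-create net-a 10.0.0.0/24
# neutron net-create net-b
# neutron subnet-create net-b 10.0.0.0/24     (succeeds today; expected to raise an error)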
Public bug reported:
Switches outside the cloud try to access the old MAC when reusing a floating IP,
and fail.
This is only solved by initiating traffic from the VM, which updates the new MAC
at the external switch.
How to Reproduce:
1. Create and associate new Floating IP with VM and connect
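A hedged sketch of the manual workaround, assuming the usual qrouter namespace
and qg- external interface naming (iputils arping):
# ip netns exec qrouter-<router-id> arping -A -c 3 -I qg-<xxxxxxxx> <floating-ip>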
@adalbas this is a tempest issue because the test needs to change, as
agreed on the mailing list.
** Changed in: tempest
Status: Invalid => Incomplete
--
Public bug reported:
Associating a floating IP with neutron takes too long to show up in the VM's
details ('nova show' or 'compute_client.servers.get()'), and even longer
when there's more than one VM involved.
When launching 2 VMs with floating IPs you can see in the log that it passes
once:
unchecked
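A hedged CLI-level sketch of the wait this forces on a client (names illustrative):
# until nova show vm1 | grep -q <floating-ip>; do sleep 2; done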
** Also affects: tempest
Importance: Undecided
Status: New
** Also affects: neutron
Importance: Undecided
Status: New
--
Couldn't reproduce that either.
Will reopen if it happens again.
** Changed in: horizon
Status: Incomplete => Invalid
--