Public bug reported:
MAC_Binding entries are used in OVN as a mechanism to learn MAC
addresses on logical ports and avoid sending ARP requests to the
network.
There is no aging mechanism for these entries [0] and the table can grow
indefinitely. In environments with, for example, large (e.g. /16) ex
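The missing aging behavior the report asks for can be illustrated with a small pure-Python sketch. This is not OVN's schema or code; the class, its names, and the eviction policy are hypothetical, showing only the idea of expiring learned MAC entries so the table cannot grow without bound:

```python
import time

class MacBindingCache:
    """Toy MAC-learning table with an aging policy (illustration only;
    OVN's MAC_Binding table has no such mechanism per the report)."""

    def __init__(self, max_age_seconds):
        self.max_age = max_age_seconds
        self._entries = {}  # (logical_port, ip) -> (mac, last_seen)

    def learn(self, logical_port, ip, mac, now=None):
        now = time.time() if now is None else now
        self._entries[(logical_port, ip)] = (mac, now)

    def lookup(self, logical_port, ip, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get((logical_port, ip))
        if entry is None:
            return None
        mac, last_seen = entry
        if now - last_seen > self.max_age:
            # Aged out: drop it so stale entries don't accumulate.
            del self._entries[(logical_port, ip)]
            return None
        return mac

    def expire(self, now=None):
        """Periodic sweep removing entries older than max_age."""
        now = time.time() if now is None else now
        stale = [k for k, (_, seen) in self._entries.items()
                 if now - seen > self.max_age]
        for key in stale:
            del self._entries[key]
        return len(stale)
```

A real fix would run an equivalent sweep (or per-lookup check) against the Southbound database rather than an in-memory dict.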
Public bug reported:
Updating the binding profile of a port will fail because of invalid
type. This suggests a bug in the Neutron code that handles the
parameters.
$ neutron --debug port-update subportD --binding:profile type=dict
parent_name=4af7ef43-597b-4747-b3ac-2b045db17374,tag=999
...
D
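The `type=dict` argument tells the CLI to parse comma-separated `key=value` pairs into a dictionary before sending them to the API. A minimal parser of that shape (a sketch, not neutronclient's actual implementation) looks like this:

```python
def parse_dict_arg(value):
    """Parse 'k1=v1,k2=v2' into a dict, as a type=dict CLI argument
    implies. Sketch only; real neutronclient parsing is more involved."""
    result = {}
    for pair in value.split(','):
        if '=' not in pair:
            raise ValueError('expected key=value, got: %s' % pair)
        key, _, val = pair.partition('=')
        result[key.strip()] = val.strip()
    return result

# The binding profile from the command above, parsed:
profile = parse_dict_arg(
    'parent_name=4af7ef43-597b-4747-b3ac-2b045db17374,tag=999')
```

Note that every value stays a string (e.g. `tag` is `'999'`), which is one place where server-side type validation of the profile can trip up.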
Public bug reported:
Right now, when port security is disabled the ML2/OVN plugin will set
the addresses field to ["unknown", "mac IP1 IP2..."]. E.g.:
port 2da76786-51f0-4217-a09b-0c16e6728588 (aka servera-port-2)
addresses: ["52:54:00:02:FA:0A 192.168.0.245", "unknown"]
There are scenari
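The shape of that addresses value can be shown with a small helper (illustrative only, not the plugin's actual code): the MAC/IP entry is always present, and "unknown" is appended when port security is disabled so traffic from any source address is accepted on the port:

```python
def build_addresses(mac, ips, port_security_enabled):
    """Assemble an OVN logical-switch-port 'addresses' value.

    Hypothetical helper: with port security disabled, 'unknown' is
    appended alongside the known MAC/IP pair, as in the report.
    """
    entry = ' '.join([mac] + list(ips))
    addresses = [entry]
    if not port_security_enabled:
        addresses.append('unknown')
    return addresses
```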
Public bug reported:
The OVN Metadata agent reuses the metadata_workers config option from
the ML2/OVS Metadata agent.
However, since the two agents work in totally different ways, it makes
sense to split the option and give each agent its own default.
In OVN, the Metadata Agent will run in
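One way to express the split is a per-backend default. The policy below is purely hypothetical (not Neutron's actual defaults): the idea is that an agent running on every compute node and serving only local instances needs fewer workers than a centralized agent that scales with the host:

```python
import multiprocessing

def default_metadata_workers(backend):
    """Pick a per-backend default worker count (hypothetical policy).

    - 'ovn': the agent runs on every compute node serving only local
      VMs, so a small fixed default is enough.
    - anything else (e.g. ML2/OVS): centralized agent, scale with CPUs.
    """
    if backend == 'ovn':
        return 2
    return multiprocessing.cpu_count() // 2 or 1
```

In real code this would become two separate oslo.config options, one per agent, each with its own default.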
Public bug reported:
Every time the agent liveness check is triggered (via API or periodically every
agent_down_time / 2 seconds), there are a lot of writes into the SB database on
the Chassis table.
These writes trigger a recomputation on the ovn-controller running in all nodes
having a considerabl
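A common mitigation for this pattern is to write a row only when the value actually changes, so a no-op liveness check never touches the database and never wakes up ovn-controller. A schematic illustration (hypothetical row API, not ovsdbapp's):

```python
class Row:
    """Toy database row that skips redundant writes."""

    def __init__(self, **cols):
        self.cols = dict(cols)
        self.writes = 0

    def set(self, col, value):
        # Skip the write when the stored value is already up to date,
        # so every node's ovn-controller isn't forced to recompute.
        if self.cols.get(col) == value:
            return False
        self.cols[col] = value
        self.writes += 1
        return True
```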
Public bug reported:
Right now, there's a chance that deleting a port in Neutron with ML2/OVN
actually deletes the object from Neutron DB while leaving a stale port
in the OVN NB database.
This can happen when deleting a port [0] raises a RowNotFound exception.
While it may look like it'd mean th
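One defensive pattern here is to order the operations so RowNotFound is only ever treated as "already gone": delete from OVN first, tolerate a missing row, and remove the Neutron DB record afterwards. All names below are stand-ins for illustration, not the plugin's actual code:

```python
class RowNotFound(Exception):
    """Stand-in for ovsdbapp's RowNotFound (illustration only)."""

class FakeNB:
    """Minimal fake OVN NB backend for the sketch."""
    def __init__(self, rows):
        self.rows = set(rows)
    def delete_lsp(self, port_id):
        if port_id not in self.rows:
            raise RowNotFound(port_id)
        self.rows.remove(port_id)

class FakeDB:
    """Minimal fake Neutron DB for the sketch."""
    def __init__(self, rows):
        self.rows = set(rows)
    def remove(self, port_id):
        self.rows.discard(port_id)

def delete_port(ovn_nb, neutron_db, port_id):
    """Delete from OVN first; remove the Neutron row only afterwards.

    If OVN reports the row missing, the desired end state (no OVN
    port) already holds, so the Neutron deletion can proceed. Any
    other OVN error propagates and leaves the Neutron row in place,
    so we never end up with a stale OVN port and no Neutron port.
    """
    try:
        ovn_nb.delete_lsp(port_id)
    except RowNotFound:
        pass  # already absent in OVN: safe to continue
    neutron_db.remove(port_id)
```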
Public bug reported:
The routed provider networks feature doesn't work properly with OVN
backend. While API doesn't return any errors, all the ports are
allocated to the same OVN Logical Switch and besides providing no Layer2
isolation whatsoever, it won't work when multiple segments using
differe
Public bug reported:
When OVN DBs are upgraded (and restarted), there may be cases where
we want to adapt things to the new schema. In this situation we
don't want to force a restart of neutron-server (or the metadata agent) but
instead detect the upgrade and run whatever is needed.
This can be achi
Public bug reported:
When a Chassis event happens in the SB database, we attempt to
reschedule any possible unhosted gateways [0] *always* due to a problem
with the existing logic:
    def get_unhosted_gateways(self, port_physnet_dict, chassis_physnets,
                              gw_chassis):
Public bug reported:
Whenever a chassis is updated for whatever reason, we're triggering the
rescheduling mechanism [0]. As the current agent liveness check involves
updating the Chassis table quite frequently, we should avoid
rescheduling gateways for those checks (i.e. when either nb_cfg or
exter
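The filtering the report asks for amounts to diffing the old and new Chassis rows and ignoring updates that touch nothing but liveness bookkeeping. A schematic version (the column set and row representation are illustrative, not OVN's actual event API):

```python
# Columns that only the liveness check touches; updates limited to
# these should not trigger gateway rescheduling (set is illustrative).
LIVENESS_COLUMNS = {'nb_cfg'}

def needs_rescheduling(old_row, new_row):
    """Return True only when a 'real' chassis change happened.

    Rows are modeled as plain dicts; any changed column outside the
    liveness bookkeeping set counts as a real change.
    """
    changed = {col for col in set(old_row) | set(new_row)
               if old_row.get(col) != new_row.get(col)}
    return bool(changed - LIVENESS_COLUMNS)
```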
/networking-ovn/blob/6302298e9c4313f1200c543c89d92629daff9e89/networking_ovn/ovsdb/ovsdb_monitor.py#L74
** Affects: neutron
Importance: Undecided
Assignee: Daniel Alvarez (dalvarezs)
Status: In Progress
** Tags: ovn
** Tags added: ovn
--
You received this bug notification beca
Public bug reported:
If I do a DB query trying to sort by a column which is an
AssociationProxy I get the following exception:
Nov 20 14:41:20 centos.rdocloud neutron-server[11934]: ERROR
neutron.plugins.ml2.managers File
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 1438, in ge
When importing that module, these event listeners are created:
https://github.com/openstack/neutron/blob/master/neutron/db/api.py#L110
and
https://github.com/openstack/neutron/blob/master/neutron/db/api.py#L134
Adding them manually fixed the issue. So far the workaround imports the file to
get
** Affects: neutron
Importance: Undecided
Assignee: Daniel Alvarez (dalvarezs)
Status: In Progress
** Also affects: networking-ovn
Importance: Undecided
Status: New
Public bug reported:
When trying to resolve a hostname on a node with no nameservers
configured and only one entry is present for it in /etc/hosts (IPv4 or
IPv6), eventlet will try to fetch the other entry over the network.
This changes the behavior from what the original getaddrinfo()
implementa
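The expected behavior is: if /etc/hosts has any entry for the name, return exactly those entries and never go to the network for the missing address family. A toy resolver over a hosts-file string showing that rule (not eventlet's or glibc's code):

```python
import ipaddress

def hosts_lookup(hosts_text, name):
    """Return all /etc/hosts addresses for a name.

    If this returns a non-empty list, resolution should stop here;
    fetching the other address family over DNS (what the report says
    eventlet does) diverges from the original getaddrinfo() behavior.
    """
    found = []
    for raw in hosts_text.splitlines():
        fields = raw.split('#', 1)[0].split()
        if len(fields) >= 2 and name in fields[1:]:
            # ip_address() validates and accepts both IPv4 and IPv6.
            found.append(str(ipaddress.ip_address(fields[0])))
    return found
```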
Public bug reported:
When attempting to delete a port on a system with 1K ports, it takes
around 35 seconds to complete:
$ time openstack port delete port60_2
real    0m34.367s
user    0m3.497s
sys     0m0.187s
Log is *full* of the following messages when I issue the CLI:
neutron-server[324]:
Public bug reported:
This commit [0] fixed an issue with the subnet CIDR generation in tempest tests.
With the fix, all subnets get a gateway assigned regardless of whether they
are attached to a router, so it may happen that the gateway port doesn't
exist. Normally, this shouldn't be a big deal
Public bug reported:
Running tempest test
tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps.test_port_security_disable_security_group
fails sometimes when trying to authenticate via public key to the access
point instance [0].
After debugging, I managed to connect to the
e foo
would trigger that DHCP agent spawns the proxy for that network.
** Affects: neutron
Importance: Undecided
Assignee: Daniel Alvarez (dalvarezs)
Status: In Progress
** Changed in: neutron
Assignee: (unassigned) => Daniel Alvarez (dalvarezs)
Public bug reported:
When DHCP, L3, Metadata or OVN-Metadata containers are restarted they can't
set the previous namespaces:
[heat-admin@overcloud-novacompute-0 neutron]$ sudo docker restart 8559f5a7fa45
8559f5a7fa45
[heat-admin@overcloud-novacompute-0 neutron]$ tail -f
/var/log/containers/n
Public bug reported:
In Neutron, we use haproxy to proxy metadata requests from instances to Nova
Metadata service.
By default, haproxy logs to /dev/log but in Ubuntu, those requests get
redirected by rsyslog to
/var/log/haproxy.log which is not being collected.
ubuntu@devstack:~$ cat /etc/rsy
Public bug reported:
When a network is deleted, its segments are also deleted [0]. For each
segment, it will notify about resources.SEGMENT and events.AFTER_DELETE
[1] which will turn out in calling update_network_postcommit [2].
This should be avoided since drivers expect their postcommit method
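The guard implied by the report can be sketched as follows: when the segment's AFTER_DELETE notification fires as part of the whole network going away, skip the network-update callback. The structure and names below are hypothetical:

```python
class FakeDriver:
    """Records postcommit calls, standing in for an ML2 driver."""
    def __init__(self):
        self.calls = []
    def update_network_postcommit(self, network):
        self.calls.append(network['id'])

def handle_segment_after_delete(driver, network, networks_being_deleted):
    """Handle a segment AFTER_DELETE event.

    Drivers expect update_network_postcommit only for live networks,
    so skip it when the owning network is itself being deleted.
    Returns True if the driver was notified.
    """
    if network['id'] in networks_being_deleted:
        return False
    driver.update_network_postcommit(network)
    return True
```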
Public bug reported:
I have deployed a 3 controllers - 3 computes HA environment with ML2/OVS
and observed dataplane downtime when restarting/stopping neutron-l3
container on controllers. This is what I did:
1. Created a network, subnet, router, a VM and attached a FIP to the VM
2. Left a ping ru
ing this because if a port fails to be added
to br-int, ovsdbapp will enqueue the transaction instead of throwing an
exception, but there could still be other exceptions that reproduce
this scenario outside of ovsdbapp, so we need to fix it
in Neutron.
Thanks
Daniel Alvarez
---
[0] h
Public bug reported:
At some point during some rally test, we saw this exception in ovs agent
logs:
2017-11-07 13:35:51.428 597682 DEBUG
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
[req-62f85bb3-db4c-4485-b35c-b7c1cafb3970 3d527bdd3ede4c6a97f91b701393b8e3
5f753e92a5d740fc97
We have seen that the MAC address of the FIP changes to the qf interface of a
different controller.
However, the environment was running openstack-neutron-11.0.0-1.el7.noarch.
After upgrading to openstack-neutron-11.0.1-1.el7.noarch, this bug no longer
occurs.
Marking it as invalid.
** Changed
) bytes of data.
--- 10.0.0.113 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms
[heat-admin@overcloud-novacompute-0 ~]$ arp -n | grep ".113"
10.0.0.113 ether fa:16:3e:20:f9:85 C
vlan10
** Affects: neutron
Im
Public bug reported:
With this recent change [0] we're now importing asyncio module from
pyroute2 and neutron-server fails to start if pyroute2 < 0.4.15:
File "/opt/stack/neutron/neutron/common/eventlet_utils.py", line 25, in
monkey_patch
p_c_e = importutils.import_module('pyroute2.config.async
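A generic way to keep the server starting when an optional submodule is absent in older dependency releases is a guarded import (this is the general pattern, not the actual fix that landed):

```python
import importlib

def optional_import(module_name):
    """Import a module if available; return None instead of crashing
    at startup when an older dependency does not ship it."""
    try:
        return importlib.import_module(module_name)
    except ImportError:
        return None
```

The caller then checks for None and skips the feature (here, the pyroute2 asyncio-related monkey patching) instead of failing to boot.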
cts: neutron
Importance: Undecided
Assignee: Daniel Alvarez (dalvarezs)
Status: New
** Tags: functional-tests
** Changed in: neutron
Assignee: (unassigned) => Daniel Alvarez (dalvarezs)
** Tags added: functional-tests
utron/blob/master/neutron/agent/l3/ha.py#L124
** Affects: neutron
Importance: Undecided
Assignee: Daniel Alvarez (dalvarezs)
Status: New
** Tags: l3-ha
** Tags added: l3-ha
** Changed in: neutron
Assignee: (unassigned) => Daniel Alvarez (dalvarezs)
Public bug reported:
Rally job is failing in the gate due to the following error during
cleanup [0]:
2017-03-03 13:14:56.897549 | 2017-03-03 13:14:56.897 | 2017-03-03 13:14:56.886
6099 ERROR rally.plugins.openstack.cleanup.manager
rutils.retry(resource._max_attempts, resource.delete)
2017-0
Public bug reported:
When an HA router is created, RA is enabled on the gateway interface for the
'master' router [0].
However, it is not disabled in the 'else' clause and therefore:
1. If the router was set to 'master' before, it will still have RA enabled on
its gateway interface
2. If defaul
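The fix implied by the report is to make the state transition symmetric: set the RA-related knob in *both* branches, so a router demoted from 'master' does not keep the master-only setting. A sketch against a plain settings dict (real code manipulates the interface's sysctl/radvd configuration):

```python
def on_state_change(state, iface_settings):
    """Apply the gateway interface's RA setting on every transition.

    The 'else' branch is the one the report says is missing: without
    it, a previously-master router keeps RA enabled after demotion.
    """
    if state == 'master':
        iface_settings['ra_enabled'] = True
    else:
        iface_settings['ra_enabled'] = False  # the missing branch
    return iface_settings['ra_enabled']
```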
** Also affects: oslo.rootwrap
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1654287
Title:
functional test netns_cleanup failing in gate
St
Public bug reported:
The functional test for netns_cleanup has failed in the gate today [0].
Apparently, when trying to get the list of devices (ip_lib.get_devices()
'find /sys/class/net -maxdepth 1 -type l -printf %f') through
rootwrap_daemon, it's getting the output of the previous command ins
Public bug reported:
We've seen this functional test failing in the gate [0] and it's due to
a bug in the helper module that was written for the functional test. [1]
The problem shows up when process_spawn is not able to find a port to
listen on and the process stays running anyways. That means t
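A robust way to avoid the find-a-free-port race described above is to bind to port 0 and let the kernel pick a free port; if binding fails, the exception propagates instead of leaving a "running" process that isn't actually listening. This is a general technique, not the helper module's code:

```python
import socket

def spawn_listener(host='127.0.0.1'):
    """Open a listening socket on an OS-assigned free port.

    Binding to port 0 makes the kernel choose an unused port, so
    there is no probe-then-bind race, and a failed bind raises
    immediately rather than continuing without a listener.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((host, 0))  # port 0: kernel picks a free port
    sock.listen(1)
    return sock, sock.getsockname()[1]
```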
Public bug reported:
When dhcp agent is started, neutron agent-list reports its state as dead
until the initial sync is complete.
This can lead to unwanted alarms in monitoring systems, especially in
large environments where the initial sync may take hours. During this
time, systemctl shows that
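Decoupling liveness from the initial sync means the agent's alive/dead status depends on recent heartbeats, not on sync completion. A schematic model of that idea (names and structure are hypothetical):

```python
import time

class AgentState:
    """Report 'alive' based on recent heartbeats, not sync completion,
    so a long initial sync does not make the agent look dead."""

    def __init__(self, agent_down_time=75):
        self.agent_down_time = agent_down_time
        self.last_heartbeat = None
        self.sync_done = False

    def heartbeat(self, now=None):
        # Called periodically from the start, even while syncing.
        self.last_heartbeat = time.time() if now is None else now

    def is_alive(self, now=None):
        now = time.time() if now is None else now
        return (self.last_heartbeat is not None
                and now - self.last_heartbeat < self.agent_down_time)
```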
Public bug reported:
gate-grenade-dsvm-neutron-multinode-ubuntu-xenial job is failing on
neutron gate
I have checked some other patches and it looks like the job doesn't fail on
them, so apparently the failure is not deterministic.
From the logs:
[1]
2016-12-05 09:07:46.832799 | ERROR: the main setup scri