Public bug reported:
The ovs-agent will lose some tunnels to other nodes, for instance to the DHCP node
or the L3 node. These lost tunnels can sometimes cause VMs to fail to boot or
take the dataplane down.
When the subnet or security group port quantity reaches 2000+, this issue can be
seen with high probability.
Public bug reported:
The ovs-agent will lose some flows during restart, for instance flows to DHCP or
L3, and tunnel flows. These lost flows can sometimes cause VMs to fail to boot or
take the dataplane down.
When the subnet or security group port quantity reaches 2000+, this issue can be
seen with high probability.
Public bug reported:
The ovs-agent's stale-flow cleanup action dumps all the bridge flows first. When
the subnet or security group port quantity reaches 2000+, this becomes really
time-consuming.
Sometimes this dump action can also fail, and then the ovs-agent will dump
again. And things
Public bug reported:
When the subnet or security group port quantity reaches 2000+, there are many
stale flows.
A basic exception procedure (a minimal sketch of these steps follows the list):
(1) ovs-agent dumps the flows
(2) ovs-agent deletes some stale flows
(3) ovs-agent installs new flows (with new cookies)
(4) any exception raised in (2) or (3), such as
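A minimal sketch of the dump/delete/reinstall cycle above, driving ovs-ofctl
through subprocess; the bridge name, cookie values, and flow specs are
illustrative assumptions, since the real ovs-agent speaks OpenFlow through its
own library rather than shelling out:

import subprocess


def run(*args):
    """Run one ovs-ofctl command, returning stdout and raising on error."""
    return subprocess.run(
        ["ovs-ofctl", *args], check=True, capture_output=True, text=True
    ).stdout


def replace_flows(bridge, old_cookie, new_cookie, new_flows):
    # (1) dump all flows on the bridge; with tens of thousands of flows
    # this single call is already expensive.
    dump = run("dump-flows", bridge)
    # (2) delete stale flows matched by the old cookie (cookie/mask syntax).
    run("del-flows", bridge, "cookie=%s/-1" % old_cookie)
    # (3) install replacement flows tagged with the new cookie; an exception
    # here leaves the old flows already gone and the new ones only partially
    # installed, which is the failure mode step (4) is about.
    for flow in new_flows:
        run("add-flow", bridge, "cookie=%s,%s" % (new_cookie, flow))
    return dump

Called as, say, replace_flows("br-tun", "0x9a1", "0x9a2",
["table=0,priority=1,actions=normal"]) (hypothetical values), any failure in
step (2) or (3) propagates as CalledProcessError, matching the exception path
described in (4).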
Public bug reported:
When the subnet or security group port quantity reaches 2000+, it is really
too hard to troubleshoot when one VM loses its connection. The flow tables
are almost unreadable (reaching 30k+ flows). We have no way to check
the ovs-agent flow status. And restarting the L2 agent does
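Since each agent generation tags its flows with its own cookie, one way to make
a 30k+ flow table approachable is to count flows per cookie. A minimal sketch
under that assumption; "br-int" and the regex parsing are illustrative, not the
agent's own tooling:

import re
import subprocess
from collections import Counter


def flows_per_cookie(bridge="br-int"):
    """Count flows per cookie so stale agent generations stand out."""
    dump = subprocess.run(
        ["ovs-ofctl", "dump-flows", bridge],
        check=True, capture_output=True, text=True,
    ).stdout
    counts = Counter()
    for line in dump.splitlines():
        match = re.search(r"cookie=(0x[0-9a-f]+)", line)
        if match:
            counts[match.group(1)] += 1
    return counts


for cookie, count in flows_per_cookie().most_common():
    print(cookie, count)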
Public bug reported:
When the subnet or security group port quantity reaches 2000+, the ovs-agent
connection to ovs-vswitchd may get lost, dropped, or time out during restart.
This is a subproblem of bug #1813703; for more information, please see the
summary:
https://bugs.launchpad.net/neutron/+bug/1813703
Public bug reported:
When the port quantity under one subnet or security group reaches 2000+, the
ovs-agent will always get RPC timeouts during restart.
This is a subproblem of bug #1813703; for more information, please see the
summary:
https://bugs.launchpad.net/neutron/+bug/1813703
** Affects:
Public bug reported:
When the subnet or security group port quantity reaches 2000+, the ovs-agent
fails to restart and retries fullsync infinitely.
This is a subproblem of bug #1813703; for more information, please see the
summary:
https://bugs.launchpad.net/neutron/+bug/1813703
** Affects: neutron
This is strange. I will take a closer look.
** Changed in: zun
Status: Won't Fix => New
Public bug reported:
When the subnet or security group port quantity reaches 2000+, the ovs-agent
will take 15-40+ minutes to restart.
During this restart window, the ovs-agent will not process any ports, i.e. a VM
booting on this host will not get its L2 flows established.
This is a subproblem of bug #1813703.
Public bug reported:
[L2] [summary] ovs-agent issues at large scale
Recently we tested the ovs-agent with the openvswitch flow-based
security group, and we met some issues at large scale. This bug will
give us a centralized location to track the following problems.
Problems:
(1) RPC
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]
** Changed in: nova
Status: Incomplete => Expired
** Changed in: zun
Status: New => Won't Fix
https://bugs.launchpad.net/bugs/1813459
Title:
Mounting the cinder storage dashboard shows that
@mustang,
I see the problem now. Since Horizon assumes a cinder volume must be
attached to a nova instance, it tried to locate the nova instance from the
volume attachment and failed. The error you were seeing is due to a
request to locate the nova instance, and nova returned a 404 response.
For now,
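To illustrate the failure mode (this is not Horizon's actual code), a
hypothetical defensive lookup with python-novaclient would catch the 404
instead of letting it bubble up as an error:

from novaclient import exceptions as nova_exceptions


def attachment_server_name(nova, attachment):
    """Return the attached server's name, or None when nova returns 404."""
    try:
        return nova.servers.get(attachment["server_id"]).name
    except nova_exceptions.NotFound:
        # The attachment points at something nova doesn't know about
        # (e.g. a non-nova consumer of the volume), so skip the lookup.
        return None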
Public bug reported:
When running selenium tests locally, web elements usually appear in time for
selenium to pick up on them, for example when checking for the green
confirmation box after switching between projects. However, if the pop-up
takes too long, selenium fails. I have also run into issues where a
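A minimal sketch of the explicit wait that avoids this race; the driver setup
and the ".alert-success" locator are assumptions, not Horizon's actual test
helpers:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Firefox()
try:
    driver.get("http://localhost/dashboard")
    # Poll for up to 10 seconds rather than failing the moment the
    # confirmation box has not rendered yet.
    confirmation = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, ".alert-success"))
    )
    print(confirmation.text)
finally:
    driver.quit()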
This bug was fixed in the package cloud-init -
18.5-17-gd1a2fe73-0ubuntu1
---
cloud-init (18.5-17-gd1a2fe73-0ubuntu1) disco; urgency=medium
* New upstream snapshot.
- opennebula: exclude EPOCHREALTIME as known bash env variable with a
delta (LP: #1813383)
- tox: fix
Public bug reported:
Just like in the recent bug #1813383, bash on disco now exposes the
EPOCHSECONDS environment variable as well as EPOCHREALTIME.
The OpenNebula datasource inspects all bash environment variables in order to
surface variables which have changed across bash invocations. Since
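A minimal sketch of that comparison, with a hypothetical exclusion set; the
real list of known-volatile variables lives in the OpenNebula datasource
itself:

# Variables like these tick on every shell invocation, so a difference
# in them must not count as a real change.
EXCLUDED = {"EPOCHREALTIME", "EPOCHSECONDS", "RANDOM", "LINENO", "SECONDS", "_"}


def changed_vars(before, after):
    """Return env var names whose values differ, minus known-volatile ones."""
    keys = (set(before) | set(after)) - EXCLUDED
    return {key for key in keys if before.get(key) != after.get(key)}


# Only FOO is reported; the EPOCHSECONDS delta is ignored.
print(changed_vars({"FOO": "1", "EPOCHSECONDS": "100"},
                   {"FOO": "2", "EPOCHSECONDS": "101"}))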
Public bug reported:
This bug tracker is for errors with the documentation; use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:
- [x] This doc is inaccurate in this way: Deprecated options
- [ ] This is a doc addition request.
- [ ] I
Linked Horizon to this bug for historical context. The maintainers for
python-openstackclient no longer use launchpad, so we'll have to track
this separately with Storyboard [0].
[0] https://storyboard.openstack.org/#!/project_group/80
** Also affects: horizon
Importance: Undecided
As per https://docs.openstack.org/nova/latest/user/cells.html#status the
current status for Cells V1 is *DEPRECATED*.
As you can see above, only regressions that aren't due to Cells V1 will be
fixed, but this specific bug doesn't meet the criteria. As moving from Cells
v1 to v2 or not using
Reviewed: https://review.openstack.org/627540
Committed:
https://git.openstack.org/cgit/openstack/nova/commit/?id=a19c38a6ab13cdf2509a1f9f9d39c7f0a70ba121
Submitter: Zuul
Branch: master
commit a19c38a6ab13cdf2509a1f9f9d39c7f0a70ba121
Author: arches
Date: Thu Dec 27 17:25:48 2018 +0200
Reviewed: https://review.openstack.org/629960
Committed:
https://git.openstack.org/cgit/openstack/neutron/commit/?id=4d45699f155a6aa5732c27b572d3288963638ee3
Submitter: Zuul
Branch: master
commit 4d45699f155a6aa5732c27b572d3288963638ee3
Author: LIU Yulong
Date: Fri Jan 11 09:34:35 2019
Public bug reported:
The functional test
neutron.tests.functional.services.portforwarding.test_port_forwarding.PortForwardingTestCase.test_concurrent_create_port_forwarding_delete_port
has been failing intermittently for a few days.
Example of failure: http://logs.openstack.org/57/628057/5/gate/neutron-