Since it's nova's logic to update the port, I guess the bug should be filed
against the nova project.
@akkaris what do you think?
Also, regarding the last statement, "the port is actually bound now to the
instance": I can't see this in the "openstack server list" output, am I
missing something?
** Changed in:
So the flows look the same for both 2.15 and 2.16 (no surprise here), just
that in the 2.16 case this weird ofport 7 appears out of nowhere according
to the vswitchd log, and in fact there's no such ofport on the bridge.
Also the flow counters are zero in the 2.16 case:
cookie=0xb722108b439955c3, duration=81.938s,
line 722, in wait_until_true
raise exception
neutron.tests.common.machine_fixtures.FakeMachineException: No ICMP reply
obtained from IP address 10.0.0.38
The test fails even before Local IP creation - on the initial VM
connectivity check.
** Affects: neutron
Importance: High
Assignee: Oleg Bondarev (obondarev)
Seems more related to the neutron-dynamic-routing project.
** Tags added: l3-bgp
** Changed in: neutron
Importance: Undecided => Medium
** Changed in: neutron
Status: New => Opinion
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
Marking "Invalid" for neutron based on Brian's last comment
** Changed in: neutron
Status: New => Invalid
https://bugs.launchpad.net/bugs/1958128
Title:
Neutron
Did you investigate "This has the side effect that if a rabbitmq or
neutron-server is restarted all agents that is currently reporting there
will hang for a long time until report_state times out"? Is it expected
behavior from messaging side?
** Changed in: neutron
Status: In Progress =>
** Also affects: neutron/stein
Importance: Undecided
Status: New
** Also affects: neutron/queens
Importance: Undecided
Status: New
** Also affects: neutron/rocky
Importance: Undecided
Status: New
** Changed in: neutron/queens
Status: New => Triaged
**
** Changed in: neutron
Status: In Progress => Opinion
https://bugs.launchpad.net/bugs/1938788
Title:
Validate if fixed_ip given for port isn't the same as subnet's
From the log it's absolutely impossible to figure out what's wrong.
Anyway, it's definitely not a Neutron issue.
** Changed in: neutron
Status: New => Invalid
Looks like your config file is missing required config values.
This is an installer issue.
Please file a bug against the installer project.
** Changed in: neutron
Status: New => Invalid
Public bug reported:
Recently the neutron-ovs-tempest-dvr-ha-multinode-full (non-voting) job
started failing often. A usual test failure is:
"Details: (ServersTestJSON:setUpClass) Server
74743462-a419-4f89-a92c-0e99bc185581 failed to reach ACTIVE status and
task state "None" within the required time (196
Public bug reported:
Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py",
line 183, in func
return f(self, *args, **kwargs)
File
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/test_l3_agent.py",
line 322,
** Also affects: oslo.privsep
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1930401
Title:
Fullstack l3 agent tests failing due to timeout
** Also affects: neutron/train
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1928299
Title:
centos7 train vm live migration stops network on
Importance: Critical
Assignee: Oleg Bondarev (obondarev)
Status: New
** Tags: gate-failure
https://bugs.launchpad.net/bugs/1923470
Title:
test_security_group_recrea
/bdd661d21898d573ef39448316860aa4c692b834/neutron/api/rpc/agentnotifiers/dhcp_rpc_agent_api.py#L200
** Affects: neutron
Importance: Wishlist
Assignee: Oleg Bondarev (obondarev)
Status: In Progress
** Tags: loadimpact
** Changed in: neutron
Status: New => In Progress
according to OSProfiler stats) when only need to check
net existence.
** Affects: neutron
Importance: Wishlist
Assignee: Oleg Bondarev (obondarev)
Status: New
** Tags: db loadimpact
** Changed in: neutron
Assignee: (unassigned) => Oleg Bondarev (obondarev)
** Chan
Please make sure you have the latest neutron-lib version (2.9.0) installed
on your env; this should fix the test.
** Changed in: neutron
Status: New => Invalid
It's not an actual bug in Neutron, but the topic is worth a discussion.
** Changed in: neutron
Status: New => Opinion
** Tags added: ipv6
** Changed in: neutron
Status: In Progress => Fix Released
** Changed in: neutron
Milestone: None => wallaby-3
** Changed in: neutron
Status: Fix Released => Fix Committed
nt.py", line 704, in
request
self._error_checker(resp, resp_body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 880, in
_error_checker
raise exceptions.ServerFault(resp_body, resp=resp,
tempest.lib.exceptions.ServerFault: Got server fault
Neutron-fwaas development is stopped:
https://review.opendev.org/c/openstack/governance/+/735828/
** Changed in: neutron
Status: New => Won't Fix
.
** Affects: neutron
Importance: Low
Assignee: Oleg Bondarev (obondarev)
Status: New
** Tags: ovs ovs-lib
https://bugs.launchpad.net/bugs/1905538
Title:
Some
** Changed in: neutron
Status: New => Invalid
https://bugs.launchpad.net/bugs/1905392
Title:
xanax online cod overnight
Status in neutron:
Invalid
Bug
of subport IDs.
** Affects: neutron
Importance: Wishlist
Assignee: Oleg Bondarev (obondarev)
Status: Confirmed
** Tags: loadimpact
https
, 'message': 'Unable to complete operation on
subnet a1110e0b-d7c8-4830-b1df-e526b632aab9: One or more ports have an IP
allocation from this subnet.', 'detail': ''}
** Affects: neutron
Importance: Medium
Assignee: Oleg Bondarev (obondarev)
Status: In Progress
** Tags: l3-dvr-
00:23:55.297 40 INFO neutron.agent.dhcp.agent [req-f5107bdd-
d53a-4171-a283-de3d7cf7c708 - - - - -] Synchronizing state
** Affects: neutron
Importance: High
Assignee: Oleg Bondarev (obondarev)
Status: New
** Tags: l3-ipam-dhcp
Public bug reported:
Problem: lack of connectivity (e.g. vxlan tunnels, OVS flows) between
nodes/VMs in L2 segment due to partial RabbitMQ unavailability, RPC
message loss or agent failure on applying fdb entry updates.
Why: currently FDB entries are sent by neutron server to L2 agents one-
way
Ok, so it's not related to sqlalchemy; as I expected, it's an issue with
the neutron DB object, fixed in Rocky:
https://review.opendev.org/#/c/565358/
** Changed in: neutron
Status: In Progress => Invalid
and hence too long a
downtime.
The proposal is to add a new l3 agent config option so that it handles stop
(SIGTERM) by deleting all routers. For HA routers this results in a graceful
keepalived shutdown.
** Affects: neutron
Importance: Medium
Assignee: Oleg Bondarev (obondarev)
Status: New
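The proposed shutdown behavior could be sketched roughly as below; the class and method names here are illustrative stand-ins, not the real neutron L3 agent API:

```python
import signal

# Rough sketch of the proposed option; L3AgentSketch and its methods are
# hypothetical stand-ins for the real neutron L3 agent.
class L3AgentSketch:
    def __init__(self, router_ids, cleanup_on_stop=True):
        self.router_ids = list(router_ids)
        # The proposed new config option: delete routers on graceful stop.
        self.cleanup_on_stop = cleanup_on_stop

    def delete_router(self, router_id):
        # The real agent would tear down the router namespace here; for HA
        # routers this is where keepalived would be shut down gracefully.
        self.router_ids.remove(router_id)

    def handle_sigterm(self, signum=signal.SIGTERM, frame=None):
        if self.cleanup_on_stop:
            for router_id in list(self.router_ids):
                self.delete_router(router_id)


agent = L3AgentSketch(["router-1", "router-2"])
signal.signal(signal.SIGTERM, agent.handle_sigterm)
```

With the option disabled the handler leaves routers in place, preserving today's behavior.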
Public bug reported:
Faced on stable/queens but applicable to master too.
On a quite heavily loaded environment it was noticed that a simple floatingip
list command takes significant time (~1200 fips) while for example port list
is always faster (>7000 ports).
If you enable sqlalchemy debug logs there
.
Need to handle OVSFWTagNotFound in prepare_port_filter() as was done
for update_port_filter in https://review.opendev.org/#/c/630910/
** Affects: neutron
Importance: High
Assignee: Oleg Bondarev (obondarev)
Status: In Progress
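A minimal sketch of the suggested handling; only the exception name comes from neutron's OVS firewall code, the driver class and its helper are simplified stand-ins:

```python
class OVSFWTagNotFound(Exception):
    """Stand-in for neutron's real exception of the same name."""


class FirewallDriverSketch:
    def get_tag_from_other_config(self, port_id):
        # Simplified: pretend the VLAN tag is missing for every port.
        raise OVSFWTagNotFound(port_id)

    def prepare_port_filter(self, port):
        try:
            tag = self.get_tag_from_other_config(port["id"])
        except OVSFWTagNotFound:
            # Mirror the update_port_filter fix: skip the port instead of
            # letting the exception break the agent's processing loop.
            return False
        # Real code would install the firewall flows for `tag` here.
        return True
```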
to VXLAN
tunneled packets being dropped.
The proposal is to set 'egress_pkt_mark = 0' explicitly for tunnel
ports. The option was added in OVS 2.8.0
(https://www.openvswitch.org/releases/NEWS-2.8.0.txt)
** Affects: neutron
Importance: Undecided
Assignee: Oleg Bondarev (obondarev
"trusted" (router and dhcp ports) process_trusted_ports may
take significant time.
The proposal would be to add greenlet.sleep(0) inside the loop in
process_trusted_ports - that fixed the issue on our environment.
** Affects: neutron
Importance: High
Assignee: Oleg Bondarev (
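The idea can be sketched as below; `cooperative_yield` stands in for eventlet's `sleep(0)` (which yields control to other greenthreads) so the sketch stays dependency-free, and the function names are illustrative:

```python
import time


def cooperative_yield():
    # In the real agent this would be eventlet.sleep(0), which lets other
    # greenthreads (e.g. the state-report loop) run between iterations.
    time.sleep(0)


def process_trusted_ports(port_ids):
    processed = []
    for port_id in port_ids:
        # Real code installs allow-all flows for the trusted port here.
        processed.append(port_id)
        cooperative_yield()
    return processed
```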
utils.delta_seconds(agent.started_at,2019-07-03 13:35:54,701.701
17220 ERROR neutron.plugins.ml2.rpc AttributeError: 'NoneType' object
has no attribute 'started_at'
** Affects: neutron
Importance: Undecided
Assignee: Oleg Bondarev (obondarev)
Status: New
Assignee: Oleg Bondarev (obondarev)
Status: In Progress
** Tags: sriov-pci-pt
https://bugs.launchpad.net/bugs/1831622
Title:
SRIOV: agent may not register VFs
n use on network None.:
MacAddressInUseClient: Unable to complete operation for network
42915db3-4e46-4150-af9d-86d0c59d765f. The mac address 0c:c4:7a:de:ae:19 is in
use."
The proposal is to reset the port's MAC address when unbinding.
** Affects: neutron
Importance: Undecided
Assignee: Ole
on/db/api.py", line 163, in
wrapped
return method(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/neutron/db/db_base_plugin_v2.py", line
716, in _create_subnet_postcommit
self.update_port(context, port_id, port_info)
File "/usr/lib/python2.7/dist-packages/neutron
Public bug reported:
release: Queens
quite a lot of advanced services enabled:
"service_plugins =
neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,metering,lbaasv2,neutron.services.qos.qos_plugin.QoSPlugin,trunk,networking_l2gw.services.l2gateway.plugin.L2GatewayPlugin,bgpvpn"
Public bug reported:
Release: Queens, ovsdb_interface=native, of_request_timeout = 30
With the number of OVS ports growing on the node, the following errors start
to occur (starting at ~1200 ports):
ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch
n/agent/linux/utils.py:108
So the centralized floating IP is deleted from the gw device for some reason.
** Affects: neutron
Importance: Undecided
Assignee: Oleg Bondarev (obondarev)
Status: New
** Tags: l3-dvr-backlog
config.agent_boot_time sec)
The proposal is to force a state report right after setting the start_flag.
No side effects are expected.
** Affects: neutron
Importance: High
Assignee: Oleg Bondarev (obondarev)
Status: New
** Tags: l2-pop
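A sketch of the proposed change, with hypothetical names mirroring the agent's state-report pattern (this is not the real agent code):

```python
class AgentStateSketch:
    """Hypothetical stand-in for an agent's state-report plumbing."""

    def __init__(self):
        self.agent_state = {}
        self.reports_sent = []

    def report_state(self):
        # Real code sends agent_state to the server over RPC.
        self.reports_sent.append(dict(self.agent_state))

    def after_restart(self):
        self.agent_state["start_flag"] = True
        # Proposed fix: report immediately instead of waiting for the next
        # periodic tick, so the server detects the restart right away.
        self.report_state()
```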
-25 09:10:39,372.372 24060 WARNING oslo.privsep.daemon [-] privsep log:
OSError: [Errno 13] Permission denied: '/var/log/neutron/neutron.log'
...
24060 ERROR neutron.agent.l3.agent FailedToDropPrivileges: Privsep daemon
failed to start
** Affects: neutron
Importance: High
Assignee: Oleg Bon
Public bug reported:
The whole list of fdb entries is provided to the agent in case a port from a
new network appears, or when the agent is restarted.
Currently agent restart is detected by the agent_boot_time option, 180 sec by
default.
In fact boot time differs depending on port count and on some
"vxlan-ac1ef480",output:"vxlan-ac1ef46d",output:"vxlan-ac1ef477",...
** Affects: neutron
Importance: High
Assignee: Oleg Bondarev (obondarev)
Status: Confirmed
** Tags: l2-pop l3-dvr-backlog
: 365 time: 4.040
Analysis to follow.
** Affects: neutron
Importance: High
Assignee: Oleg Bondarev (obondarev)
Status: Confirmed
** Tags: l3-dvr-backlog
** Description changed:
Under load (10 parallel threads) several requests for router interface
creation failed.
No
Works for me on Mitaka and on master; I followed the steps from John's
comment #6, just adding a host route on a subnet connected to 2 DVR routers
instead of manually adding a static route on the VM. Marking as invalid.
** Changed in: neutron
Status: Incomplete => Invalid
Shouldn't this case be handled by specifying the proper host routes for
such a subnet (connected to several routers)?
** Changed in: neutron
Status: Confirmed => Opinion
.
** Affects: neutron
Importance: Medium
Assignee: Oleg Bondarev (obondarev)
Status: Confirmed
** Tags: sriov-pci-pt
** Description changed:
Scenario:
1) vfio-pci driver is used for VFs
2) 2 ports are created in neutron with binding type 'direct'
3) VMs are spawned
/agent/dhcp/agent.py:124
The DHCP agent keeps retrying; DHCP doesn't work.
** Affects: neutron
Importance: High
Assignee: Oleg Bondarev (obondarev)
Status: New
** Tags: l3-ipam-dhcp
** Project changed: neutron => mos
** Tags added: area-neutron
https://bugs.launchpad.net/bugs/1666549
Title:
Infinite router update in neutron L3 agent (HA)
Status in
Importance: High
Assignee: Oleg Bondarev (obondarev)
Status: Confirmed
** Tags: gate-failure l3-dvr-backlog
https://bugs.launchpad.net/bugs/1660305
Title:
DVR multinode
nviron = self.get_environ()
File "/usr/local/lib/python3.5/dist-packages/eventlet/wsgi.py", line 593, in
get_environ
env['REMOTE_ADDR'] = self.client_address[0]
IndexError: index out of range
** Affects: neutron
Importance: High
Assignee: Oleg Bondarev (obondarev)
l see the misleading log.
The proposal would be to delete the condition and the log as they're
useless.
** Affects: neutron
Importance: Medium
Assignee: Oleg Bondarev (obondarev)
Status: Confirmed
** Tags: l3-dvr-backlog
Ok, so "Connection refused" was a result of a stale ip address on the rfp
device that was not deleted after l3 agent restart with the new code. If I
recreate instances/floating ips from scratch, everything works fine. I'm
going to backport the fix to stable/mitaka. Marking this as invalid
since the problem
/371604/1/check/gate-rally-dsvm-neutron-rally/b1c384d/
** Affects: neutron
Importance: High
Assignee: Oleg Bondarev (obondarev)
Status: In Progress
** Tags: sg-fw
** Changed in: neutron
Status: New => In Progress
Importance: Undecided
Status: New
** Changed in: neutron
Status: Confirmed => Invalid
** Changed in: neutron
Assignee: Oleg Bondarev (obondarev) => (unassigned)
ime: http://paste.openstack.org/show/560761/
** Affects: neutron
Importance: High
Assignee: Oleg Bondarev (obondarev)
Status: Confirmed
** Tags: l3-dvr-backlog loadimpact
Public bug reported:
On a cluster where VM boots and deletes happen pretty intensively, the
following traces can pop up in the neutron server log:
2016-08-05 14:08:29.575 9560 ERROR neutron.plugins.ml2.managers
[req-1b5e9a29-7f7e-48f8-84ee-19ce217cb556 - - - - -] Mechanism driver
'l2population' failed
Public bug reported:
With a large number of instances 'nova list' may return 404; probably this
is because some instances are deleted during command execution. Trace:
2016-08-05 09:30:52.666 878 ERROR nova.api.openstack
[req-707a0e40-67cf-43a9-865d-c44a678b2986 2e2a43e956f344d184e40771d59c991d
-26 14:00:45.236 13360 ERROR neutron.agent.linux.utils [-] Exit code: 1;
Stdin: ; Stdout: ; Stderr: Cannot open network namespace
"qrouter-81ef46de-f7f9-4c5e-b787-c935e0af253a": No such file or directory
this consumes memory, cpu, disk.
** Affects: neutron
Importance: Undecided
floating IP first and then associate it with
another fixed IP.
However the API allows reassignment without disassociation, so it should work
as well.
** Affects: neutron
Importance: Undecided
Assignee: Oleg Bondarev (obondarev)
Status: New
** Tags: l3-dvr-backlog liberty-backport
Importance: High
Assignee: Oleg Bondarev (obondarev)
Status: In Progress
** Tags: unittest
https://bugs.launchpad.net/bugs/1595878
Title:
Memory leak in unit tests
Public bug reported:
This is a regression from commit c198710dc551bc0f79851a7801038b033088a8c2:
if there are dvr serviceable ports on the node with the agent, the server will
now notify the agent with router_updated rather than router_removed; however,
when updating the router, the agent will request router_info
Importance: High
Assignee: Oleg Bondarev (obondarev)
Status: In Progress
** Tags: l3-dvr-backlog
https://bugs.launchpad.net/bugs/1590041
Title:
DVR: regression
Public bug reported:
After a compute node reboot some ports may end up in DOWN state and the
corresponding VMs lose network access.
** Affects: neutron
Importance: High
Assignee: Oleg Bondarev (obondarev)
Status: Confirmed
** Tags: mitaka-backport-potential ovs
.client = n_rpc.get_client(target)
File "neutron/common/rpc.py", line 174, in get_client
assert TRANSPORT is not None
AssertionError
** Affects: neutron
Importance: Low
Assignee: Oleg Bondarev (obondarev)
Status: Confirmed
** Tags: unittest
Public bug reported:
Commit 46ddaf4288a1cac44d8afc0525b4ecb3ae2186a3 made it possible to specify
multiple NICs per network.
However ESwitchManager now stores only one EmbSwitch per physical net (the last
one).
** Affects: neutron
Importance: High
Assignee: Vladimir Eremin (yottatsa)
A new bug was filed for handling multiple NICs per physical net:
https://bugs.launchpad.net/neutron/+bug/1576757
** Changed in: neutron
Status: In Progress => Fix Released
separating static and dynamic data in state reports
handling to reduce the amount of db updates.
** Affects: neutron
Importance: Undecided
Assignee: Oleg Bondarev (obondarev)
Status: New
** Tags: loadimpact
o optimize agents notifications about resource_versions.
** Affects: neutron
Importance: High
Assignee: Oleg Bondarev (obondarev)
Status: In Progress
o the set when router is deleted.
** Affects: neutron
Importance: Undecided
Assignee: Oleg Bondarev (obondarev)
Status: New
** Tags: l3-ipam-dhcp
https://bugs
It appeared the fix was not complete. I'm reopening the bug and will upload
a fix shortly.
** Changed in: neutron
Status: Fix Released => Triaged
** Tags removed: in-stable-liberty
)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File
"/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 206, in __init__
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall super(Connection,
self).__init__(*args, **kwargs2)
2016-02-15 10:44:44.250 15419 ERR
pException('L3 agent failure to setup '
2016-02-10 05:26:54.025 682 TRACE neutron.agent.l3.router_info
FloatingIpSetupException: L3 agent failure to setup floating IPs
Need to log the actual exception with a traceback before re-raising.
** Affects: neutron
Importance: Low
Assignee: Oleg
no different
from legacy scheduling (no extra DVR logic required for auto scheduling) so we
can bring auto scheduling for DVR routers back.
This is better for consistency and improves UX.
** Affects: neutron
Importance: Wishlist
Assignee: Oleg Bondarev (obondarev)
Status: New
0e97feb0f30bc0ef6f4fe041cb41b7aa81042263 which
changed full sync logic a bit: now the l3 agent first requests all ids of
routers scheduled to it. get_router_ids() didn't call router auto scheduling,
which caused the regression.
** Affects: neutron
Importance: High
Assignee: Oleg Bondarev (obondarev)
Status
eleted
- p1 is deleted from DB
- p2 is deleted from DB
- r1 is not deleted from host1 though there are no more ports on it
** Affects: neutron
Importance: Undecided
Assignee: Oleg Bondarev (obondarev)
Status: New
** Tags: l3-dvr-backlog
e silently.
The proposal is to fail in case the agent cannot operate in the mode it was
configured for.
** Affects: neutron
Importance: Undecided
Assignee: Oleg Bondarev (obondarev)
Status: New
** Tags: l3-dvr-backlog
. However another vm on the same host
and the same subnet was ok. It took a while to find out what was wrong
:)
** Affects: neutron
Importance: Medium
Assignee: Oleg Bondarev (obondarev)
Status: New
** Tags: l3-dvr-backlog
I faced the bug while reworking unit tests into functional tests: when
performing steps described in the description I get:
2015-12-15 17:41:23,484ERROR [neutron.callbacks.manager] Error during
notification for neutron.db.l3_dvrscheduler_db._notify_port_delete port,
after_delete
port is deleted/migrated this may lead to the router being deleted
from the dvr_snat agent, which includes snat namespace deletion.
Need to check the agent mode and only remove the router from dvr agents
running on compute nodes in this case.
** Affects: neutron
Importance: Undecided
Assignee: Oleg
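The check being proposed could look roughly like this sketch (all names are illustrative, not neutron's scheduler API):

```python
def remove_router_from_agents(router_id, hosting_agents, remove_callback):
    """Only unschedule the router from agents in 'dvr' mode (compute
    nodes); 'dvr_snat' agents keep it so the SNAT namespace survives."""
    removed_from = []
    for agent in hosting_agents:
        if agent["mode"] == "dvr":
            remove_callback(router_id, agent)
            removed_from.append(agent["host"])
    return removed_from
```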
On second thought, it might not be fair to require nova to wait for
some events from neutron on cleanup. Also, in the case of live migration,
vifs on the source node are deleted after the vm has already migrated and
ports are active on the destination node, so neutron will not send any
network-vif-unplugged events
full/local/lib/python2.7/site-packages/tempest_lib/common/ssh.py",
line 87, in _get_ssh_connection
2015-12-04 01:17:12.572 | password=self.password)
2015-12-04 01:17:12.572 | tempest_lib.exceptions.SSHTimeout: Connection to
the 172.24.5.209 via SSH timed out.
2015-12-04 01:17:12.572 | User: cirro
Changing project to nova due to reasons described in comment #3
** Project changed: neutron => nova
https://bugs.launchpad.net/bugs/1522824
Title:
DVR
might get back online
- currently auto-rescheduling will continue until all routers are
rescheduled from the (already alive!) agent
The proposal is to skip rescheduling if the agent is back online.
** Affects: neutron
Importance: Undecided
Assignee: Oleg Bondarev (obondarev)
Status
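The proposed check might look like this sketch (function and field names are illustrative):

```python
def reschedule_routers_from_dead_agent(agent, router_ids, reschedule_one):
    """Stop rescheduling as soon as the supposedly dead agent reports
    alive again; the remaining routers stay where they are."""
    moved = []
    for router_id in router_ids:
        if agent.get("alive"):
            break  # the agent recovered, no need to keep rescheduling
        reschedule_one(router_id)
        moved.append(router_id)
    return moved
```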
because the metadata proxy process was started after VM boot.
Further analysis showed that the l3 agent on the compute node was not
notified about the new VM port at the time this port was created.
** Affects: neutron
Importance: High
Assignee: Oleg Bondarev (obondarev)
Status: In Progress
Nova patch: https://review.openstack.org/246910/
** Also affects: nova
Importance: Undecided
Status: New
** Changed in: nova
Assignee: (unassigned) => Oleg Bondarev (obondarev)
** Changed in: nova
Status: New => In Progress
notifications since fullsync should bring the
agent up to date anyway.
** Affects: neutron
Importance: Undecided
Assignee: Oleg Bondarev (obondarev)
Status: In Progress
** Tags: l3-ipam-dhcp loadimpact
.
** Affects: neutron
Importance: Undecided
Assignee: Oleg Bondarev (obondarev)
Status: New
** Tags: l3-dvr-backlog
https://bugs.launchpad.net/bugs/1508869
since it didn't
change from the agent's point of view
- the floating ip stays in DOWN state though it's actually active
The fix would be to always update the status of the floating ip if the agent
actually applies it.
** Affects: neutron
Importance: Undecided
Assignee: Oleg Bondarev (obondarev
2015-10-05 09:10:29.831 34082
ERROR neutron.db.db_base_plugin_v2 [req-ea0e5480-e8ec-4014-9015-2199424f54bc ]
An exception occurred while creating the security_group:{u'security_group':
{'tenan
t_id': u'9839de92fb8049598f1c3ea8f32b9cf9', u'name':
u'rally_neutronsecgrp_F44SF1uvTciIQJlu', u'descrip
server once again.
So it's double work on both server and agent sides, which might be quite
expensive at scale.
The proposal is to just use the run_immediately parameter.
** Affects: neutron
Importance: Undecided
Assignee: Oleg Bondarev (obondarev)
Status: New
** Tags: l3-ipam
oading.py", line 614, in
load_scalar_attributes
2015-09-09 01:24:36.251 10128 TRACE neutron.api.v2.resource raise
orm_exc.ObjectDeletedError(state)
2015-09-09 01:24:36.251 10128 TRACE neutron.api.v2.resource ObjectDeletedError:
Instance '' has been deleted, or its row is otherwise
not present.
2015-0
nux/ovsdb_monitor.py:44
Port deletion handling needs to be optimised on agent side.
** Affects: neutron
Importance: Undecided
Assignee: Oleg Bondarev (obondarev)
Status: New
** Tags: ovs
** Attachment added: "ovs-agent.log.gz"
https://bugs.launchpad.net/bugs/1491922/
I think we need to add explicit validation for router being set to admin state
down prior to upgrade.
This should eliminate the confusion.
** Changed in: neutron
Status: Invalid => Triaged
** Changed in: neutron
Assignee: ZongKai LI (lzklibj) => Oleg Bondarev (obondarev)
*** This bug is a duplicate of bug 1443524 ***
https://bugs.launchpad.net/bugs/1443524
** This bug is no longer a duplicate of bug 1443596
Removing an interface from a DVR router removes all SNAT ports of all
connected subnets
** This bug has been marked a duplicate of bug 1443524
-ba1d4f4294df.json HTTP/1.1 500 378
0.938119
** Affects: neutron
Importance: High
Assignee: Oleg Bondarev (obondarev)
Status: Confirmed
https://bugs.launchpad.net
about added
routers thus ensuring no routers will be lost by agents.
** Affects: neutron
Importance: Undecided
Assignee: Oleg Bondarev (obondarev)
Status: New
** Tags: l3-ipam-dhcp