[Yahoo-eng-team] [Bug 2002027] Re: Instances "Image Name" filter fails if image without a name exist

2023-01-17 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/869400
Committed: 
https://opendev.org/openstack/horizon/commit/827d453d7a2e7301da3cabbc34307a765ef3726d
Submitter: "Zuul (22348)"
Branch: master

commit 827d453d7a2e7301da3cabbc34307a765ef3726d
Author: Stanislav Dmitriev 
Date:   Thu Jan 5 15:28:46 2023 -0500

Fix Image Filter for images with None names

Replacing None with empty string to fix non_api_filters
processing if resources have None attributes

Closes-Bug: #2002027
Change-Id: I57493837cede7747bbb634959ace28b2feffb543


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2002027

Title:
  Instances "Image Name" filter fails if image without a name exist

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Horizon breaks if the "Image Name" filter on the Instances page is
  invoked and an image without a name exists.

  How to reproduce:

  1. Create an image without a name
  image list --project 66ae0f36c4184e37b159ef3e9f39ea56 -c Name | more
  +---------------+
  | Name          |
  +---------------+
  | None          |
  | bionic-server |
  +---------------+

  2. Go to Instances tab
  3. Put a random value in the "Image Name" field and press Filter
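
  The fix described in the commit above (replacing None with an empty
  string before non_api_filters processing) can be illustrated with a
  minimal sketch; the dicts and function name here are illustrative,
  not Horizon's actual code:

```python
# Hedged sketch of the failure and the fix: a None "name" attribute
# breaks substring filtering, and coercing it to "" restores it.
images = [{"name": None}, {"name": "bionic-server"}]

def name_matches(image, query):
    # The bug: `query in None` raises TypeError.
    # The fix from the commit message: treat None as an empty string.
    name = image["name"] or ""
    return query in name

# The None-named image is now safely skipped instead of crashing.
matches = [i for i in images if name_matches(i, "bionic")]
```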

  
  Error trace:

  [Thu Jan 05 19:07:29.668641 2023] [:error] [pid 31] Traceback (most recent call last):
  [Thu Jan 05 19:07:29.668647 2023] [:error] [pid 31]   File "/usr/lib/python2.7/site-packages/django/core/handlers/exception.py", line 41, in inner
  [Thu Jan 05 19:07:29.668651 2023] [:error] [pid 31]     response = get_response(request)
  [Thu Jan 05 19:07:29.668656 2023] [:error] [pid 31]   File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 187, in _get_response
  [Thu Jan 05 19:07:29.668664 2023] [:error] [pid 31]     response = self.process_exception_by_middleware(e, request)
  [Thu Jan 05 19:07:29.668669 2023] [:error] [pid 31]   File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 185, in _get_response
  [Thu Jan 05 19:07:29.668695 2023] [:error] [pid 31]     response = wrapped_callback(request, *callback_args, **callback_kwargs)
  [Thu Jan 05 19:07:29.668699 2023] [:error] [pid 31]   File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 52, in dec
  [Thu Jan 05 19:07:29.668705 2023] [:error] [pid 31]     return view_func(request, *args, **kwargs)
  [Thu Jan 05 19:07:29.668708 2023] [:error] [pid 31]   File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in dec
  [Thu Jan 05 19:07:29.668722 2023] [:error] [pid 31]     return view_func(request, *args, **kwargs)
  [Thu Jan 05 19:07:29.668725 2023] [:error] [pid 31]   File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in dec
  [Thu Jan 05 19:07:29.668735 2023] [:error] [pid 31]     return view_func(request, *args, **kwargs)
  [Thu Jan 05 19:07:29.668749 2023] [:error] [pid 31]   File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 113, in dec
  [Thu Jan 05 19:07:29.668758 2023] [:error] [pid 31]     return view_func(request, *args, **kwargs)
  [Thu Jan 05 19:07:29.668762 2023] [:error] [pid 31]   File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 84, in dec
  [Thu Jan 05 19:07:29.668765 2023] [:error] [pid 31]     return view_func(request, *args, **kwargs)
  [Thu Jan 05 19:07:29.668775 2023] [:error] [pid 31]   File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 68, in view
  [Thu Jan 05 19:07:29.668778 2023] [:error] [pid 31]     return self.dispatch(request, *args, **kwargs)
  [Thu Jan 05 19:07:29.668785 2023] [:error] [pid 31]   File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 88, in dispatch
  [Thu Jan 05 19:07:29.668795 2023] [:error] [pid 31]     return handler(request, *args, **kwargs)
  [Thu Jan 05 19:07:29.668799 2023] [:error] [pid 31]   File "/usr/lib/python2.7/site-packages/horizon/tables/views.py", line 223, in get
  [Thu Jan 05 19:07:29.668802 2023] [:error] [pid 31]     handled = self.construct_tables()
  [Thu Jan 05 19:07:29.668806 2023] [:error] [pid 31]   File "/usr/lib/python2.7/site-packages/horizon/tables/views.py", line 214, in construct_tables
  [Thu Jan 05 19:07:29.668817 2023] [:error] [pid 31]     handled = self.handle_table(table)
  [Thu Jan 05 19:07:29.668826 2023] [:error] [pid 31]   File "/usr/lib/python2.7/site-packages/horizon/tables/views.py", line 123, in handle_table
  [Thu Jan 05 19:07:29.668830 2023] [:error] [pid 31]     data = self._get_data_dict()
  [Thu Jan 05 19:07:29.668836 2023] [:error] [pid 31]   File "/usr/lib/python2.7/

[Yahoo-eng-team] [Bug 1684069] Re: No tests available for availability-zone, network-availability-zone, router-availability-zone under neutron tempest tests.

2023-01-17 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684069

Title:
  No tests available for availability-zone, network-availability-zone,
  router-availability-zone under neutron tempest tests.

Status in neutron:
  Fix Released

Bug description:
  There are AZ (availability zone) tests in the tempest tree, but they
  cover compute and storage only. AZ tests should also be added for
  Neutron; filing this bug to track that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1684069/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691885] Re: Updating Nova::Server with Neutron::Port resource fails

2023-01-17 Thread Brian Haley
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1691885

Title:
  Updating Nova::Server with Neutron::Port resource fails

Status in OpenStack Heat:
  Won't Fix
Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  A Nova::Server resource that was created with an implicit port cannot
  be updated.

  If I first create the following resource:
  # template1.yaml
  resources:
    my_ironic_instance:
      type: OS::Nova::Server
      properties:
        key_name: default
        image: overcloud-full
        flavor: baremetal
        networks:
          - network: ctlplane
            ip_address: "192.168.24.10"

  And then try to run a stack update with a different ip_address:
  # template2.yaml
  resources:
    my_ironic_instance:
      type: OS::Nova::Server
      properties:
        key_name: default
        image: overcloud-full
        flavor: baremetal
        networks:
          - network: ctlplane
            ip_address: "192.168.24.20"

  This fails with the following error:
  RetryError: resources.my_ironic_instance: RetryError[]

  I also tried assigning an external IP to the Nova::Server created in the 
template1.yaml, but that gave me the same error.
  # template3.yaml
  resources:
    instance_port:
      type: OS::Neutron::Port
      properties:
        network: ctlplane
        fixed_ips:
          - subnet: "ctlplane-subnet"
            ip_address: "192.168.24.20"

    my_ironic_instance:
      type: OS::Nova::Server
      properties:
        key_name: default
        image: overcloud-full
        flavor: baremetal
        networks:
          - network: ctlplane
            port: {get_resource: instance_port}

  However, if I first create the Nova::Server resource with an external
  port specified (as in template3.yaml above), then I can update the
  port to a different IP address and Ironic/Neutron does the right thing
  (at least since the recent attach/detach VIF in Ironic code has
  merged). So it appears that you can update a port if the port was
  created externally, but not if the port was created as part of the
  Nova::Server resource.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1691885/+subscriptions




[Yahoo-eng-team] [Bug 1696076] Re: tempest plugin needs an update for identity v3

2023-01-17 Thread Brian Haley
Fixed in commit f57580175402a1a57830da4a4d4305476c3dba18 in neutron-
tempest-plugin repo.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1696076

Title:
  tempest plugin needs an update for identity v3

Status in neutron:
  Fix Released

Bug description:
  Some tests are disabled for identity v3-only deployments
  after [1].

  [1] I8fffc50fd45e43f34ca591416924bcf29626b21f

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1696076/+subscriptions




[Yahoo-eng-team] [Bug 1702637] Re: API requests give different results from examples

2023-01-17 Thread Brian Haley
The api-ref seems to have been fixed; please re-open if you find another
issue.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1702637

Title:
  API requests give different results from examples

Status in neutron:
  Fix Released

Bug description:
  The response examples of the API requests under "Networks" in
  https://developer.openstack.org/api-
  ref/networking/v2/index.html?expanded=show-network-details-
  detail#networks are out of date.

  The results obtained from a List Networks GET request are as follows:
  http://paste.openstack.org/show/614262

  The identified problems are:
  1. The List Networks response examples are missing some attributes. The
same applies to Create Network, Show Network Details, and Update Network.
  2. Requests that contain "qos_policy_id" cause an "Unrecognized
attribute(s)" error to be thrown.

  Thus the examples need to be updated with newer information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1702637/+subscriptions




[Yahoo-eng-team] [Bug 1638684] Re: prevent_arp_spoofing in liberty prevents applications

2023-01-17 Thread Brian Haley
Please re-open if this is still an issue, but provide more information
on how you reproduced this.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1638684

Title:
  prevent_arp_spoofing in liberty prevents applications

Status in neutron:
  Invalid

Bug description:
  In Liberty the default option is prevent_arp_spoofing = True.
  If a VM is configured with a subnet mask other than /24, e.g. a /20,
  and an application on that VM listens on another address within the
  subnet, ARP requests for that address get no reply (this did not
  happen with a /24 mask).

  For example:
  VM A's interface is 10.1.2.3/20 and an application on the same VM is
  listening on 10.1.3.4.
  VM B's interface is 10.1.2.4/20.

  If you ping 10.1.3.4 from VM B, neutron will not forward the ARP reply
  to VM B. So even though A and B are on the same subnet, they cannot
  exchange ARP requests, and therefore cannot connect to the listening
  application.
  Ping works if you ping VM A's interface address 10.1.2.3 from VM B.
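
  The subnet arithmetic behind this report can be checked with Python's
  stdlib ipaddress module (addresses taken from the example above):

```python
# 10.1.3.4 is inside VM A's /20 network, but outside the /24 that the
# working case uses -- which is why only the /20 setup misbehaves.
import ipaddress

vm_a = ipaddress.ip_interface("10.1.2.3/20")   # VM A's interface
listener = ipaddress.ip_address("10.1.3.4")    # extra listening address

in_vm_subnet = listener in vm_a.network        # inside 10.1.0.0/20
in_slash_24 = listener in ipaddress.ip_network("10.1.2.0/24")
```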

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1638684/+subscriptions




[Yahoo-eng-team] [Bug 1639033] Re: Traces seen in l3 agent with prefix delegation enabled subnet

2023-01-17 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1639033

Title:
  Traces seen in l3 agent with prefix delegation enabled subnet

Status in neutron:
  Won't Fix

Bug description:
  2016-11-03 11:22:41.381 DEBUG neutron.agent.linux.utils [-] Running command (rootwrap daemon): ['ip', 'netns', 'exec', 'qrouter-9f1e22fe-eac6-4032-87f2-315f028076c8', 'ip', '-6', 'addr', 'add', '::8/64', 'scope', 'global', 'dev', 'qr-352a870a-7d'] from (pid=28627) execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:100
  2016-11-03 11:22:41.444 ERROR neutron.agent.linux.utils [-] Exit code: 2; Stdin: ; Stdout: ; Stderr: RTNETLINK answers: File exists
  2016-11-03 11:22:41.445 ERROR neutron.agent.l3.router_info [-] Exit code: 2; Stdin: ; Stdout: ; Stderr: RTNETLINK answers: File exists
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info Traceback (most recent call last):
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info   File "/opt/stack/neutron/neutron/common/utils.py", line 216, in call
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info     return func(*args, **kwargs)
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info   File "/opt/stack/neutron/neutron/agent/l3/router_info.py", line 1064, in process
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info     self._process_internal_ports(agent.pd)
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info   File "/opt/stack/neutron/neutron/agent/l3/router_info.py", line 555, in _process_internal_ports
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info     updated_cidrs)
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info   File "/opt/stack/neutron/neutron/agent/l3/router_info.py", line 394, in _internal_network_updated
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info     self.ns_name)
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info   File "/opt/stack/neutron/neutron/agent/linux/interface.py", line 183, in add_ipv6_addr
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info     device.addr.add(str(net), scope)
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info   File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 580, in add
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info     self._as_root([net.version], tuple(args))
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info   File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 367, in _as_root
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info     use_root_namespace=use_root_namespace)
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info   File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 96, in _as_root
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info     log_fail_as_error=self.log_fail_as_error)
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info   File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 105, in _execute
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info     log_fail_as_error=log_fail_as_error)
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info   File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 139, in execute
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info     raise RuntimeError(msg)
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info RuntimeError: Exit code: 2; Stdin: ; Stdout: ; Stderr: RTNETLINK answers: File exists
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info
  2016-11-03 11:22:41.445 TRACE neutron.agent.l3.router_info
  2016-11-03 11:22:41.448 ERROR neutron.agent.l3.agent [-] Failed to process compatible router: 9f1e22fe-eac6-4032-87f2-315f028076c8
  2016-11-03 11:22:41.448 TRACE neutron.agent.l3.agent Traceback (most recent call last):
  2016-11-03 11:22:41.448 TRACE neutron.agent.l3.agent   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 508, in _process_router_update
  2016-11-03 11:22:41.448 TRACE neutron.agent.l3.agent     self._process_router_if_compatible(router)
  2016-11-03 11:22:41.448 TRACE neutron.agent.l3.agent   File "/o

[Yahoo-eng-team] [Bug 1664782] Re: iptables manager wrongly deletes other agents' rules

2023-01-17 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1664782

Title:
  iptables manager wrongly deletes other agents' rules

Status in neutron:
  Won't Fix

Bug description:
  Calico's Felix agent generates iptables chains that intentionally
  include rules that the Neutron iptables_manager code considers to be
  duplicates - as revealed by logs like these from the DHCP agent:

  2017-02-02 18:50:29.482 3376 WARNING neutron.agent.linux.iptables_manager [-] Duplicate iptables rule detected. This may indicate a bug in the iptables rule generation code. Line: -A felix-to-ebf1bc0b-ba -m mark --mark 0x100/0x100 -m comment --comment "Profile accepted packet" -j RETURN
  2017-02-02 18:50:29.483 3376 WARNING neutron.agent.linux.iptables_manager [-] Duplicate iptables rule detected. This may indicate a bug in the iptables rule generation code. Line: -A felix-to-3d959cf9-36 -m mark --mark 0x100/0x100 -m comment --comment "Profile accepted packet" -j RETURN
  2017-02-02 18:50:29.483 3376 WARNING neutron.agent.linux.iptables_manager [-] Duplicate iptables rule detected. This may indicate a bug in the iptables rule generation code. Line: -A felix-from-ebf1bc0b-ba -m mark --mark 0x100/0x100 -m comment --comment "Profile accepted packet" -j RETURN
  2017-02-02 18:50:29.483 3376 WARNING neutron.agent.linux.iptables_manager [-] Duplicate iptables rule detected. This may indicate a bug in the iptables rule generation code. Line: -A felix-from-3d959cf9-36 -m mark --mark 0x100/0x100 -m comment --comment "Profile accepted packet" -j RETURN

  IIUC, iptables_manager then reprograms iptables with these 'duplicates'
  removed, and thereby breaks Calico's iptables.
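
  The breakage can be sketched as follows; this is an illustrative
  stand-in for the de-duplication behaviour described above, not
  neutron's actual iptables_manager code:

```python
# De-duplicating rules by exact text drops intentionally repeated
# entries, such as a rule an external agent programmed twice on purpose.
rules = [
    '-A felix-to-ebf1bc0b-ba -m mark --mark 0x100/0x100 -j RETURN',
    '-A felix-to-ebf1bc0b-ba -m mark --mark 0x100/0x100 -j RETURN',
    '-A felix-to-3d959cf9-36 -m mark --mark 0x100/0x100 -j RETURN',
]

# Keep only the first occurrence of each rule, preserving order --
# the second felix-to-ebf1bc0b-ba rule is silently lost.
deduped = list(dict.fromkeys(rules))
```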

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1664782/+subscriptions




[Yahoo-eng-team] [Bug 1674632] Re: Open vSwitch: Self-service networks in Networking Guide

2023-01-17 Thread Brian Haley
The install guides in the neutron tree have been updated a number of
times, and I believe the current ones have much better instructions in
this area, so closing this bug.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1674632

Title:
  Open vSwitch: Self-service networks in Networking Guide

Status in neutron:
  Invalid
Status in openstack-manuals:
  Won't Fix

Bug description:
  Hi,

  The steps for configuring Open vSwitch on Ubuntu for the Newton
  release are missing some vital information. When I followed the steps
  as written, my instances did not get floating IPs. I think the docs
  are missing important information, such as how to set up the OVS
  bridges for VLAN networks. Since the Kilo release, the documentation
  on the Open vSwitch mechanism has had many gaps. I think it is time
  for the OpenStack community to take the Open vSwitch implementation as
  seriously as Linux bridge.

  Open vSwitch is very important these days in areas like NFV (Network
  Functions Virtualization) in Telco clouds. Please make sure the use
  cases in the documentation are validated before release.

  
  This bug tracker is for errors with the documentation; use the following as a 
template and remove or add fields as you see fit. Convert [ ] into [x] to check 
boxes:

  - [ ] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 0.9 on 2017-03-14 05:44
  SHA: bb783d2e176f963eb28eac45c8c5c7bb794128dc
  Source: 
http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/networking-guide/source/deploy-ovs-selfservice.rst
  URL: 
https://docs.openstack.org/newton/networking-guide/deploy-ovs-selfservice.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1674632/+subscriptions




[Yahoo-eng-team] [Bug 1682796] Re: release note entries included for wrong release

2023-01-17 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1682796

Title:
  release note entries included for wrong release

Status in neutron:
  Won't Fix
Status in reno:
  Fix Released

Bug description:
  ocata release note [1] has an entry for mitaka. [2]
  it seems the file has been updated in ocata cycle. [3]

  [1] https://docs.openstack.org/releasenotes/neutron/ocata.html

  [2] hyperv-neutron-agent-decomposition-ae6a052aeb48c6ac.yaml
  ("Hyper-V Neutron Agent has been fully decomposed from Neutron.")

  [3] Iec8494b40fed2d427c1edf4609f8b3dd8c770dce

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1682796/+subscriptions




[Yahoo-eng-team] [Bug 1635468] Re: iptables: fail to start ovs/linuxbridge agents on missing sysctl knobs

2023-01-17 Thread Brian Haley
commit 183c82b59a69a308aff13829a153460207aba8b6, which came after this
change, mentions checking for the iptables sysctl settings. For that
reason, and since this is quite old, marking this closed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1635468

Title:
  iptables: fail to start ovs/linuxbridge agents on missing sysctl
  knobs

Status in neutron:
  Won't Fix
Status in openstack-manuals:
  Won't Fix

Bug description:
  https://review.openstack.org/371523
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit e83a44b96a8e3cd81b7cc684ac90486b283a3507
  Author: Ihar Hrachyshka 
  Date:   Thu Sep 15 21:48:10 2016 +

  iptables: fail to start ovs/linuxbridge agents on missing sysctl knobs
  
  For new kernels (3.18+), bridge module is split into two pieces: bridge
  and br_netfilter. The latter provides firewall support for bridged
  traffic, as well as the following sysctl knobs:
  
  * net.bridge.bridge-nf-call-arptables
  * net.bridge.bridge-nf-call-ip6tables
  * net.bridge.bridge-nf-call-iptables
  
  Before kernel 3.18, any brctl command was loading the 'bridge' module
  with the knobs, so at the moment where we reached iptables setup, they
  were always available.
  
  With new 3.18+ kernels, brctl still loads 'bridge' module, but not
br_netfilter. So bridge existence no longer guarantees us knobs'
  presence. If we reach _enable_netfilter_for_bridges before the new
  module is loaded, then the code will fail, triggering agent resync. It
  will also fail to enable bridge firewalling on systems where it's
  disabled by default (examples of those systems are most if not all Red
  Hat/Fedora based systems), making security groups completely
  ineffective.
  
  Systems that don't override default settings for those knobs would work
  fine except for this exception in the log file and agent resync. This is
  because the first attempt to add a iptables rule using 'physdev' module
  (-m physdev) will trigger the kernel module loading. In theory, we could
  silently swallow missing knobs, and still operate correctly. But on
  second thought, it's quite fragile to rely on that implicit module
  loading. In the case where we can't detect whether firewall is enabled,
  it's better to fail than hope for the best.
  
  An alternative to the proposed path could be trying
  to fix broken deployment, meaning we would need to load the missing
  kernel module on agent startup. It's not even clear whether we can
  assume the operation would be available to us. Even with that, adding a
  rootwrap filter to allow loading code in the kernel sounds quite scary.
  If we would follow the path, we would also hit an issue of
  distinguishing between cases of built-in kernel module vs. modular one.
  A complexity that is probably beyond what Neutron should fix.
  
  The patch introduces a sanity check that would fail on missing
  configuration knobs.
  
  DocImpact: document the new deployment requirement in operations guide
  UpgradeImpact: deployers relying on agents fixing wrong sysctl defaults
 will need to make sure bridge firewalling is enabled.
 Also, the kernel module providing sysctl knobs must be
 loaded before starting the agent, otherwise it will fail
 to start.
  
  Depends-On: Id6bfd9595f0772a63d1096ef83ebbb6cd630fafd
  Change-Id: I9137ea017624ac92a05f73863b77f9ee4681bbe7
  Related-Bug: #1622914
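
  The sanity check the commit describes (failing fast when the
  br_netfilter sysctl knobs are absent) could be sketched as below; the
  function and the sysctl-name-to-/proc-path mapping are illustrative,
  not neutron's actual implementation:

```python
# Check that the br_netfilter sysctl knobs exist before starting an
# agent that depends on bridge firewalling.
import os

KNOBS = [
    "net.bridge.bridge-nf-call-arptables",
    "net.bridge.bridge-nf-call-ip6tables",
    "net.bridge.bridge-nf-call-iptables",
]

def knob_path(knob):
    # sysctl dotted name -> /proc file, e.g.
    # net.bridge.bridge-nf-call-iptables
    #   -> /proc/sys/net/bridge/bridge-nf-call-iptables
    return "/proc/sys/" + knob.replace(".", "/")

def missing_knobs(exists=os.path.exists):
    """Return the knobs whose /proc files are absent
    (i.e. br_netfilter is not loaded)."""
    return [k for k in KNOBS if not exists(knob_path(k))]
```

  A startup check would then refuse to proceed if `missing_knobs()` is
  non-empty, instead of relying on implicit module loading later.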

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1635468/+subscriptions




[Yahoo-eng-team] [Bug 1638015] Re: Deletion of DVR router with external gateway port is not using ML2 hooks

2023-01-17 Thread Brian Haley
Closing as it's quite old and there isn't much info here. Please re-open
if this is still an issue.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1638015

Title:
  Deletion of DVR router with external gateway port is not using ML2
  hooks

Status in neutron:
  Won't Fix

Bug description:
  Hello! I'm developing an ML2 mechanism driver which needs to handle
  delete_port_(pre|post)commit. When I delete a DVR router, my mechanism
  driver does not get the delete_port_... calls, because
  delete_floatingip_agent_gateway_port issues
  self._core_plugin.ipam.delete_port instead of
  self._core_plugin.delete_port. This is crucial in the case where the
  external network is handled by the OVS agent instead of the L3 agent.
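
  ML2 mechanism drivers implement delete_port_precommit and
  delete_port_postcommit hooks; the minimal stand-in below (not
  neutron's real MechanismDriver base class, and with an illustrative
  context object) records which hooks fire — which is exactly what the
  reporter observes never happening for implicit DVR gateway ports:

```python
# Illustrative recording driver: the hook names match the ML2 driver
# API, but the class itself is a sketch for demonstration only.
class LoggingMechDriver:
    def __init__(self):
        self.calls = []

    def delete_port_precommit(self, context):
        # In ML2, runs inside the DB transaction before the port row
        # is removed.
        self.calls.append(("delete_port_precommit", context.current["id"]))

    def delete_port_postcommit(self, context):
        # In ML2, runs after the transaction commits.
        self.calls.append(("delete_port_postcommit", context.current["id"]))
```

  The bug is that the ipam.delete_port path bypasses the plugin layer
  that would invoke these hooks, so `calls` stays empty for such ports.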

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1638015/+subscriptions




[Yahoo-eng-team] [Bug 1632537] Re: l3 agent prints ERROR logs in the l3 log file continuously, finally filling file space, leading to a crash of the l3-agent service

2023-01-17 Thread Brian Haley
Without more info I'm not sure what we can do to reduce these messages,
or whether there was just an underlying bug in the l3-agent that
triggered this. Since it has been a number of years and we haven't seen
this issue reported by others, I am going to close it, but please
re-open if you have more data on how to reproduce it.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1632537

Title:
  l3 agent prints ERROR logs in the l3 log file continuously, finally
  filling file space, leading to a crash of the l3-agent service

Status in neutron:
  Invalid
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent [req-5d499217-05b6-4a56-a3b7-5681adb53d6c - d2b95803757641b6bc55f6309c12c6e9 - - -] Failed to process compatible router 'da82aeb4-07a4-45ca-ae7a-570aec69df29'
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent Traceback (most recent call last):
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 501, in _process_router_update
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent     self._process_router_if_compatible(router)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 438, in _process_router_if_compatible
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent     self._process_added_router(router)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 446, in _process_added_router
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent     ri.process(self)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/dvr_local_router.py", line 488, in process
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent     super(DvrLocalRouter, self).process(agent)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/dvr_router_base.py", line 30, in process
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent     super(DvrRouterBase, self).process(agent)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 386, in process
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent     super(HaRouter, self).process(agent)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 385, in call
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent     self.logger(e)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent     self.force_reraise()
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent     six.reraise(self.type_, self.value, self.tb)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 382, in call
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent     return func(*args, **kwargs)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 964, in process
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent     self.process_address_scope()
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/dvr_edge_router.py", line 239, in process_address_scope
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent     self.snat_iptables_manager, ports_scopemark)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent   File "/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent     self.gen.next()
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py", line 461, in defer_apply
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent     raise
n_exc.IpTablesApplyException(msg)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
IpTablesApplyException: Failure applying iptables rules

  This ERROR message will fill the l3-agent log file continuously until the
  underlying problem is solved, eventually exhausting the file space.
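  A common mitigation for this failure mode, independent of fixing the
  iptables error itself, is to rate-limit identical error messages so a
  persistent fault cannot fill the disk. A minimal sketch (hypothetical
  class and names, not neutron code):

```python
import time

class RateLimitedLogger:
    """Suppress repeats of the same message within a cooldown window.

    Hypothetical sketch of log rate limiting; not the neutron implementation.
    """

    def __init__(self, cooldown=60.0, clock=time.monotonic):
        self._cooldown = cooldown
        self._clock = clock
        self._last_emitted = {}  # message -> timestamp of last emission

    def should_log(self, message):
        now = self._clock()
        last = self._last_emitted.get(message)
        if last is not None and now - last < self._cooldown:
            return False  # same error within the window: drop the repeat
        self._last_emitted[message] = now
        return True
```

  With a 60-second cooldown, the traceback above would be written once per
  minute instead of continuously.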

To manage notifications about this bug go to:
https://bug

[Yahoo-eng-team] [Bug 2003121] [NEW] machine-id is not reset when instance-id changes

2023-01-17 Thread Robie Basak
Public bug reported:

As discussed in #ubuntu-server just now, it's expected that cloud-init
will ensure that machine-id is not carried over when a VM is cloned and
this is detectable by an instance-id change.

This would align behaviour with ssh host key regeneration behaviour.

Actual behaviour: currently if a VM is cloned and the instance-id
changes, /etc/machine-id remains the same.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/2003121

Title:
  machine-id is not reset when instance-id changes

Status in cloud-init:
  New

Bug description:
  As discussed in #ubuntu-server just now, it's expected that cloud-init
  will ensure that machine-id is not carried over when a VM is cloned
  and this is detectable by an instance-id change.

  This would align behaviour with ssh host key regeneration behaviour.

  Actual behaviour: currently if a VM is cloned and the instance-id
  changes, /etc/machine-id remains the same.
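  For reference, the expected behaviour amounts to emptying /etc/machine-id
  when a new instance-id is detected, since systemd regenerates an empty
  machine-id file at boot. A sketch against a scratch file (the real path
  and the regeneration mechanism on a systemd host are assumptions):

```shell
# Demonstration against a scratch file; on a real system the file would be
# /etc/machine-id, and systemd (e.g. systemd-machine-id-setup) would
# generate a fresh ID at next boot when the file is empty.
MACHINE_ID_FILE="$(mktemp)"
echo "0123456789abcdef0123456789abcdef" > "$MACHINE_ID_FILE"

# Truncating the file marks the machine ID as unset, so a new one is
# generated on next boot instead of the cloned value persisting.
: > "$MACHINE_ID_FILE"

[ -s "$MACHINE_ID_FILE" ] || echo "machine-id cleared"
```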

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/2003121/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2002993] Re: [ovn] default gateway populated with empty dst-ip when gateway explicitly disabled on subnet

2023-01-17 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/870633
Committed: 
https://opendev.org/openstack/neutron/commit/463c3df4cff0a5764beef3f83a7f3bf26d688470
Submitter: "Zuul (22348)"
Branch: master

commit 463c3df4cff0a5764beef3f83a7f3bf26d688470
Author: Frode Nordahl 
Date:   Mon Jan 16 17:20:13 2023 +0100

[ovn] Do not create empty default route when empty gateway_ip

Before this patch, if an OVN router was associated with an
external network whose subnet had an empty `gateway_ip`, a
default route with an empty dst-ip is created in the OVN
database.

As documented in the Neutron API [0], it is allowed to create
a subnet without a gateway_ip.

0: 
https://docs.openstack.org/api-ref/network/v2/index.html?expanded=create-subnet-detail#create-subnet

Closes-Bug: #2002993
Signed-off-by: Frode Nordahl 
Change-Id: Ica76e0821d753af883444d2a449283e9e69ba03f


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2002993

Title:
  [ovn] default gateway populated with empty dst-ip when gateway
  explicitly disabled on subnet

Status in neutron:
  Fix Released

Bug description:
  As documented in the Neutron API [0], it is allowed to create a subnet
  without a gateway_ip.

  This may be useful if you want to control the routes using the
  extraroutes API [1].

  At present the Neutron OVN driver will create a default route with an
  empty dst-ip in the OVN database in case of the gateway_ip not being
  set.

  I believe this is a bug and that the Neutron OVN driver should omit
  creating the route when no gateway_ip is set.
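  The proposed guard can be sketched as follows (hypothetical helper, not
  the actual neutron OVN driver code):

```python
def default_routes_for_subnet(subnet):
    """Emit a default route only when the subnet actually has a gateway_ip.

    Hypothetical helper illustrating the guard; `subnet` is a plain dict
    shaped like the Neutron API response.
    """
    gateway_ip = subnet.get("gateway_ip")
    if not gateway_ip:
        # Gateway explicitly disabled: no default route at all, instead of
        # a route with an empty dst-ip in the OVN database.
        return []
    return [{"destination": "0.0.0.0/0", "nexthop": gateway_ip}]
```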

  0: 
https://docs.openstack.org/api-ref/network/v2/index.html?expanded=create-subnet-detail#create-subnet
  1: 
https://docs.openstack.org/api-ref/network/v2/index.html?expanded=create-subnet-detail,add-extra-routes-to-router-detail,remove-interface-from-router-detail#add-extra-routes-to-router

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2002993/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2003095] [NEW] [RFE] Provide Port Binding Information for Manila Share Server Live Migration

2023-01-17 Thread Maurice Escher
Public bug reported:

Hi,

similar to Nova, where this feature is described by
https://review.opendev.org/c/openstack/neutron-specs/+/309416/ and
implemented around https://bugs.launchpad.net/neutron/+bug/1580880,
there is a Share Server Live Migration feature in Manila (see
https://review.opendev.org/c/openstack/manila-specs/+/735970), that
would benefit from this port binding extension.

If the migration target is in a different network segment, Manila needs
to be able to do the port binding before the actual migration starts.
This is relevant for the neutron bind driver in manila, that was added
to support hierarchical port binding in
https://review.opendev.org/c/openstack/manila-specs/+/315985
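The pre-binding step described above maps onto Neutron's multiple port
bindings extension (POST /v2.0/ports/{port_id}/bindings). A minimal sketch
of the request shape (the helper name and values are illustrative, not
Manila code):

```python
def port_binding_request(neutron_endpoint, port_id, target_host):
    """Build the request for pre-creating an (initially inactive) binding
    on the migration target host, per Neutron's multiple port bindings
    extension. Hypothetical helper for illustration only."""
    return {
        "method": "POST",
        "url": f"{neutron_endpoint}/v2.0/ports/{port_id}/bindings",
        "json": {"binding": {"host": target_host}},
    }
```

The binding would then be activated once the share server has actually
moved, mirroring the Nova live-migration flow referenced above.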

Best regards,
Maurice

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2003095

Title:
  [RFE] Provide Port Binding Information for Manila Share Server Live
  Migration

Status in neutron:
  New

Bug description:
  Hi,

  similar to Nova, where this feature is described by
  https://review.opendev.org/c/openstack/neutron-specs/+/309416/ and
  implemented around https://bugs.launchpad.net/neutron/+bug/1580880,
  there is a Share Server Live Migration feature in Manila (see
  https://review.opendev.org/c/openstack/manila-specs/+/735970), that
  would benefit from this port binding extension.

  If the migration target is in a different network segment, Manila
  needs to be able to do the port binding before the actual migration
  starts. This is relevant for the neutron bind driver in manila, that
  was added to support hierarchical port binding in
  https://review.opendev.org/c/openstack/manila-specs/+/315985

  Best regards,
  Maurice

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2003095/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2002068] Re: Can not handle authentication request for 2 credentials

2023-01-17 Thread Sylvain Bauza
Looks like the nova-compute service is unable to talk to the libvirt API.
Definitely a config issue; closing this bug.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2002068

Title:
  Can not handle authentication request for 2 credentials

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  My Python environment: Python 3.8.10, running in a venv.

  When I run "nova-compute" I get the error message below. Am I
  forgetting something?

  nova.exception.InternalError: Can not handle authentication request
  for 2 credentials
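  The closing comment attributes this to nova-compute <-> libvirt
  connectivity rather than a nova bug. As a hedged hint (connection_uri is
  a real option in nova's [libvirt] section, but whether this URI suits the
  deployment is an assumption), pointing nova-compute at the local system
  socket avoids libvirt prompting for credentials:

```ini
# nova.conf sketch; qemu:///system assumes nova-compute runs on the
# hypervisor host with socket access to libvirtd.
[libvirt]
connection_uri = qemu:///system
```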

  
  >>> FULL

  2023-01-06 11:26:33.694 388587 WARNING oslo_messaging.rpc.client [None 
req-6e6d4628-a393-4d04-8958-dbfcfda36c25 - - - - - -] Using RPCClient manually 
to instantiate client. Please use get_rpc_client to obtain an RPC client 
instance.
  2023-01-06 11:26:33.695 388587 WARNING oslo_messaging.rpc.client [None 
req-6e6d4628-a393-4d04-8958-dbfcfda36c25 - - - - - -] Using RPCClient manually 
to instantiate client. Please use get_rpc_client to obtain an RPC client 
instance.
  2023-01-06 11:26:33.695 388587 WARNING oslo_messaging.rpc.client [None 
req-6e6d4628-a393-4d04-8958-dbfcfda36c25 - - - - - -] Using RPCClient manually 
to instantiate client. Please use get_rpc_client to obtain an RPC client 
instance.
  2023-01-06 11:26:33.696 388587 INFO nova.virt.driver [None 
req-6e6d4628-a393-4d04-8958-dbfcfda36c25 - - - - - -] Loading compute driver 
'libvirt.LibvirtDriver'
  2023-01-06 11:26:33.778 388587 INFO nova.compute.provider_config [None 
req-6e6d4628-a393-4d04-8958-dbfcfda36c25 - - - - - -] No provider configs found 
in /etc/nova/provider_config/. If files are present, ensure the Nova process 
has access.
  2023-01-06 11:26:33.799 388587 WARNING oslo_config.cfg [None 
req-6e6d4628-a393-4d04-8958-dbfcfda36c25 - - - - - -] Deprecated: Option 
"api_servers" from group "glance" is deprecated for removal (
  Support for image service configuration via standard keystoneauth1 Adapter
  options was added in the 17.0.0 Queens release. The api_servers option was
  retained temporarily to allow consumers time to cut over to a real load
  balancing solution.
  ).  Its value may be silently ignored in the future.
  2023-01-06 11:26:33.815 388587 INFO nova.service [-] Starting compute node 
(version 26.1.0)
  2023-01-06 11:26:33.835 388587 CRITICAL nova [-] Unhandled error: 
nova.exception.InternalError: Can not handle authentication request for 2 
credentials
  2023-01-06 11:26:33.835 388587 ERROR nova Traceback (most recent call last):
  2023-01-06 11:26:33.835 388587 ERROR nova   File 
"/opt/nova/venv/lib/python3.8/site-packages/nova/virt/libvirt/host.py", line 
338, in _connect_auth_cb
  2023-01-06 11:26:33.835 388587 ERROR nova raise exception.InternalError(
  2023-01-06 11:26:33.835 388587 ERROR nova nova.exception.InternalError: Can 
not handle authentication request for 2 credentials
  2023-01-06 11:26:33.835 388587 ERROR nova
  2023-01-06 11:26:33.840 388587 ERROR nova.virt.libvirt.host [-] Connection to 
libvirt failed: authentication failed: Failed to collect auth credentials: 
libvirt.libvirtError: authentication failed: Failed to collect auth credentials
  2023-01-06 11:26:33.840 388587 ERROR nova.virt.libvirt.host Traceback (most 
recent call last):
  2023-01-06 11:26:33.840 388587 ERROR nova.virt.libvirt.host   File 
"/opt/nova/venv/lib/python3.8/site-packages/nova/virt/libvirt/host.py", line 
588, in get_connection
  2023-01-06 11:26:33.840 388587 ERROR nova.virt.libvirt.host conn = 
self._get_connection()
  2023-01-06 11:26:33.840 388587 ERROR nova.virt.libvirt.host   File 
"/opt/nova/venv/lib/python3.8/site-packages/nova/virt/libvirt/host.py", line 
568, in _get_connection
  2023-01-06 11:26:33.840 388587 ERROR nova.virt.libvirt.host 
self._queue_conn_event_handler(
  2023-01-06 11:26:33.840 388587 ERROR nova.virt.libvirt.host   File 
"/opt/nova/venv/lib/python3.8/site-packages/oslo_utils/excutils.py", line 227, 
in __exit__
  2023-01-06 11:26:33.840 388587 ERROR nova.virt.libvirt.host 
self.force_reraise()
  2023-01-06 11:26:33.840 388587 ERROR nova.virt.libvirt.host   File 
"/opt/nova/venv/lib/python3.8/site-packages/oslo_utils/excutils.py", line 200, 
in force_reraise
  2023-01-06 11:26:33.840 388587 ERROR nova.virt.libvirt.host raise 
self.value
  2023-01-06 11:26:33.840 388587 ERROR nova.virt.libvirt.host   File 
"/opt/nova/venv/lib/python3.8/site-packages/nova/virt/libvirt/host.py", line 
560, in _get_connection
  2023-01-06 11:26:33.840 388587 ERROR nova.virt.libvirt.host 
self._wrapped_conn = self._get_new_connection()
  2023-01-06 11:26:33.840 388587 ERROR nova.virt.libvirt.host   File 
"/opt/nova/venv/lib/python3.8/site-packages/nova/virt/libvirt/host.py", line 
504, in _get_new_connection
  2023-01-06 11:26:33.840 388587 ERROR nova.virt.libvi

[Yahoo-eng-team] [Bug 1608176] Re: SRIOV-port creation : its possible to create more direct ports than available vif.

2023-01-17 Thread Rodolfo Alonso
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608176

Title:
  SRIOV-port creation : its possible to create more direct ports than
  available vif.

Status in neutron:
  Invalid

Bug description:
  When creating SR-IOV ports we can create more direct ports than VFs are
  available; for example, we have 4 VFs but created more than 4 direct ports.
  [root@compute1 ~(keystone_admin)]# neutron port-list
  
+--+--+---+-
  | id   | name | mac_address   | fixed_ips 
  |
  | 22e0ae33-1f9e-47f5-9b20-2a3277b15959 |  | fa:16:3e:82:52:ea | 
{"subnet_id": "212d47a3-93b7-400d-8322-84717c780e21", "ip_address": 
"192.168.1.13"} |
  | 2f6c4371-fffe-4834-a6aa-725ae72f4d02 |  | fa:16:3e:9c:e9:bb | 
{"subnet_id": "212d47a3-93b7-400d-8322-84717c780e21", "ip_address": 
"192.168.1.4"}  |
  | 98b62507-e21b-49e6-880e-34fa14f6b986 |  | fa:16:3e:8e:73:40 | 
{"subnet_id": "212d47a3-93b7-400d-8322-84717c780e21", "ip_address": 
"192.168.1.5"}  |
  | a9a28d02-4342-4575-9f9a-8ae70ab8bd2a |  | fa:16:3e:7d:3f:04 | 
{"subnet_id": "212d47a3-93b7-400d-8322-84717c780e21", "ip_address": 
"192.168.1.14"} |
  | b466da32-fff4-46d8-a134-d861105e966f |  | fa:16:3e:3c:81:a5 | 
{"subnet_id": "212d47a3-93b7-400d-8322-84717c780e21", "ip_address": 
"192.168.1.6"}  |
  | d3601592-0133-4c65-ab85-cc142325ce29 |  | fa:16:3e:c1:81:01 | 
{"subnet_id": "212d47a3-93b7-400d-8322-84717c780e21", "ip_address": 
"192.168.1.3"}  |
  | d67e2fa6-6805-4baa-882e-9cf54714ba20 |  | fa:16:3e:32:26:f5 | 
{"subnet_id": "212d47a3-93b7-400d-8322-84717c780e21", "ip_address": 
"192.168.1.2"}  |
  | e5b5290d-1824-4bd9-aecb-81032b38ac60 |  | fa:16:3e:58:8f:c9 | 
{"subnet_id": "212d47a3-93b7-400d-8322-84717c780e21", "ip_address": 
"192.168.1.7"}  |
  
+--+--+---+-
  [root@compute1 ~(keystone_admin)]# ip link show 
  4: enp5s0f1:  mtu 1500 qdisc mq master 
ovs-system state UP mode DEFAULT qlen 1000
  link/ether a0:36:9f:7f:28:ba brd ff:ff:ff:ff:ff:ff
  vf 0 MAC fa:16:3e:3c:81:a5, vlan 208, spoof checking on, link-state auto
  vf 1 MAC fa:16:3e:9c:e9:bb, vlan 208, spoof checking on, link-state auto
  vf 2 MAC fa:16:3e:8e:73:40, vlan 208, spoof checking on, link-state auto
  vf 3 MAC fa:16:3e:c1:81:01, vlan 208, spoof checking on, link-state auto

  [root@controller1 ~]# rpm -qa |grep neutron 
  python-neutron-7.0.4-3.el7ost.noarch
  openstack-neutron-lbaas-7.0.0-2.el7ost.noarch
  openstack-neutron-openvswitch-7.0.4-3.el7ost.noarch
  python-neutron-fwaas-7.0.0-1.el7ost.noarch
  openstack-neutron-fwaas-7.0.0-1.el7ost.noarch
  python-neutron-lbaas-7.0.0-2.el7ost.noarch
  openstack-neutron-ml2-7.0.4-3.el7ost.noarch
  python-neutronclient-3.1.0-1.el7ost.noarch
  openstack-neutron-common-7.0.4-3.el7ost.noarch
  openstack-neutron-7.0.4-3.el7ost.noarch
   
  osp 8
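  For operators hitting this, the configured VF count can be read straight
  from the `ip link show` output quoted above and compared against the
  number of direct ports by hand, since the port-create call itself does
  not enforce the limit. A sketch over the sample output:

```shell
# Count configured VFs from `ip link show` output (sample lines taken from
# the bug report above). Comparing this to the direct-port count is left to
# the operator; the shortage only surfaces when an instance requests a VF.
sample='vf 0 MAC fa:16:3e:3c:81:a5, vlan 208
vf 1 MAC fa:16:3e:9c:e9:bb, vlan 208
vf 2 MAC fa:16:3e:8e:73:40, vlan 208
vf 3 MAC fa:16:3e:c1:81:01, vlan 208'
vf_count=$(printf '%s\n' "$sample" | grep -c '^vf ')
echo "available VFs: $vf_count"
```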

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1608176/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619303] Re: Switch fullstack neutronclient to openstackclient

2023-01-17 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1619303

Title:
  Switch fullstack neutronclient to openstackclient

Status in neutron:
  Won't Fix

Bug description:
  As we're moving from neutronclient towards openstackclient, some
  features like trunk are not even implemented to neutronclient. We
  should switch in fullstack to openstackclient in order to be able to
  test such features.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1619303/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1631647] Re: Network downtime during live migration through routers

2023-01-17 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631647

Title:
  Network downtime during live migration through routers

Status in neutron:
  Won't Fix

Bug description:
  neutron/master (close to stable/newton)
  VXLAN networks with simple network node (not DVR)

  There is network downtime of several seconds during a live migration.
  The amount of time depends on when the VM resumes on the target host
  versus when the migration ‘completes’.

  When a live migration occurs, there is a point in its life cycle where
  it pauses on the source and starts up (or resumes) on the target.  At
  that point the migration isn’t complete, but the system has determined it
  is now best to be running on the target.  This of course varies per
  hypervisor, but that is the general flow for most hypervisors.

  So during the migration the port goes through a few states.
  1) Pre migration, it’s tied solely to the source host.
  2) During migration, it’s tied to the source host.  The port profile has a 
‘migrating_to’ attribute that identifies the target host.
  3) Post migration, the port is tied solely to the target host.

  
  The OVS agent handles the migration well.  It detects the port, sees the 
UUID, and treats the port properly.  But things like the router don’t seem to 
handle it properly, at least in my testing.

  It seems only once the VM hits step 3 (post migration, where nova
  updates the port to be on the target host solely) does the routing
  information get updated in the router.

  In fact, it’s kinda interesting.  I’ve been running a constant ping during the 
live migration through the router and watching it on both sides with tcpdump.  
When it resumes on the target, but live migration is not completed the 
following happens:
   - Ping request goes out from target server
   - Goes out through the router
   - Comes back into the router
   - Gets sent to the source server

  I’m not sure if this is somehow specific to vxlan.  I haven’t had a
  chance to try Geneve yet.

  This could impact projects like Watcher which will be using the live-
  migration to constantly optimize the system.  But that could be
  undesirable to optimize because it would introduce down time on the
  workloads being moved around.

  If the time between a VM resume and live migration complete is
  minimal, then the impact can be quite small (couple seconds).  If KVM
  uses post-copy, it should be susceptible to it.
  http://wiki.qemu.org/Features/PostCopyLiveMigration

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1631647/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1640029] Re: [stable/newton] Deleting heat stack failed due to error "QueuePool limit of size 50 overflow 50 reached, connection timed out, timeout 30"

2023-01-17 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1640029

Title:
  [stable/newton] Deleting heat stack failed due to error "QueuePool
  limit of size 50 overflow 50 reached, connection timed out, timeout
  30"

Status in neutron:
  Won't Fix
Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  In my stable/newton setup running on a VMware NSX platform, I brought up 5 
heat stacks each having 100 nova instances in the same /16 network.
  Deleting those heat stacks failed due to the below error.

  "
  2016-11-03 17:27:34.146 2399 ERROR nova.api.openstack.extensions 
TimeoutError: QueuePool limit of size 50 overflow 50 reached, connection timed 
out, timeout 30
  "

  Because of this error, out of 500 instances, deletion of about 67 instances
  failed.
  With default parameters in neutron.conf, I'm getting the below neutron error.

  2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource 
[req-a0022887-cc01-4f2e-980d-490136524363 admin -] delete failed: No details.
  2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  .
  .
  .
  2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2039, 
in contextual_connect
  2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource 
self._wrap_pool_connect(self.pool.connect, None),
  2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2074, 
in _wrap_pool_connect
  2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource return fn()
  2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 376, in 
connect
  2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource return 
_ConnectionFairy._checkout(self)
  2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 713, in 
_checkout
  2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource fairy = 
_ConnectionRecord.checkout(pool)
  2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 480, in 
checkout
  2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource rec = 
pool._do_get()
  2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 1053, in 
_do_get
  2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource (self.size(), 
self.overflow(), self._timeout))
  2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource TimeoutError: 
QueuePool limit of size 10 overflow 20 reached, connection timed out, timeout 10
  2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource

  After changing the below parameters in /etc/neutron/neutron.conf and

  max_pool_size = 50
  retry_interval = 10
  max_overflow = 50
  pool_max_size = 50
  pool_max_overflow = 50
  pool_timeout = 30

  below parameters in nova.conf, and restarting the services and re-
  executing the test case, deleting the heat stack still fails with the
  below error:

  max_pool_size = 50
  max_overflow = 50
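  The option names above mix several spellings; in oslo.db-based services
  such as neutron and nova, the SQLAlchemy pool settings live under the
  [database] section. A hedged sketch (values copied from the report, not
  recommendations, and no claim that raising them resolves the underlying
  load issue):

```ini
[database]
max_pool_size = 50
max_overflow = 50
pool_timeout = 30
retry_interval = 10
```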

  n-api.log:

  2016-11-03 17:27:34.146 2399 ERROR nova.api.openstack.extensions 
[req-db3d6d66-9508-4eb8-be65-964f05ff50f8 admin admin] Unexpected exception in 
API method
  2016-11-03 17:27:34.146 2399 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2016-11-03 17:27:34.146 2399 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 338, in wrapped
  2016-11-03 17:27:34.146 2399 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  .
  .
  .
  2016-11-03 17:27:34.146 2399 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 480, in 
checkout
  2016-11-03 17:27:34.146 2399 ERROR nova.api.openstack.extensions rec = 
pool._do_get()
  2016-11-03 17:27:34.146 2399 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 1053, in 
_do_get
  2016-11-03 17:27:34.146 2399 ERROR nova.api.openstack.extensions 
(self.size(), self.overflow(), self._timeout))
  2016-11-03 17:27:34.146 2399 ERROR nova.api.openstack.extensions 
TimeoutError: QueuePool limit of size 50 overflow 50 reached, connection timed 
out, timeout 30
  2016-11-03 17:27:34.146 2399 ERROR nova.api.openstack.extensions
  2016-11-03 17:27:34.148 2399 INFO nova.api.openstack.wsgi 
[req-db3d6d66-9508-4eb8-be65-964f05ff50f8 admin admin] HTTP exception thrown: 
Unexpecte

[Yahoo-eng-team] [Bug 1662821] Re: provider bridge is not created in controller node/newton/ubuntu 16.04

2023-01-17 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1662821

Title:
  provider bridge is not created in controller node/newton/ubuntu 16.04

Status in neutron:
  Won't Fix

Bug description:
  Hello,
  I am running Newton OpenStack on Ubuntu 16.04. I have one controller 
node and one compute node. I created a provider network as shown in 
http://docs.openstack.org/newton/install-guide-ubuntu/launch-instance-provider.html

  However, I didn't see any provider bridge on my controller.
  ifconfig shows only one tap interface. I have one netns 
(qdhcp-d6aee39b-8a97-4a69-98c7-9d94093f54af)

  I ping from qdhcp namespace with the following command:

  sudo ip netns exec qdhcp-d6aee39b-8a97-4a69-98c7-9d94093f54af ping 
203.0.113.111
  That is the IP address of the VM.

  The message "Destination Host Unreachable" is shown.

  I tried to debug it. I found the provider bridge is not created on the
  controller node. tcpdump shows ARP requests, but the provider interface
  does not show anything.

  In contrast, the compute node has a provider bridge. I am using the Linux
bridge agent on both controller and compute nodes. Both are running.
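  If useful for reproducing: with the Linux bridge agent, the provider
  bridge only appears once a port needs it, and traffic requires the
  physical interface mapping to be set on every node. A minimal sketch
  (the interface name eth1 is an assumption about this deployment):

```ini
# /etc/neutron/plugins/ml2/linuxbridge_agent.ini (per install-guide layout)
[linux_bridge]
physical_interface_mappings = provider:eth1
```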

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1662821/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667231] Re: ovs-agent error while processing VIF ports on compute

2023-01-17 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667231

Title:
  ovs-agent error while processing VIF ports on compute

Status in neutron:
  Won't Fix

Bug description:
  I am using the Mitaka (M) code on CentOS 7.2

  and recently found the following error in compute node
  /var/log/neutron/openvswitch-agent.log

  It seems that every several weeks there will be such an error.

  
  .ovs_neutron_agent [req-ae1d74ba-967d-48f2-8ec1-2dcfe7f20991 - - - - -] Error 
while processing VIF ports
  .ovs_neutron_agent Traceback (most recent call last):
  .ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2037, in rpc_loop
  .ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 1651, in process_network_ports
  .ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py", line 
292, in setup_port_filters
  .ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py", line 
147, in decorated_function
  .ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py", line 
172, in prepare_devices_filter
  .ovs_neutron_agent   File "/usr/lib64/python2.7/contextlib.py", line 24, in 
__exit__
  .ovs_neutron_agent self.gen.next()
  .ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/firewall.py", line 128, in 
defer_apply
  .ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_firewall.py", 
line 833, in filter_defer_apply_off
  .ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_firewall.py", 
line 818, in _remove_conntrack_entries_from_sg_updates
  .ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_firewall.py", 
line 775, in _clean_deleted_sg_rule_conntrack_entries
  .ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_conntrack.py", line 
78, in delete_conntrack_state_by_rule
  .ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_conntrack.py", line 
72, in _delete_conntrack_state
  .ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 116, in 
execute
  .ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 102, in 
execute_rootwrap_daemon
  .ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/oslo_rootwrap/client.py", line 128, in execute
  .ovs_neutron_agent res = proxy.run_one_command(cmd, stdin)
  .ovs_neutron_agent   File "", line 2, in run_one_command
  .ovs_neutron_agent   File "/usr/lib64/python2.7/multiprocessing/managers.py", 
line 773, in _callmethod
  .ovs_neutron_agent raise convert_to_error(kind, result)
  .ovs_neutron_agent RemoteError: 
  .ovs_neutron_agent 
---
  .ovs_neutron_agent Unserializable message: ('#ERROR', 
FilterMatchNotExecutable())
  .ovs_neutron_agent 
---

  
  [root@compute1 ~]# rpm -qa|grep openvswitch
  openvswitch-2.5.0-2.el7.x86_64
  openstack-neutron-openvswitch-8.1.2-el7.centos.noarch
  python-openvswitch-2.5.0-2.el7.noarch
  [root@compute1 ~]# 

  -

  Has anyone run into this?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1667231/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1672610] Re: ofctl request Datapath Invalid errors

2023-01-17 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

The other related bugs are fixed and released.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1672610

Title:
  ofctl request Datapath Invalid errors

Status in neutron:
  Won't Fix

Bug description:
  openflow connection is sometimes lost and the subsequent openflow command 
fails. ~70 occurrences in a week in logstash search.
  Retrying should save such cases.

  example:

  http://logs.openstack.org/98/436798/25/check/gate-tempest-dsvm-
  neutron-full-centos-7-nv/3d5e54b/logs/screen-neutron-
  agent.txt.gz#_2017-03-13_06_54_30_891

  2017-03-13 06:54:30.896 16978 ERROR OfctlService [-] unknown dpid 
143366513125697
  2017-03-13 06:54:30.897 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch 
[req-a1036712-2598-48a8-8a38-8e7eacc0ed8f None None] ofctl request 
version=None,msg_type=None,msg_len=None,xid=None,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1)
 error Datapath Invalid 143366513125697
  2017-03-13 06:54:30.898 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int 
[req-a1036712-2598-48a8-8a38-8e7eacc0ed8f None None] Failed to communicate with 
the switch
  2017-03-13 06:54:30.898 16978 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int Traceback 
(most recent call last):
  2017-03-13 06:54:30.898 16978 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_int.py",
 line 52, in check_canary_table
  2017-03-13 06:54:30.898 16978 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int flows 
= self.dump_flows(constants.CANARY_TABLE)
  2017-03-13 06:54:30.898 16978 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py",
 line 131, in dump_flows
  2017-03-13 06:54:30.898 16978 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int 
reply_multi=True)
  2017-03-13 06:54:30.898 16978 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py",
 line 79, in _send_msg
  2017-03-13 06:54:30.898 16978 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int raise 
RuntimeError(m)
  2017-03-13 06:54:30.898 16978 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int 
RuntimeError: ofctl request 
version=None,msg_type=None,msg_len=None,xid=None,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1)
 error Datapath Invalid 143366513125697
  2017-03-13 06:54:30.898 16978 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int
  2017-03-13 06:54:30.907 WARNING 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-a1036712-2598-48a8-8a38-8e7eacc0ed8f None None] OVS is dead. 
OVSNeutronAgent will keep running and checking OVS status periodically.
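
The report suggests that retrying should cover these transient "Datapath
Invalid" failures. A minimal retry wrapper along those lines might look like
this (a sketch only; `flaky_ofctl_request` and the exception type are
illustrative stand-ins, not the actual Neutron code):

```python
import time


def retry_on_error(func, attempts=3, delay=0.5, exc_type=RuntimeError):
    """Call func(); on exc_type, retry up to `attempts` times in total."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except exc_type:
            if attempt == attempts:
                raise  # out of retries, propagate the last error
            time.sleep(delay)


# Demo: an operation that fails twice with a transient error, then succeeds.
calls = {'n': 0}

def flaky_ofctl_request():
    calls['n'] += 1
    if calls['n'] < 3:
        raise RuntimeError('ofctl request error Datapath Invalid')
    return 'flow stats'

result = retry_on_error(flaky_ofctl_request, attempts=3, delay=0)
```

With a couple of retries like this, the ~70 weekly occurrences of a briefly
lost openflow connection would mostly be absorbed instead of failing the job.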

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1672610/+subscriptions




[Yahoo-eng-team] [Bug 1991000] Re: [tripleo] Provide a tag to the container that will be used to kill it

2023-01-17 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/865018
Committed: 
https://opendev.org/openstack/neutron/commit/3d575f8bd066ce2eb46353a49a8c6850ba9e4387
Submitter: "Zuul (22348)"
Branch:master

commit 3d575f8bd066ce2eb46353a49a8c6850ba9e4387
Author: Rodolfo Alonso Hernandez 
Date:   Mon Nov 14 05:26:52 2022 +0100

Add an env variable "PROCESS_TAG" in ``ProcessManager``

Added a new environment variable "PROCESS_TAG" in ``ProcessManager``.
This environment variable could be read by the process executed and
is unique per process. This environment variable can be used to tag
the running process; for example, a container manager can use this
tag to mark a container.

This feature will be used by TripleO to identify the running containers
with a unique tag. This will make the "kill" process easier; it will
only be necessary to find the container running with this tag.

Closes-Bug: #1991000
Change-Id: I234c661720a8b1ceadb5333181890806f79dc21a


** Changed in: neutron
   Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/1991000

Title:
  [tripleo] Provide a tag to the container that will be used to kill it

Status in neutron:
  Fix Released
Status in tripleo:
  New

Bug description:
  TripleO uses containers to spawn the different processes. Some of these 
processes (some Neutron agents) also spawn long-lived child processes that run 
in parallel to the main one. This is the list of them:
  * dibbler
  * dnsmasq
  * haproxy
  * keepalived
  * neutron-keepalived-state-change
  * radvd

  TripleO uses a set of scripts that replaces those processes. When
  Neutron call a script, it actually starts a sidecar container running
  the needed process. When the agent needs to stop the process, there is
  a kill script [1] that replaces the "kill" CLI call. This kill script
  uses the PID of the process to find the container ID and then to send
  the needed signal (hup, term, kill).

  To find the container ID, the script reads "/proc/$PID/cgroup" and
  parses the output. This is a weak method that depends on the output of
  this file.

  This bug proposes to spawn the containers with a label:
$ podman run --label neutron_tag="container_UUID"

  This container UUID could be the "ProcessManager.uuid" itself. This UUID will 
be unique and will identify the container. If passed both at creation and at kill time, 
the kill script can use this UUID to find this specific container:
$ podman ps --filter "label=neutron_tag=container_UUID"

  [1]https://github.com/openstack/tripleo-heat-
  templates/blob/master/deployment/neutron/kill-script
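
The mechanism described above, passing a unique tag to the spawned process
through its environment so it can later be located, can be sketched as follows
(illustrative only; the real change lives in Neutron's ``ProcessManager`` and
the TripleO kill script):

```python
import os
import subprocess
import sys
import uuid

# A unique tag per managed process, analogous to ProcessManager.uuid.
tag = str(uuid.uuid4())
env = dict(os.environ, PROCESS_TAG=tag)

# The spawned process can read PROCESS_TAG from its environment; a
# container manager could likewise attach it as a label, e.g.
#   podman run --label neutron_tag=$PROCESS_TAG ...
#   podman ps --filter "label=neutron_tag=$PROCESS_TAG"
out = subprocess.run(
    [sys.executable, '-c', 'import os; print(os.environ["PROCESS_TAG"])'],
    env=env, capture_output=True, text=True, check=True,
).stdout.strip()
```

Because the tag is injected at spawn time, the kill side no longer needs to
parse "/proc/$PID/cgroup" to map a PID back to its container.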

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1991000/+subscriptions




[Yahoo-eng-team] [Bug 2003022] Re: [FT] Error in neutron.tests.functional.agent.l3.test_keepalived_state_change.TestMonitorDaemon.test_read_queue_change_state

2023-01-17 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/870679
Committed: 
https://opendev.org/openstack/neutron/commit/f28e4165993f0c226bd5af54ad6d8c6bb48bff69
Submitter: "Zuul (22348)"
Branch:master

commit f28e4165993f0c226bd5af54ad6d8c6bb48bff69
Author: Miguel Lavalle 
Date:   Mon Jan 16 16:41:20 2023 -0600

Add 3 secs to wait for keepalived state change

We add 3 seconds to the keepalived state change to avoid timeout
failures in test_read_queue_change_state.

Change-Id: Ic54bd6f699b91b53f22c93c972d8def6c99d0eb7
Closes-Bug: #2003022


** Changed in: neutron
   Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/2003022

Title:
  [FT] Error in
  
neutron.tests.functional.agent.l3.test_keepalived_state_change.TestMonitorDaemon.test_read_queue_change_state

Status in neutron:
  Fix Released

Bug description:
  Log:
  
https://c50fdb7f046159692f4d-3059cf1890ea1358c70d952067d56657.ssl.cf2.rackcdn.com/869388/1/check/neutron-
  functional-with-uwsgi/1e50279/testr_results.html

  Snippet: https://paste.openstack.org/show/b7bQPwYdNJZMqTIBwVOU/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2003022/+subscriptions




[Yahoo-eng-team] [Bug 2003063] [NEW] tempest slow jobs fails

2023-01-17 Thread Slawek Kaplonski
Public bug reported:

Since 13.01.2023 both tempest-slow jobs: neutron-ovn-tempest-slow and
neutron-ovs-tempest-slow have been failing, always with the same 2 tests:

tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_port_security_macspoofing_port
tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_subnet_details

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure tempest

https://bugs.launchpad.net/bugs/2003063

Title:
  tempest slow jobs fails

Status in neutron:
  Confirmed

Bug description:
  Since 13.01.2023 both tempest-slow jobs: neutron-ovn-tempest-slow and
  neutron-ovs-tempest-slow have been failing, always with the same 2 tests:

  
tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_port_security_macspoofing_port
  
tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_subnet_details

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2003063/+subscriptions




[Yahoo-eng-team] [Bug 1940425] Re: test_live_migration_with_trunk tempest test fails due to port remains in down state

2023-01-17 Thread Slawek Kaplonski
New occurrence of the issue:
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_558/867769/11/gate/neutron-
ovs-tempest-multinode-full/558cfa3/testr_results.html

Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 89, in 
wrapper
return func(*func_args, **func_kwargs)
  File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 70, in 
wrapper
return f(*func_args, **func_kwargs)
  File "/opt/stack/tempest/tempest/api/compute/admin/test_live_migration.py", 
line 286, in test_live_migration_with_trunk
self.assertTrue(
  File "/usr/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true

** Changed in: neutron
   Status: Fix Released => Confirmed

https://bugs.launchpad.net/bugs/1940425

Title:
  test_live_migration_with_trunk tempest test fails due to port remains
  in down state

Status in neutron:
  Confirmed
Status in OpenStack Compute (nova):
  Invalid
Status in os-vif:
  Incomplete

Bug description:
  Example failure is in [1]:

  2021-08-18 10:40:52,334 124842 DEBUG[tempest.lib.common.utils.test_utils] 
Call _is_port_status_active returns false in 60.00 seconds
  }}}

  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 89, in 
wrapper
  return func(*func_args, **func_kwargs)
File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 70, in 
wrapper
  return f(*func_args, **func_kwargs)
File "/opt/stack/tempest/tempest/api/compute/admin/test_live_migration.py", 
line 281, in test_live_migration_with_trunk
  self.assertTrue(
File "/usr/lib/python3.8/unittest/case.py", line 765, in assertTrue
  raise self.failureException(msg)
  AssertionError: False is not true

  Please note that a similar bug was reported and fixed previously
  https://bugs.launchpad.net/tempest/+bug/1924258 It seems that fix did
  not fully solve the issue.

  It is not super frequent; I saw 4 occurrences in the last 30 days [2].

  [1] 
https://zuul.opendev.org/t/openstack/build/fdbda223dc10456db58f922b6435f680/logs
  [2] https://paste.opendev.org/show/808166/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1940425/+subscriptions




[Yahoo-eng-team] [Bug 1964313] Re: Functional tests fails due to missing namespace

2023-01-17 Thread Slawek Kaplonski
** Changed in: neutron
   Status: Confirmed => Invalid

https://bugs.launchpad.net/bugs/1964313

Title:
  Functional tests fails due to missing namespace

Status in neutron:
  Invalid

Bug description:
  It happened at least twice recently that functional tests failed due
  to missing qrouter- namespace. Examples:

  https://zuul.opendev.org/t/openstack/build/f8510ec1f1ca4329a5bb28fb1a38614c
  https://zuul.openstack.org/build/31bd6bca81f34c6c9af385daf6f64ee8

  Stacktrace:

  2022-03-02 11:34:08.097415 | controller | Captured traceback:
  2022-03-02 11:34:08.097427 | controller | ~~~
  2022-03-02 11:34:08.097439 | controller | Traceback (most recent call 
last):
  2022-03-02 11:34:08.097451 | controller |
  2022-03-02 11:34:08.097462 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", line 
718, in wait_until_true
  2022-03-02 11:34:08.097475 | controller | eventlet.sleep(sleep)
  2022-03-02 11:34:08.097486 | controller |
  2022-03-02 11:34:08.097507 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/eventlet/greenthread.py",
 line 36, in sleep
  2022-03-02 11:34:08.097520 | controller | hub.switch()
  2022-03-02 11:34:08.097531 | controller |
  2022-03-02 11:34:08.097542 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/eventlet/hubs/hub.py",
 line 313, in switch
  2022-03-02 11:34:08.097554 | controller | return self.greenlet.switch()
  2022-03-02 11:34:08.097565 | controller |
  2022-03-02 11:34:08.097575 | controller | eventlet.timeout.Timeout: 60 
seconds
  2022-03-02 11:34:08.097586 | controller |
  2022-03-02 11:34:08.097597 | controller |
  2022-03-02 11:34:08.097607 | controller | During handling of the above 
exception, another exception occurred:
  2022-03-02 11:34:08.097619 | controller |
  2022-03-02 11:34:08.097629 | controller |
  2022-03-02 11:34:08.097640 | controller | Traceback (most recent call 
last):
  2022-03-02 11:34:08.097650 | controller |
  2022-03-02 11:34:08.097661 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 183, 
in func
  2022-03-02 11:34:08.097672 | controller | return f(self, *args, **kwargs)
  2022-03-02 11:34:08.097683 | controller |
  2022-03-02 11:34:08.097693 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 183, 
in func
  2022-03-02 11:34:08.097704 | controller | return f(self, *args, **kwargs)
  2022-03-02 11:34:08.097714 | controller |
  2022-03-02 11:34:08.097725 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l3/test_ha_router.py",
 line 88, in test_ha_router_lifecycle
  2022-03-02 11:34:08.097736 | controller | router_info = 
self._router_lifecycle(enable_ha=True)
  2022-03-02 11:34:08.097747 | controller |
  2022-03-02 11:34:08.097758 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l3/framework.py",
 line 336, in _router_lifecycle
  2022-03-02 11:34:08.097769 | controller | 
common_utils.wait_until_true(lambda: router.ha_state == 'primary')
  2022-03-02 11:34:08.097780 | controller |
  2022-03-02 11:34:08.097791 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", line 
723, in wait_until_true
  2022-03-02 11:34:08.097807 | controller | raise WaitTimeout(_("Timed out 
after %d seconds") % timeout)
  2022-03-02 11:34:08.097819 | controller |
  2022-03-02 11:34:08.097830 | controller | 
neutron.common.utils.WaitTimeout: Timed out after 60 seconds
  2022-03-02 11:34:08.097840 | controller |
  2022-03-02 11:34:08.097851 | controller |
  2022-03-02 11:34:08.097861 | controller | Captured stderr:
  2022-03-02 11:34:08.097872 | controller | 
  2022-03-02 11:34:08.097882 | controller | Traceback (most recent call 
last):
  2022-03-02 11:34:08.097899 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/eventlet/hubs/hub.py",
 line 476, in fire_timers
  2022-03-02 11:34:08.097911 | controller | timer()
  2022-03-02 11:34:08.097923 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/eventlet/hubs/timer.py",
 line 59, in __call__
  2022-03-02 11:34:08.097934 | controller | cb(*args, **kw)
  2022-03-02 11:34:08.097945 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/eventlet/greenthread.py",
 line 221, in main
  2022-03-02 11:34:08.097956 | controller | result = function(*args, 
**kwargs)
  2022-03-02 11:34:0
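
For reference, the ``wait_until_true`` helper raising the timeout in the
traceback above polls a predicate until it holds; simplified (without
eventlet) it behaves roughly like this:

```python
import time


class WaitTimeout(Exception):
    """Stand-in for neutron.common.utils.WaitTimeout."""


def wait_until_true(predicate, timeout=60, sleep=1):
    """Poll predicate() until it returns True, or raise WaitTimeout."""
    deadline = time.monotonic() + timeout
    while not predicate():
        if time.monotonic() > deadline:
            raise WaitTimeout('Timed out after %d seconds' % timeout)
        time.sleep(sleep)


# Demo: a condition that becomes true on the third poll, as the HA router
# state eventually would once the namespace and keepalived are up.
state = {'polls': 0}

def becomes_primary():
    state['polls'] += 1
    return state['polls'] >= 3

wait_until_true(becomes_primary, timeout=5, sleep=0)
```

The functional test fails when the polled condition (here, the router
reaching the 'primary' HA state) never becomes true within the 60-second
window.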

[Yahoo-eng-team] [Bug 1982720] Re: stable/train: neutron-grenade job consistently fails in reqirements repo

2023-01-17 Thread Slawek Kaplonski
** Changed in: neutron
   Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/1982720

Title:
  stable/train: neutron-grenade job consistently fails in reqirements
  repo

Status in neutron:
  Fix Released

Bug description:
  Currently the neutron-grenade job in stable/train branch of
  requirements repo consistently fails.

  Example:
  https://zuul.opendev.org/t/openstack/build/1a4f23e400a3491b88b161e50878753a

  Looking at the job-output.txt, it seems the installation gets stuck at some
  point, but I could not find the actual cause because no logs were captured.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1982720/+subscriptions




[Yahoo-eng-team] [Bug 1986682] Re: [stable/stein][stable/rocky][stable/queens] CI jobs failing

2023-01-17 Thread Slawek Kaplonski
All mentioned branches are EOL now, so I'm marking it as Won't fix due
to EOL.

** Changed in: neutron
   Status: New => Won't Fix

https://bugs.launchpad.net/bugs/1986682

Title:
  [stable/stein][stable/rocky][stable/queens] CI jobs failing

Status in neutron:
  Won't Fix

Bug description:
  Backport of I320ac2306e0f25ff933d8271203e192486062d61 showed that gates in 
stein and older branches are not in good shape currently:
  [stable/stein] https://review.opendev.org/c/openstack/neutron/+/852752
neutron-tempest-plugin-api-stein
neutron-tempest-plugin-scenario-linuxbridge-stein
openstack-tox-py35
neutron-grenade
neutron-functional-python27
neutron-grenade-multinode
neutron-grenade-dvr-multinode
grenade-py3
  [stable/rocky] https://review.opendev.org/c/openstack/neutron/+/852753
all jobs failed with RETRY_LIMIT
File 
"/tmp/ansible_zuul_console_payload_jmrk60u_/ansible_zuul_console_payload.zip/ansible/modules/zuul_console.py",
 line 188
  conn.send(f'{ZUUL_CONSOLE_PROTO_VERSION}\n'.encode('utf-8'))
^
  SyntaxError: invalid syntax

   this may already have been fixed recently in rocky (a general OpenStack CI 
issue)
  [stable/queens] https://review.opendev.org/c/openstack/neutron/+/852754
similar sea of red RETRY_LIMIT and same error as rocky

  
  With the recent "Debian Buster with OpenStack Rocky will receive LTS support" 
mail from 
https://lists.openstack.org/pipermail/openstack-discuss/2022-August/029910.html 
we should at least get stein, and then rocky, back in shape.
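
The RETRY_LIMIT failures above come from an f-string in the Ansible
zuul_console module, a syntax only available on Python >= 3.6, so the module
fails to parse at all on the older interpreters those branches use. A
version-agnostic equivalent of that line would be (the constant's value is
assumed here purely for illustration):

```python
# Assumed value for illustration; the real constant lives in zuul_console.py.
ZUUL_CONSOLE_PROTO_VERSION = 1

# Python >= 3.6 only (the form that raised SyntaxError on older interpreters):
#   conn.send(f'{ZUUL_CONSOLE_PROTO_VERSION}\n'.encode('utf-8'))

# Works on Python 2.7 and all Python 3.x alike:
payload = '{}\n'.format(ZUUL_CONSOLE_PROTO_VERSION).encode('utf-8')
```

Since the SyntaxError is raised at parse time, the whole module fails before
any code runs, which is why every job on those branches hits RETRY_LIMIT.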

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1986682/+subscriptions




[Yahoo-eng-team] [Bug 1995031] Re: [CI][periodic] neutron-functional-with-uwsgi-fips job failing

2023-01-17 Thread Slawek Kaplonski
It is likely fixed by
https://review.opendev.org/c/openstack/neutron/+/856261 so I'm closing
it now.

** Changed in: neutron
   Status: Confirmed => Fix Released

https://bugs.launchpad.net/bugs/1995031

Title:
  [CI][periodic] neutron-functional-with-uwsgi-fips job failing

Status in neutron:
  Fix Released

Bug description:
  Sometimes the periodic job neutron-functional-with-uwsgi-fips fails on
  the following two tests:

  neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter
  test_dvr_router_with_centralized_fip_calls_keepalived_cidr [1]
  test_dvr_router_snat_namespace_with_interface_remove [2]

  
  Latest builds: 
https://zuul.openstack.org/builds?job_name=neutron-functional-with-uwsgi-fips&project=openstack%2Fneutron&branch=master&pipeline=periodic&skip=0

  
  1) 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_108/periodic/opendev.org/openstack/neutron/master/neutron-functional-with-uwsgi-fips/10804fa/testr_results.html

  2)
  
https://662fbc83c91c32a8789e-45518917cf8baf33fe991d0324b9a061.ssl.cf2.rackcdn.com/periodic/opendev.org/openstack/neutron/master/neutron-
  functional-with-uwsgi-fips/4cad1c3/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1995031/+subscriptions




[Yahoo-eng-team] [Bug 1905551] Re: functional: test_gateway_chassis_rebalance fails

2023-01-17 Thread Slawek Kaplonski
*** This bug is a duplicate of bug 1956344 ***
https://bugs.launchpad.net/bugs/1956344

** This bug has been marked a duplicate of bug 1956344
   Functional test test_gateway_chassis_rebalance is failing intermittently

https://bugs.launchpad.net/bugs/1905551

Title:
  functional: test_gateway_chassis_rebalance fails

Status in neutron:
  Confirmed

Bug description:
  The test failure doesn't report much:

  ft1.10: 
neutron.tests.functional.services.ovn_l3.test_plugin.TestRouter.test_gateway_chassis_rebalancetesttools.testresult.real._StringException:
 Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/services/ovn_l3/test_plugin.py",
 line 496, in test_gateway_chassis_rebalance
  self.assertTrue(self.cr_lrp_pb_event.wait(logical_port))
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/unittest2/case.py",
 line 702, in assertTrue
  raise self.failureException(msg)
  AssertionError: False is not true

  
  Observed here: 
https://ea5c37c06ce6e77863cd-be2db655edae902b1f8d9628c9b7e990.ssl.cf1.rackcdn.com/753847/18/gate/neutron-functional-with-uwsgi/4fa2ab3/testr_results.html
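
The failing assertion waits on an OVSDB row event for the chassis-resident
port; the wait-with-timeout pattern it relies on can be sketched with a plain
`threading.Event` (simplified; the real test uses an OVSDB ``WaitEvent``
keyed on the logical port name, and the class below is a hypothetical
stand-in):

```python
import threading


class PortBindingEvent:
    """Minimal stand-in for the test's cr_lrp_pb_event helper."""

    def __init__(self):
        self._event = threading.Event()
        self._port = None

    def matched(self, logical_port):
        # Called when the awaited port-binding row appears in the OVSDB.
        self._port = logical_port
        self._event.set()

    def wait(self, logical_port, timeout=1):
        # Returns False (so assertTrue fails) if the binding never shows up.
        return self._event.wait(timeout) and self._port == logical_port


ev = PortBindingEvent()
# Simulate the binding arriving asynchronously from another thread.
t = threading.Timer(0.05, ev.matched, args=('cr-lrp-example',))
t.start()
ok = ev.wait('cr-lrp-example', timeout=2)
t.join()
```

When the lrp is never bound within the timeout, ``wait`` returns False and
the test reports the bare "AssertionError: False is not true" seen above.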

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1905551/+subscriptions




[Yahoo-eng-team] [Bug 1911214] Re: Scenario test test_multiple_ports_secgroup_inheritance fails in ovn scenario job

2023-01-17 Thread Slawek Kaplonski
** Changed in: neutron
   Status: Incomplete => Fix Released

https://bugs.launchpad.net/bugs/1911214

Title:
  Scenario test test_multiple_ports_secgroup_inheritance fails in ovn
  scenario job

Status in neutron:
  Fix Released

Bug description:
  Failure:

  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_security_groups.py",
 line 341, in test_multiple_ports_secgroup_inheritance
  self.ping_ip_address(fip['floating_ip_address'])
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 449, in ping_ip_address
  self.assertTrue(result)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/unittest2/case.py",
 line 702, in assertTrue
  raise self.failureException(msg)
  AssertionError: False is not true

  Logs:
  
https://bfc2304b36c89dd5efde-d71f4126f88f4263fd488933444cea49.ssl.cf1.rackcdn.com/740569/2/check/neutron-
  tempest-plugin-scenario-ovn/026535a/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1911214/+subscriptions




[Yahoo-eng-team] [Bug 1912369] Re: [FT] "test_gateway_chassis_rebalance" failing because lrp is not bound

2023-01-17 Thread Slawek Kaplonski
*** This bug is a duplicate of bug 1956344 ***
https://bugs.launchpad.net/bugs/1956344

** This bug is no longer a duplicate of bug 1905551
   functional: test_gateway_chassis_rebalance fails
** This bug has been marked a duplicate of bug 1956344
   Functional test test_gateway_chassis_rebalance is failing intermittently

https://bugs.launchpad.net/bugs/1912369

Title:
  [FT] "test_gateway_chassis_rebalance" failing because lrp is not bound

Status in neutron:
  Confirmed

Bug description:
  "test_gateway_chassis_rebalance" failing because lrp is not bound on
  time.

  Logs:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_1f1/764433/7/gate/neutron-
  functional-with-uwsgi/1f14b81/testr_results.html

  Snippet: http://paste.openstack.org/show/801740/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1912369/+subscriptions




[Yahoo-eng-team] [Bug 1918266] Re: Functional test test_gateway_chassis_rebalance failing due to "failed to bind logical router"

2023-01-17 Thread Slawek Kaplonski
*** This bug is a duplicate of bug 1956344 ***
https://bugs.launchpad.net/bugs/1956344

** This bug is no longer a duplicate of bug 1905551
   functional: test_gateway_chassis_rebalance fails
** This bug has been marked a duplicate of bug 1956344
   Functional test test_gateway_chassis_rebalance is failing intermittently

https://bugs.launchpad.net/bugs/1918266

Title:
  Functional test test_gateway_chassis_rebalance failing due to "failed
  to bind logical router"

Status in neutron:
  Confirmed

Bug description:
  Error example:
  
https://40b5766ce602bfb4b663-445d4465f34d2b24df5d805a76ff9803.ssl.cf1.rackcdn.com/765846/7/check/neutron-
  functional-with-uwsgi/928e7cf/testr_results.html

  Stacktrace: 
  ft1.14: 
neutron.tests.functional.services.ovn_l3.test_plugin.TestRouter.test_gateway_chassis_rebalancetesttools.testresult.real._StringException:
 Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/services/ovn_l3/test_plugin.py",
 line 497, in test_gateway_chassis_rebalance
  self.assertTrue(self.cr_lrp_pb_event.wait(logical_port),
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/unittest2/case.py",
 line 702, in assertTrue
  raise self.failureException(msg)
  AssertionError: False is not true : lrp 
cr-lrp-488b3887-1770-4a72-86cb-7306e78c954a failed to bind

  Logstash query:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22line%20497%2C%20in%20test_gateway_chassis_rebalance%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1918266/+subscriptions




[Yahoo-eng-team] [Bug 1928764] Re: Fullstack test TestUninterruptedConnectivityOnL2AgentRestart failing often with LB agent

2023-01-17 Thread Slawek Kaplonski
** Changed in: neutron
   Status: Confirmed => Fix Released

https://bugs.launchpad.net/bugs/1928764

Title:
  Fullstack test TestUninterruptedConnectivityOnL2AgentRestart failing
  often with LB agent

Status in neutron:
  Fix Released

Bug description:
  It seems that test
  
neutron.tests.fullstack.test_connectivity.TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart
  in various LB scenarios (flat, vxlan network) is failing recently
  pretty often.

  Examples of failures:

  
https://09f8e4e92bfb8d2ac89d-b41143eab52d80358d8555f964e9341b.ssl.cf5.rackcdn.com/670611/13/check/neutron-fullstack-with-uwsgi/8f51833/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_400/790288/1/check/neutron-fullstack-with-uwsgi/40025f9/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_400/790288/1/check/neutron-fullstack-with-uwsgi/40025f9/testr_results.html
  
https://0603beb4ddbd36de1165-42644bdefd5590a8f7e4e2e8a8a4112f.ssl.cf5.rackcdn.com/787956/1/check/neutron-fullstack-with-uwsgi/7640987/testr_results.html
  
https://e978bdcfc0235dcd9417-6560bc3b6382c1d289b358872777ca09.ssl.cf1.rackcdn.com/787956/1/check/neutron-fullstack-with-uwsgi/779913e/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0cb/789648/5/check/neutron-fullstack-with-uwsgi/0cb6d65/testr_results.html

  Stacktrace:

  ft1.1: 
neutron.tests.fullstack.test_connectivity.TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart(LB,Flat
 network)testtools.testresult.real._StringException: Traceback (most recent 
call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/test_connectivity.py",
 line 236, in test_l2_agent_restart
  self._assert_ping_during_agents_restart(
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/base.py", 
line 123, in _assert_ping_during_agents_restart
  common_utils.wait_until_true(
File "/usr/lib/python3.8/contextlib.py", line 120, in __exit__
  next(self.gen)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/common/net_helpers.py",
 line 147, in async_ping
  f.result()
File "/usr/lib/python3.8/concurrent/futures/_base.py", line 432, in result
  return self.__get_result()
File "/usr/lib/python3.8/concurrent/futures/_base.py", line 388, in 
__get_result
  raise self._exception
File "/usr/lib/python3.8/concurrent/futures/thread.py", line 57, in run
  result = self.fn(*self.args, **self.kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/common/net_helpers.py",
 line 128, in assert_async_ping
  ns_ip_wrapper.netns.execute(
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/ip_lib.py", 
line 718, in execute
  return utils.execute(cmd, check_exit_code=check_exit_code,
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/utils.py", 
line 156, in execute
  raise exceptions.ProcessExecutionError(msg,
  neutron_lib.exceptions.ProcessExecutionError: Exit code: 1; Cmd: ['ip', 
'netns', 'exec', 'test-af70cf3a-c531-4fdf-ab4c-31cc69cc2c56', 'ping', '-W', 2, 
'-c', '1', '20.0.0.212']; Stdin: ; Stdout: PING 20.0.0.212 (20.0.0.212) 56(84) 
bytes of data.

  --- 20.0.0.212 ping statistics ---
  1 packets transmitted, 0 received, 100% packet loss, time 0ms

  ; Stderr:


  I checked the linuxbridge-agent logs (2 cases) and found an error like
  the one below:

  2021-05-13 15:46:07.721 96421 DEBUG oslo.privsep.daemon [-] privsep: 
reply[139960964907248]: (4, ()) _call_back 
/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/oslo_privsep/daemon.py:510
  2021-05-13 15:46:07.725 96421 DEBUG oslo.privsep.daemon [-] privsep: 
reply[139960964907248]: (4, None) _call_back 
/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/oslo_privsep/daemon.py:510
  2021-05-13 15:46:07.728 96421 DEBUG oslo.privsep.daemon [-] privsep: 
Exception during request[139960964907248]: Network interface brqa235fa8c-09 not 
found in namespace None. _process_cmd 
/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/oslo_privsep/daemon.py:488
  Traceback (most recent call last):
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/oslo_privsep/daemon.py",
 line 485, in _process_cmd
  ret = func(*f_args, **f_kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-

[Yahoo-eng-team] [Bug 1940425] Re: test_live_migration_with_trunk tempest test fails due to port remaining in down state

2023-01-17 Thread Slawek Kaplonski
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1940425

Title:
  test_live_migration_with_trunk tempest test fails due to port remaining
  in down state

Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in os-vif:
  Incomplete

Bug description:
  Example failure is in [1]:

  2021-08-18 10:40:52,334 124842 DEBUG[tempest.lib.common.utils.test_utils] 
Call _is_port_status_active returns false in 60.00 seconds
  }}}

  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 89, in 
wrapper
  return func(*func_args, **func_kwargs)
File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 70, in 
wrapper
  return f(*func_args, **func_kwargs)
File "/opt/stack/tempest/tempest/api/compute/admin/test_live_migration.py", 
line 281, in test_live_migration_with_trunk
  self.assertTrue(
File "/usr/lib/python3.8/unittest/case.py", line 765, in assertTrue
  raise self.failureException(msg)
  AssertionError: False is not true

  Please note that a similar bug was reported and fixed previously:
  https://bugs.launchpad.net/tempest/+bug/1924258 It seems that fix did
  not fully solve the issue.

  It is not super frequent; I saw 4 occurrences in the last 30 days [2].

  [1] 
https://zuul.opendev.org/t/openstack/build/fdbda223dc10456db58f922b6435f680/logs
  [2] https://paste.opendev.org/show/808166/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1940425/+subscriptions




[Yahoo-eng-team] [Bug 1862177] Re: Fullstack tests failing due to problem with connection to the fake placement service

2023-01-17 Thread Slawek Kaplonski
** Changed in: neutron
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1862177

Title:
  Fullstack tests failing due to problem with connection to the fake
  placement service

Status in neutron:
  Fix Released

Bug description:
  Tests like
  
neutron.tests.fullstack.test_agent_bandwidth_report.TestPlacementBandwidthReport.test_configurations_are_synced_towards_placement(NIC
  Switch agent) are failing from time to time due to a problem with the
  connection from neutron-server to the fake placement service.

  An example of such an error:
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_f66/703143/3/check/neutron-
  fullstack/f667c93/testr_results.html

  Error in neutron-server:

  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client 
[req-29a0f974-e549-4784-924e-0ee82cc8c910 - - - - -] Connection Error appeared: 
requests.exceptions.ConnectionError: HTTPConnectionPool(host='127.0.0.1', 
port=8081): Max retries exceeded with url: 
/placement/resource_providers?name=ubuntu-bionic-ovh-bhs1-0014319701 (Caused by 
NewConnectionError(': Failed to establish a new connection: [Errno 111] 
ECONNREFUSED',))
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client Traceback 
(most recent call last):
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack/lib/python3.6/site-packages/urllib3/connection.py",
 line 157, in _new_conn
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client 
(self._dns_host, self.port), self.timeout, **extra_kw
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack/lib/python3.6/site-packages/urllib3/util/connection.py",
 line 84, in create_connection
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client raise err
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack/lib/python3.6/site-packages/urllib3/util/connection.py",
 line 74, in create_connection
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client 
sock.connect(sa)
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack/lib/python3.6/site-packages/eventlet/greenio/base.py",
 line 267, in connect
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client 
socket_checkerr(fd)
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack/lib/python3.6/site-packages/eventlet/greenio/base.py",
 line 51, in socket_checkerr
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client raise 
socket.error(err, errno.errorcode[err])
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client 
ConnectionRefusedError: [Errno 111] ECONNREFUSED
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client 
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client During 
handling of the above exception, another exception occurred:
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client 
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client Traceback 
(most recent call last):
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack/lib/python3.6/site-packages/urllib3/connectionpool.py",
 line 672, in urlopen
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client 
chunked=chunked,
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack/lib/python3.6/site-packages/urllib3/connectionpool.py",
 line 387, in _make_request
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client 
conn.request(method, url, **httplib_request_kw)
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client   File 
"/usr/lib/python3.6/http/client.py", line 1254, in request
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client 
self._send_request(method, url, body, headers, encode_chunked)
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client   File 
"/usr/lib/python3.6/http/client.py", line 1300, in _send_request
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client 
self.endheaders(body, encode_chunked=encode_chunked)
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client   File 
"/usr/lib/python3.6/http/client.py", line 1249, in endheaders
  2020-02-04 12:09:41.164 29846 ERROR neutron_lib.placement.client 
self._send_output(message_body, encode_chunked=encode_chunked)
  202

[Yahoo-eng-team] [Bug 1893188] Re: [tempest] Randomize subnet CIDR to avoid test clashes

2023-01-17 Thread Slawek Kaplonski
** Changed in: neutron
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1893188

Title:
  [tempest] Randomize subnet CIDR to avoid test clashes

Status in neutron:
  Fix Released

Bug description:
  Although each network is created with a different project_id, we still have 
errors like [1][2]:
  """
  neutronclient.common.exceptions.BadRequest: Invalid input for operation: 
Requested subnet with cidr: 20.0.0.0/24 for network: 
0eb8805e-8307-4c1e-86cd-6e764f6c4f9f overlaps with another subnet.
  Neutron server returns request_ids: 
['req-00b171b6-6d20-46d8-a859-b6e870034e7d']
  """

  
  
[1]https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0b7/745330/1/check/neutron-fullstack-with-uwsgi/0b75784/testr_results.html
  [2]http://paste.openstack.org/show/797203/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1893188/+subscriptions




[Yahoo-eng-team] [Bug 2000164] Re: [FT] Error in "test_dvr_update_gateway_port_with_no_gw_port_in_namespace"

2023-01-17 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/869205
Committed: 
https://opendev.org/openstack/neutron/commit/c3620166204d2d12729ca1eba92feaadbe802d42
Submitter: "Zuul (22348)"
Branch:master

commit c3620166204d2d12729ca1eba92feaadbe802d42
Author: Slawek Kaplonski 
Date:   Wed Jan 4 13:40:45 2023 +0100

Ensure that MAC address of the device is set correctly

For unknown (to me at least) reasons, we sometimes observed, e.g. in the
CI jobs, that interfaces created by e.g. the L3 agent did not have their
MAC address properly set to the one generated by Neutron.
To avoid that, this patch adds a check that the requested MAC was
actually set on the device before moving on to configure the MTU and
other attributes of the device.

Co-Authored-By: Brian Haley 

Closes-bug: #2000164
Change-Id: I23facc53795a9592ccb137c60fb1f356406a4e00
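
The verify-before-continue pattern described in the commit message can be
sketched as follows (a minimal, self-contained sketch: wait_until_true is a
simplified stand-in for neutron.common.utils.wait_until_true, and FakeDevice
is a hypothetical device whose MAC only settles after a delay, standing in
for the kernel applying the address asynchronously):

```python
import time


def wait_until_true(predicate, timeout=5, sleep=0.05):
    """Simplified stand-in for neutron.common.utils.wait_until_true."""
    deadline = time.monotonic() + timeout
    while not predicate():
        if time.monotonic() > deadline:
            raise RuntimeError("Timed out waiting for condition")
        time.sleep(sleep)


class FakeDevice:
    """Hypothetical device whose MAC is only applied after a short delay."""

    def __init__(self, requested_mac, settle_after=0.2):
        self._mac = requested_mac
        self._ready_at = time.monotonic() + settle_after

    @property
    def mac_address(self):
        if time.monotonic() >= self._ready_at:
            return self._mac
        return "00:00:00:00:00:00"


def set_mac_and_verify(device, mac):
    # Mirror of the fix: do not proceed to MTU and other attribute setup
    # until the device actually reports the MAC Neutron generated.
    wait_until_true(lambda: device.mac_address == mac, timeout=2)
    return device.mac_address


dev = FakeDevice("fa:16:3e:aa:bb:cc")
print(set_mac_and_verify(dev, "fa:16:3e:aa:bb:cc"))  # → fa:16:3e:aa:bb:cc
```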


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2000164

Title:
  [FT] Error in
  "test_dvr_update_gateway_port_with_no_gw_port_in_namespace"

Status in neutron:
  Fix Released

Bug description:
  Log:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9d5/866489/5/check/neutron-
  functional-with-uwsgi/9d5a735/testr_results.html

  Snippet: https://paste.opendev.org/show/blxk4RbR6T6I9xVKvhgR/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2000164/+subscriptions




[Yahoo-eng-team] [Bug 1799790] Re: Scenario job neutron_tempest_plugin.scenario.test_floatingip.FloatingIPPortDetailsTest.test_floatingip_port_details timeout

2023-01-17 Thread Slawek Kaplonski
** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1799790

Title:
  Scenario job
  
neutron_tempest_plugin.scenario.test_floatingip.FloatingIPPortDetailsTest.test_floatingip_port_details
  timeout

Status in neutron:
  Won't Fix

Bug description:
  Job: any tempest plugin scenario jobs
  Failed test: 
neutron_tempest_plugin.scenario.test_floatingip.FloatingIPPortDetailsTest.test_floatingip_port_details
  Sample failure: 
http://logs.openstack.org/36/610536/1/check/neutron-tempest-plugin-scenario-linuxbridge/dbac8ec/job-output.txt.gz

   Traceback (most recent call last):
     File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_floatingip.py",
 line 247, in test_floatingip_port_details
   fip = self._wait_for_fip_port_down(fip['id'])
     File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_floatingip.py",
 line 317, in _wait_for_fip_port_down
   raise exceptions.TimeoutException(message)
   tempest.lib.exceptions.TimeoutException: Request timed out
   Details: Floating IP f7628133-7448-43d9-8fd3-f14784d9ecf9 attached port 
status failed to transition to DOWN (current status BUILD) within the required 
time (120 s).

  Logstash:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22attached%20port%20status%20failed%20to%20transition%20to%20DOWN%5C%22
  (10 hits in 7 days)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1799790/+subscriptions




[Yahoo-eng-team] [Bug 2003048] [NEW] [Azure] User's customized DNS search domains are overwritten as network is re-configured on every boot

2023-01-17 Thread Huijuan Zhao
Public bug reported:

Description of problem:
This issue only exists with cloud-init on Azure.
With version 22.1, we add DNS search domains via nmcli con modify 'System
eth0' +ipv4.dns-search domain.tld; after a reboot the config is overwritten by
cloud-init (the changes to /etc/sysconfig/network-scripts/ifcfg-eth0 get
deleted; they used to be persistent), because Azure applies networking config
on every BOOT[1] since cloud-init 22.1.

[1] https://github.com/canonical/cloud-init/pull/1023


But some users prefer to persist the customized network config (e.g. DNS)
across reboots, so could you please help check how to enhance this patch[1]
to meet the customer's requirement[2]?

[2] User's requirements:
We add additional search domains to certain servers, mostly because they
communicate with legacy URLs in our company without using FQDNs.


Additional info:
Below[3] is a workaround to persist the user's customized DNS. But since all
customized network configs are overwritten after reboot, we may need to
consider how to avoid that.

[3] Adding the customized DNS to /etc/dhcp/dhclient.conf, 
e.g. 
Modify /etc/dhcp/dhclient.conf
timeout 300;
append domain-search "searchdomain1.com";
append domain-search "searchdomain2.com";
append domain-name-servers 1.1.1.1;

Reboot the VM and then check the DNS information; the modifications are not
overwritten.
# cat /etc/resolv.conf 
# Generated by NetworkManager
search searchdomain1.com searchdomain2.com
nameserver 1.1.1.1
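
An alternative direction worth verifying (an assumption, not a confirmed
fix: cloud-init's Azure datasource documents an apply_network_config option
that stops it from regenerating network config on every boot; the drop-in
filename below is illustrative, and a temp file is used here in place of
/etc/cloud/cloud.cfg.d/ for demonstration):

```shell
# Write a cloud.cfg.d-style drop-in disabling Azure per-boot network config.
# On a real VM the target would be /etc/cloud/cloud.cfg.d/99-azure-no-net.cfg.
conf=$(mktemp)
cat > "$conf" <<'EOF'
datasource:
  Azure:
    apply_network_config: false
EOF
grep -q 'apply_network_config: false' "$conf" && echo "drop-in written"
```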


Test Version: 22.1


How reproducible:
Modify dns search domains, reboot


Steps to Reproduce:
1. nmcli con modify 'System eth0' +ipv4.dns-search example.com
2. systemctl reboot


Actual results:
$ grep example.com /etc/resolv.conf
[No result]


Expected results:
$ grep example.com /etc/resolv.conf
search example.com

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/2003048

Title:
  [Azure] User's customized DNS search domains are overwritten as
  network is re-configured on every boot

Status in cloud-init:
  New

Bug description:
  Description of problem:
  This issue only exists with cloud-init on Azure.
  With version 22.1, we add DNS search domains via nmcli con modify 'System
eth0' +ipv4.dns-search domain.tld; after a reboot the config is overwritten by
cloud-init (the changes to /etc/sysconfig/network-scripts/ifcfg-eth0 get
deleted; they used to be persistent), because Azure applies networking config
on every BOOT[1] since cloud-init 22.1.

  [1] https://github.com/canonical/cloud-init/pull/1023

  
  But some users prefer to persist the customized network config (e.g. DNS)
across reboots, so could you please help check how to enhance this patch[1]
to meet the customer's requirement[2]?

  [2] User's requirements:
  We add additional search domains to certain servers, mostly because they
communicate with legacy URLs in our company without using FQDNs.

  
  Additional info:
  Below[3] is a workaround to persist the user's customized DNS. But since all
customized network configs are overwritten after reboot, we may need to
consider how to avoid that.

  [3] Adding the customized DNS to /etc/dhcp/dhclient.conf, 
  e.g. 
  Modify /etc/dhcp/dhclient.conf
  timeout 300;
  append domain-search "searchdomain1.com";
  append domain-search "searchdomain2.com";
  append domain-name-servers 1.1.1.1;

  Reboot the VM and then check the DNS information; the modifications are not
overwritten.
  # cat /etc/resolv.conf 
  # Generated by NetworkManager
  search searchdomain1.com searchdomain2.com
  nameserver 1.1.1.1

  
  Test Version: 22.1

  
  How reproducible:
  Modify dns search domains, reboot

  
  Steps to Reproduce:
  1. nmcli con modify 'System eth0' +ipv4.dns-search example.com
  2. systemctl reboot

  
  Actual results:
  $ grep example.com /etc/resolv.conf
  [No result]

  
  Expected results:
  $ grep example.com /etc/resolv.conf
  search example.com

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/2003048/+subscriptions

