[Yahoo-eng-team] [Bug 1999391] [NEW] Attribute error in neutron-ovs-agent

2022-12-12 Thread Slawek Kaplonski
Public bug reported:

I noticed stack traces like the following in the Open vSwitch agent logs in the CI jobs:

Dec 12 09:44:51.014564 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-openvswitch-agent[57685]: ERROR neutron.agent.rpc [None 
req-024660c2-8e49-4135-8322-2ea29fac319e None None] Failed to get details for 
device d9cdde27-90fa-425f-a42c-f8464806457f: AttributeError: 'NoneType' object 
has no attribute 'qos_policy_id'
Dec 12 09:44:51.014564 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-openvswitch-agent[57685]: ERROR neutron.agent.rpc Traceback (most 
recent call last):
Dec 12 09:44:51.014564 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-openvswitch-agent[57685]: ERROR neutron.agent.rpc   File 
"/opt/stack/neutron/neutron/agent/rpc.py", line 321, in 
get_devices_details_list_and_failed_devices
Dec 12 09:44:51.014564 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-openvswitch-agent[57685]: ERROR neutron.agent.rpc 
self.get_device_details(context, device, agent_id, host,
Dec 12 09:44:51.014564 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-openvswitch-agent[57685]: ERROR neutron.agent.rpc   File 
"/opt/stack/neutron/neutron/agent/rpc.py", line 355, in get_device_details
Dec 12 09:44:51.014564 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-openvswitch-agent[57685]: ERROR neutron.agent.rpc 
qos_network_policy_id = net.qos_policy_id
Dec 12 09:44:51.014564 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-openvswitch-agent[57685]: ERROR neutron.agent.rpc AttributeError: 
'NoneType' object has no attribute 'qos_policy_id'
Dec 12 09:44:51.014564 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-openvswitch-agent[57685]: ERROR neutron.agent.rpc 

Example of such error:
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d8f/860270/8/check/neutron-tempest-plugin-openvswitch/d8f61ec/controller/logs/screen-q-agt.txt

It doesn't cause job failures, but it should be handled in a better way
in the code to avoid ugly stack traces.
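
A minimal sketch of the kind of guard that could avoid this stack trace,
assuming 'net' can legitimately be None when the network is deleted while
the agent is fetching device details (variable names follow the traceback
above; this is not the committed fix):

    # Hypothetical guard: 'net' may be None if the network disappeared
    # between the device lookup and this point.
    qos_network_policy_id = net.qos_policy_id if net else None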

** Affects: neutron
 Importance: Low
 Status: Confirmed


** Tags: low-hanging-fruit ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999391

Title:
  Attribute error in neutron-ovs-agent

Status in neutron:
  Confirmed

Bug description:
  I noticed stack traces like the following in the Open vSwitch agent logs
  in the CI jobs:

  Dec 12 09:44:51.014564 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-openvswitch-agent[57685]: ERROR neutron.agent.rpc [None 
req-024660c2-8e49-4135-8322-2ea29fac319e None None] Failed to get details for 
device d9cdde27-90fa-425f-a42c-f8464806457f: AttributeError: 'NoneType' object 
has no attribute 'qos_policy_id'
  Dec 12 09:44:51.014564 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-openvswitch-agent[57685]: ERROR neutron.agent.rpc Traceback (most 
recent call last):
  Dec 12 09:44:51.014564 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-openvswitch-agent[57685]: ERROR neutron.agent.rpc   File 
"/opt/stack/neutron/neutron/agent/rpc.py", line 321, in 
get_devices_details_list_and_failed_devices
  Dec 12 09:44:51.014564 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-openvswitch-agent[57685]: ERROR neutron.agent.rpc 
self.get_device_details(context, device, agent_id, host,
  Dec 12 09:44:51.014564 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-openvswitch-agent[57685]: ERROR neutron.agent.rpc   File 
"/opt/stack/neutron/neutron/agent/rpc.py", line 355, in get_device_details
  Dec 12 09:44:51.014564 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-openvswitch-agent[57685]: ERROR neutron.agent.rpc 
qos_network_policy_id = net.qos_policy_id
  Dec 12 09:44:51.014564 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-openvswitch-agent[57685]: ERROR neutron.agent.rpc AttributeError: 
'NoneType' object has no attribute 'qos_policy_id'
  Dec 12 09:44:51.014564 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-openvswitch-agent[57685]: ERROR neutron.agent.rpc 

  Example of such error:
  
  https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d8f/860270/8/check/neutron-tempest-plugin-openvswitch/d8f61ec/controller/logs/screen-q-agt.txt

  It doesn't cause job failures, but it should be handled in a better way
  in the code to avoid ugly stack traces.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1999391/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999390] [NEW] TypeError raised in neutron-dhcp-agent due to missing argument in clean_devices method

2022-12-12 Thread Slawek Kaplonski
Public bug reported:

I noticed stack traces like the following in the DHCP agent logs in the CI jobs:

Dec 12 09:40:38.943795 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-dhcp-agent[58070]: ERROR neutron.agent.dhcp.agent [-] Unable to 
clean_devices dhcp for e144dc34-3938-4fbe-9973-85094ea9ee08.: TypeError: 
DhcpLocalProcess.clean_devices() missing 1 required positional argument: 
'network'
Dec 12 09:40:38.943795 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-dhcp-agent[58070]: ERROR neutron.agent.dhcp.agent Traceback (most 
recent call last):
Dec 12 09:40:38.943795 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-dhcp-agent[58070]: ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 255, in _call_driver
Dec 12 09:40:38.943795 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-dhcp-agent[58070]: ERROR neutron.agent.dhcp.agent rv = 
getattr(driver, action)(**action_kwargs)
Dec 12 09:40:38.943795 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-dhcp-agent[58070]: ERROR neutron.agent.dhcp.agent TypeError: 
DhcpLocalProcess.clean_devices() missing 1 required positional argument: 
'network'

Example of such error:
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d8f/860270/8/check/neutron-tempest-plugin-openvswitch/d8f61ec/controller/logs/screen-q-dhcp.txt

It doesn't cause job failures, but it should be handled in a better way
in the code to avoid ugly stack traces.
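
A minimal reproduction of the failure mode, not neutron's actual code: the
**action_kwargs dispatch in _call_driver only surfaces a signature mismatch
at call time, so 'network' never reaching the kwargs ends up as this
TypeError:

    class Driver:
        def clean_devices(self, network):  # requires 'network'
            pass

    action_kwargs = {}  # 'network' never made it into the kwargs
    getattr(Driver(), 'clean_devices')(**action_kwargs)  # raises TypeError

The fix would presumably be for the caller to forward the network through
action_kwargs when invoking call_driver('clean_devices', ...).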

** Affects: neutron
 Importance: Low
 Status: Confirmed


** Tags: l3-ipam-dhcp low-hanging-fruit

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999390

Title:
  TypeError raised in neutron-dhcp-agent due to missing argument in
  clean_devices method

Status in neutron:
  Confirmed

Bug description:
  I noticed stack traces like the following in the DHCP agent logs in the CI jobs:

  Dec 12 09:40:38.943795 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-dhcp-agent[58070]: ERROR neutron.agent.dhcp.agent [-] Unable to 
clean_devices dhcp for e144dc34-3938-4fbe-9973-85094ea9ee08.: TypeError: 
DhcpLocalProcess.clean_devices() missing 1 required positional argument: 
'network'
  Dec 12 09:40:38.943795 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-dhcp-agent[58070]: ERROR neutron.agent.dhcp.agent Traceback (most 
recent call last):
  Dec 12 09:40:38.943795 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-dhcp-agent[58070]: ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 255, in _call_driver
  Dec 12 09:40:38.943795 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-dhcp-agent[58070]: ERROR neutron.agent.dhcp.agent rv = 
getattr(driver, action)(**action_kwargs)
  Dec 12 09:40:38.943795 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-dhcp-agent[58070]: ERROR neutron.agent.dhcp.agent TypeError: 
DhcpLocalProcess.clean_devices() missing 1 required positional argument: 
'network'

  Example of such error:
  
  https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d8f/860270/8/check/neutron-tempest-plugin-openvswitch/d8f61ec/controller/logs/screen-q-dhcp.txt

  It doesn't cause job failures, but it should be handled in a better way
  in the code to avoid ugly stack traces.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1999390/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999388] [NEW] TypeError raised in neutron-dhcp-agent

2022-12-12 Thread Slawek Kaplonski
Public bug reported:

I noticed stack traces like the following in the DHCP agent logs in the CI jobs:

Dec 07 21:47:19.127574 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]: DEBUG neutron.agent.dhcp.agent [-] 
neutron.agent.dhcp.agent.DhcpAgentWithStateReport method _port_delete called 
with arguments ({'port_id': 'a9f73dbf-ba95-4371-ad3d-5cd494efa71b', 
'network_id': 'ab6f772f-4c97-4e6c-96d4-a1f89c6322aa', 'fixed_ips': 
[{'subnet_id': 'd7c9b0d4-1abc-4475-9030-3c1513bd2874', 'ip_address': 
'172.24.5.186'}, {'subnet_id': 'e398f2da-5b0d-4db1-82a4-7efb1d8905f5', 
'ip_address': '2001:db8::166'}], 'priority': 6},) {} {{(pid=58100) wrapper 
/usr/local/lib/python3.10/dist-packages/oslo_log/helpers.py:65}}
Dec 07 21:47:19.128219 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]: Traceback (most recent call last):
Dec 07 21:47:19.128219 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]:   File 
"/usr/local/lib/python3.10/dist-packages/eventlet/greenpool.py", line 88, in 
_spawn_n_impl
Dec 07 21:47:19.128219 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]: func(*args, **kwargs)
Dec 07 21:47:19.128219 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]:   File 
"/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 602, in 
_process_resource_update
Dec 07 21:47:19.128219 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]: method(update.resource)
Dec 07 21:47:19.128219 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]:   File 
"/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 75, in wrapped
Dec 07 21:47:19.128219 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]: return f(*args, **kwargs)
Dec 07 21:47:19.128219 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]:   File 
"/usr/local/lib/python3.10/dist-packages/oslo_log/helpers.py", line 67, in 
wrapper
Dec 07 21:47:19.128219 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]: return method(*args, **kwargs)
Dec 07 21:47:19.128219 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]:   File 
"/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 730, in _port_delete
Dec 07 21:47:19.128219 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]: self.call_driver('clean_devices', network)
Dec 07 21:47:19.128219 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]:   File 
"/usr/local/lib/python3.10/dist-packages/osprofiler/profiler.py", line 159, in 
wrapper
Dec 07 21:47:19.128219 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]: result = f(*args, **kwargs)
Dec 07 21:47:19.128219 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]:   File 
"/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 204, in call_driver
Dec 07 21:47:19.128219 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]: if 'segments' in network and network.segments:
Dec 07 21:47:19.128219 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]: TypeError: argument of type 'NoneType' is not 
iterable

Example of such error:
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d8f/860270/8/check/neutron-tempest-plugin-openvswitch/d8f61ec/controller/logs/screen-q-dhcp.txt

It doesn't cause job failures, but it should be handled in a better way
in the code to avoid ugly stack traces.
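
A sketch of a defensive rewrite of the failing line from the traceback,
assuming 'network' can be None when the port-delete event races with the
deletion of the network itself (hypothetical guard, not the committed fix):

    if network and 'segments' in network and network.segments:
        segments = network.segments  # proceed as before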

** Affects: neutron
 Importance: Low
 Status: Confirmed


** Tags: l3-ipam-dhcp low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999388

Title:
  TypeError raised in neutron-dhcp-agent

Status in neutron:
  Confirmed

Bug description:
  I noticed stack traces like the following in the DHCP agent logs in the CI jobs:

  Dec 07 21:47:19.127574 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]: DEBUG neutron.agent.dhcp.agent [-] 
neutron.agent.dhcp.agent.DhcpAgentWithStateReport method _port_delete called 
with arguments ({'port_id': 'a9f73dbf-ba95-4371-ad3d-5cd494efa71b', 
'network_id': 'ab6f772f-4c97-4e6c-96d4-a1f89c6322aa', 'fixed_ips': 
[{'subnet_id': 'd7c9b0d4-1abc-4475-9030-3c1513bd2874', 'ip_address': 
'172.24.5.186'}, {'subnet_id': 'e398f2da-5b0d-4db1-82a4-7efb1d8905f5', 
'ip_address': '2001:db8::166'}], 'priority': 6},) {} {{(pid=58100) wrapper 
/usr/local/lib/python3.10/dist-packages/oslo_log/helpers.py:65}}
  Dec 07 21:47:19.128219 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]: Traceback (most recent call last):
  Dec 07 21:47:19.128219 nested-virt-ubuntu-jammy-ovh-bhs1-0032438036 
neutron-dhcp-agent[58100]:   File 
"/usr/local/lib/python3.10/dist-packages/eventlet/greenpool.py", line 88, in 
_spawn_n_impl
  Dec 07 21:47:19.128219 nested-virt-ubuntu-jammy-

[Yahoo-eng-team] [Bug 1908382] Re: [OVN] Missing OVN ACLs for security groups that utilize remote groups attached to ports with allowed_address_pairs

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1908382

Title:
  [OVN] Missing OVN ACLs for security groups that utilize remote groups
  attached to ports with allowed_address_pairs

Status in neutron:
  Won't Fix

Bug description:
  See mailing list thread started at
  http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019442.html

  Bug discovered during magnum testing in ussuri, where pods deployed on
  different nodes could not communicate with each other - it has been
  traced to incorrect OVN ACLs for this specific scenario:

  - neutron port with an additional subnet added to allowed_address_pairs
  - security group created with a remote group set for both TCP and UDP, to
    allow traffic between the subnet defined in allowed_address_pairs

  It resulted in TCP and UDP being dropped by OVN.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1908382/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1893314] Re: set neutron quota not take effect

2022-12-12 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1893314

Title:
  set neutron quota not take effect

Status in neutron:
  Fix Released

Bug description:
  After creating a new project, set the project's quota concurrently, e.g. port=100.
  In the neutron DB you can then find multiple quota records for this project
  with resource=port.

  After that, when we set the quota for this project, the quota value
  returned each time remains unchanged.
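
  A sketch of how a DB-level unique constraint would turn the concurrent
  inserts described above into conflicts instead of silent duplicates
  (table and column names are illustrative, not neutron's actual schema):

      import sqlalchemy as sa

      metadata = sa.MetaData()
      quotas = sa.Table(
          'quotas', metadata,
          sa.Column('id', sa.Integer, primary_key=True),
          sa.Column('project_id', sa.String(255)),
          sa.Column('resource', sa.String(255)),
          sa.Column('limit', sa.Integer),
          # a second concurrent INSERT for (project, 'port') now fails fast
          sa.UniqueConstraint('project_id', 'resource',
                              name='uniq_quotas0project_id0resource'),
      )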

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1893314/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1887523] Re: Deadlock detection code can be stale

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1887523

Title:
  Deadlock detection code can be stale

Status in neutron:
  Won't Fix

Bug description:
  oslo.db has plenty of infrastructure for detecting deadlocks; however,
  it seems that at the moment neutron has its own implementation of it,
  which misses a bunch of deadlocks, causing issues when doing work
  at scale.

  This bug is to track the work of refactoring all of this to use the
  native oslo retry.
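
  For reference, the native oslo.db retry looks roughly like this (the
  decorator and its arguments are real oslo.db API; the decorated function
  is just an example):

      from oslo_db import api as oslo_db_api

      @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
      def update_port_db(context, port_id, values):
          # any DBDeadlock raised inside is retried with backoff
          pass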

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1887523/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999392] [NEW] Some events which are supposed to be run AFTER changes are commited are performed BEFORE commit

2022-12-12 Thread Slawek Kaplonski
Public bug reported:

I noticed warnings like the following in the neutron-server logs in CI
jobs. Example stack trace:

Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]: WARNING neutron.plugins.ml2.ovo_rpc [None 
req-8e39b39c-bbd4-4a17-84e8-f376b4857023 
tempest-L3AgentSchedulerTestJSON-1326864497 
tempest-L3AgentSchedulerTestJSON-1326864497-project] This handler is supposed 
to handle AFTER events, as in 'AFTER it's committed', not BEFORE. Offending 
resource event: port, after_delete. Location:
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]:   File 
"/usr/local/lib/python3.10/dist-packages/eventlet/greenthread.py", line 221, in 
main
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]: result = function(*args, **kwargs)
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]:   File 
"/usr/local/lib/python3.10/dist-packages/eventlet/wsgi.py", line 837, in 
process_request
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]: proto.__init__(conn_state, self)
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]:   File 
"/usr/local/lib/python3.10/dist-packages/eventlet/wsgi.py", line 350, in 
__init__
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]: self.handle()
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]:   File 
"/usr/local/lib/python3.10/dist-packages/eventlet/wsgi.py", line 383, in handle
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]: self.handle_one_request()
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]:   File 
"/usr/local/lib/python3.10/dist-packages/eventlet/wsgi.py", line 459, in 
handle_one_request
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]: self.handle_one_response()
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]:   File 
"/usr/local/lib/python3.10/dist-packages/eventlet/wsgi.py", line 569, in 
handle_one_response
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]: result = self.application(self.environ, 
start_response)
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]:   File 
"/usr/local/lib/python3.10/dist-packages/paste/urlmap.py", line 216, in __call__
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]: return app(environ, start_response)
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]:   File 
"/usr/local/lib/python3.10/dist-packages/webob/dec.py", line 129, in __call__
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]: resp = self.call_func(req, *args, **kw)
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]:   File 
"/usr/local/lib/python3.10/dist-packages/webob/dec.py", line 193, in call_func
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]: return self.func(req, *args, **kwargs)
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]:   File 
"/usr/local/lib/python3.10/dist-packages/oslo_middleware/base.py", line 124, in 
__call__
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]: response = req.get_response(self.application)
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]:   File 
"/usr/local/lib/python3.10/dist-packages/webob/request.py", line 1313, in send
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]: status, headers, app_iter = self.call_application(
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]:   File 
"/usr/local/lib/python3.10/dist-packages/webob/request.py", line 1278, in 
call_application
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]: app_iter = application(self.environ, start_response)
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]:   File 
"/usr/local/lib/python3.10/dist-packages/webob/dec.py", line 129, in __call__
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]: resp = self.call_func(req, *args, **kw)
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]:   File 
"/usr/local/lib/python3.10/dist-packages/webob/dec.py", line 193, in call_func
Dec 12 09:40:26.777271 nested-virt-ubuntu-jammy-ovh-bhs1-0032473968 
neutron-server[56690]: return self.func(req, *args, **kwargs)
Dec 12 0

[Yahoo-eng-team] [Bug 1672433] Re: dhcp-agent should send a grace ARP after assigning IP address in dhcp namespace

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1672433

Title:
  dhcp-agent should send a grace ARP after assigning IP address in dhcp
  namespace

Status in neutron:
  Won't Fix

Bug description:
  Normally dhcp agents should not provide routable services. There is
  one exception: monitoring. Checking dhcp agent availability by
  sending PING requests is very easy and sits well with existing
  monitoring frameworks. Besides checking the availability of the DHCP
  agent itself, that check allows testing network connectivity between
  the DHCP agent and network equipment.

  There is a specific scenario for the DHCP agent in which that check
  gives false reports.

  Scenario:
  1. Boot an instance with a given IP, ensure that the instance is UP
  (replies to pings).
  2. Delete the instance.
  3. Add a dhcp agent to the network where the IP (from step 1) is allocated,
  in such a way that it takes that IP (from step 1).

  Expected behavior: DHCP agent should answer pings.
  Actual behavior: DHCP agent does not reply to pings for up to 4 hours, then
  spontaneously starts to reply.

  Reason: The instance (from step 1) updated the ARP table on the router.
  When the instance was removed and the DHCP agent started listening on that
  IP, it didn't send a gratuitous (probe) ARP. The normal DHCP workflow does
  not require the agent to send any traffic through the router, therefore
  there is no reason for the router to update the entry in its ARP table. As
  long as the router keeps the old (invalid) entry pointing to the old
  instance (from step 1), the DHCP agent can't reply to the pings because
  every incoming request arrives with the wrong destination MAC address.

  Proposal: the dhcp agent should either:

  1. Send some kind of network packet to the network gateway (e.g. a ping request).
  2. Set arp_notify for the network interface it uses (e.g.
  net.ipv4.conf.tap22dad33f-d7.arp_notify=1), and configure the network address
  _BEFORE_ bringing the interface up. If the address is configured after the
  interface was brought up, no gratuitous ARP is sent.
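
  A rough sketch of proposal 2 as it could be scripted (the interface name
  is the example from above; the address is illustrative):

      import subprocess

      dev = 'tap22dad33f-d7'
      for cmd in (
          ['sysctl', '-w', 'net.ipv4.conf.%s.arp_notify=1' % dev],
          # the address must be set while the interface is still DOWN,
          # otherwise no gratuitous ARP is emitted on link-up
          ['ip', 'addr', 'add', '192.0.2.10/24', 'dev', dev],
          ['ip', 'link', 'set', dev, 'up'],
      ):
          subprocess.check_call(cmd)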

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1672433/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1621717] Re: Delete agent will not delete related SegmentHostMapping

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1621717

Title:
  Delete agent will not delete related SegmentHostMapping

Status in neutron:
  Won't Fix

Bug description:
  SegmentHostMapping records the relationship between segments and
  hosts. If an admin deletes an agent, the related SegmentHostMapping
  entries should be cleared too. Otherwise, other logic that leverages
  SegmentHostMapping will still think the host is available. This causes
  errors if an admin just wants to remove a node from the openstack
  topology.
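
  A sketch of the kind of cleanup hook this asks for, using the neutron-lib
  callbacks registry (the decorator and event constants are real API; the
  mapping-cleanup helper is hypothetical):

      from neutron_lib.callbacks import events, registry, resources

      @registry.receives(resources.AGENT, [events.AFTER_DELETE])
      def clear_segment_host_mappings(resource, event, trigger, payload):
          # hypothetical helper: drop all mappings for the deleted agent's host
          segments_db.clear_mappings_for_host(payload.context,
                                              payload.latest_state['host'])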

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1621717/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1745412] Re: test_l3_agent_scheduler intermittent failures when running locally

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1745412

Title:
  test_l3_agent_scheduler intermittent failures when running locally

Status in neutron:
  Won't Fix

Bug description:
  As part of my dev process, I run the py27 target frequently in my local
  workspace dev env to verify changes/fixes.
  Over the past few months I've begun to get intermittent failures in the
  test_l3_agent_scheduler module. I've collected the failures from the past
  week and posted them on pastebin [1]. Note that some of these failures are
  not in test_l3_agent_scheduler, so disregard them for this particular bug
  report.

  To try and eliminate the possibility that these failures could be related
  to left-over artifacts from previous py27 runs in my workspace, I've tried
  deleting .tox/, .stestr/, etc. in my workspace before running py27. It
  doesn't seem to help.

  Therefore it seems there may be a race/timing issue in these tests.
  Interestingly enough I don't find any hits with logstash, so it's not clear
  why I'm getting them locally (say 1 of every 4-5 runs of py27).

  
  [1] http://paste.openstack.org/show/653447/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1745412/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1740068] Re: lost composite primary key in firewall_group_port_associations_v2

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1740068

Title:
  lost composite primary key in firewall_group_port_associations_v2

Status in neutron:
  Won't Fix

Bug description:
  hi all:

  The primary key is lost on both columns here:

  
neutron_fwaas/db/migration/alembic_migrations/versions/newton/expand/d6a12e637e28_neutron_fwaas_v2_0.py
  op.create_table(
  'firewall_group_port_associations_v2',
  sa.Column('firewall_group_id', sa.String(length=36),
sa.ForeignKey('firewall_groups_v2.id', ondelete='CASCADE')),
  sa.Column('port_id', sa.String(length=36),
sa.ForeignKey('ports.id', ondelete='CASCADE'))
  )

  
  The model of this table has the primary key:

  neutron_fwaas/db/firewall/v2/firewall_db_v2.py
  class FirewallGroupPortAssociation(model_base.BASEV2):
  __tablename__ = 'firewall_group_port_associations_v2'
  firewall_group_id = sa.Column(sa.String(db_constants.UUID_FIELD_SIZE),
sa.ForeignKey('firewall_groups_v2.id',
  ondelete="CASCADE"),
primary_key=True)
  port_id = sa.Column(sa.String(db_constants.UUID_FIELD_SIZE),
  sa.ForeignKey('ports.id', ondelete="CASCADE"),
  unique=True,
  primary_key=True)
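
  A sketch of a follow-up migration that would restore the composite primary
  key the model expects (op.create_primary_key is real alembic API; revision
  identifiers are omitted):

      from alembic import op

      def upgrade():
          op.create_primary_key(
              'pk_firewall_group_port_associations_v2',
              'firewall_group_port_associations_v2',
              ['firewall_group_id', 'port_id'])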

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1740068/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1671709] Re: test_add_list_remove_router_on_l3_agent intermittently failing for DVR+HA gate job

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1671709

Title:
  test_add_list_remove_router_on_l3_agent intermittently failing for
  DVR+HA gate job

Status in neutron:
  Won't Fix

Bug description:
  This test is intermittently failing [1] for the dvr+ha gate job.
  The DVR+HA gate job runs in a 3-node devstack setup. Two nodes run the
  l3 agent in "dvr_snat" mode and another in "dvr" mode.
  In this job, for dvr routers, we call add_router_interface [2] and then add
  this router to an l3 agent [3].
  1) add_router_interface will by default add the router to one of the
  dvr_snat agents, for example agent1
  2) when the test then explicitly tries to add the router to an agent,
     a) if adding to the same agent, i.e. agent1, then neutron skips this
  request [4], hence the test passes
     b) if adding to the other agent, i.e. agent2, then neutron raises an
  exception [5], and the test fails

  Until now we have only two nodes (one dvr and another dvr-snat) in the gate
  for the dvr multi-node setup. So this test passes, as we are trying to
  add to the same snat agent (and neutron skips this request). As we are not
  really testing any functionality for dvr, we should skip this test for
  dvr routers.

  [1] 
http://logs.openstack.org/33/383833/3/experimental/gate-tempest-dsvm-neutron-dvr-ha-multinode-full-ubuntu-xenial-nv/8862b07/logs/testr_results.html.gz
  [2] 
https://github.com/openstack/tempest/blob/master/tempest/api/network/admin/test_l3_agent_scheduler.py#L84
  [3] 
https://github.com/openstack/tempest/blob/master/tempest/api/network/admin/test_l3_agent_scheduler.py#L114
  [4] 
https://github.com/openstack/neutron/blob/master/neutron/db/l3_agentschedulers_db.py#L154
  [5] 
https://github.com/openstack/neutron/blob/master/neutron/db/l3_agentschedulers_db.py#L159

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1671709/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1670002] Re: neutron allows to create net with segmentation id 0 with physical network and with physical network as None

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1670002

Title:
  neutron allows to create net with segmentation id 0 with physical
  network and with physical network as None

Status in neutron:
  Won't Fix

Bug description:
  neutron allows creating a network with segmentation id 0, both with and
  without a physical network.
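
  A sketch of the missing validation, assuming segmentation id 0 should be
  rejected for VLAN provider networks (the exception class is real
  neutron-lib API; the validator itself is hypothetical):

      from neutron_lib import exceptions

      def validate_vlan_segment(segment):
          if segment.get('provider:segmentation_id') == 0:
              raise exceptions.InvalidInput(
                  error_message='segmentation_id 0 is not allowed for VLAN')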

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1670002/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1630603] Re: internal dns is not updated correctly on nova boot

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630603

Title:
  internal dns is not updated correctly on nova boot

Status in neutron:
  Won't Fix

Bug description:
  I'm facing a problem related to dns_name and dnsmasq.

  Nova and Neutron can create a port with dns_name, and dnsmasq correctly
  updates "addn_hosts".

  The problem is when Nova boots an instance using a new or previously
  created port. The port has the correct dns_name, but dnsmasq
  (dhcp_agent) uses the generic names (e.g. host-10-0-0-16).

  If I restart the dhcp_agent or do a port-update on the port, the correct
  name is added to addn_hosts.

  
  I'm using kolla on 3 nodes, with images built from
  source: http://tarballs.openstack.org/neutron/neutron-stable-mitaka.tar.gz
  (from 1 week ago) on Ubuntu 16.04.

  
  Step-by-step reproduction steps:
  Please check this out: http://paste.openstack.org/show/584464/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1630603/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1807396] Re: With many VMs on the same tenant, the L3 ip neigh add is too slow

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1807396

Title:
  With many VMs on the same tenant, the L3 ip neigh add is too slow

Status in neutron:
  Won't Fix

Bug description:
  In our setup, we run with DVR and really a lot of VMs in the same
  tenant/project (we currently have between 1500 and 2000 VMs). In such a
  setup, the internal function _set_subnet_arp_info of
  neutron/agent/l3/dvr_local_router.py takes way too long. What it does,
  on each compute node (since we use a Neutron L3 router on each compute),
  is operations like:

  ip neigh add

  for every VM in the project. As we have both ipv4 and ipv6, the L3
  agent does this twice. In our setup, this results in about 4000 Python
  processes that have to be spawned to execute the "ip neigh add"
  command. This takes between 20 and 30 minutes, each time we either:

  - Add a first VM from the tenant to the host
  - Restart the compute node
  - Restart the L3 agent

  So, there's this issue with "ip neigh add", though there's also the
  same kind of issue when OVS is doing:

  ovs-vsctl add-flows

  about 2000 times as well.

  So in other words, this doesn't scale, and this needs to be addressed,
  so that the L3 agent can react in a reasonable manner to operations on
  the DVRs when there are many VMs in the same project.
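
  A sketch of one way to batch this in-process instead of forking one
  "ip neigh add" per VM (the pyroute2 calls are real API; the interface name
  and the entries mapping are illustrative):

      from pyroute2 import IPRoute
      from pyroute2.netlink.rtnl import ndmsg

      entries = {'10.1.0.5': 'fa:16:3e:aa:bb:cc',   # ~2000 of these
                 '10.1.0.6': 'fa:16:3e:aa:bb:cd'}

      with IPRoute() as ipr:
          idx = ipr.link_lookup(ifname='qr-example')[0]
          for ip, mac in entries.items():
              # one netlink message per entry, no process spawned
              ipr.neigh('replace', dst=ip, lladdr=mac, ifindex=idx,
                        state=ndmsg.states['permanent'])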

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1807396/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1739219] Re: Old dnsmasq listed as option:dns-server

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1739219

Title:
  Old dnsmasq listed as option:dns-server

Status in neutron:
  Won't Fix

Bug description:
  Pike, regular Neutron LinuxBridge, I have the following situation:

  We went from a 4-network-node setup to a 3-network-node setup, so one
  dhcp agent was dropped. It was shut down as well as removed using
  `openstack network agent delete`.

  Issue is about the generated list of DNS servers for a subnet that
  doesn't explicitly define DNS servers. After the removal of the fourth
  dhcp agent, the corresponding dnsmasq IP address is still included in
  the option:dns-server parameter of the generated dnsmasq dhcp config.

  As a result, instances in such a subnet get a list of DNS servers with
  one that is down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1739219/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1732448] Re: segment host mapping table overwritten by multiple l2 agents

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1732448

Title:
  segment host mapping table overwritten by multiple l2 agents

Status in neutron:
  Won't Fix

Bug description:
  The routed-networks feature introduced changes to create mapping
  entries linking hosts to segments [1].  That feature assumes that only
  a single L2 agent will be reporting device mappings to physical
  networks.  If multiple agents are running (e.g., OVS + SRIOV) then the
  "segmenthostmappings" tuples which are created by a first agent
  reporting state are overwritten when subsequent agents report state.

  The segments data model and agent update callbacks [2] do not handle
  cases where multiple L2 agents are present on a node.

  The configuration to reproduce this is simple. Create a compute node
  where the OVS agent reports interface mappings to physnet0 and the
  SRIOV agent reports device mappings to physnet1. Then create a network
  with provider:physical_network=physnet1, and then another network with
  provider:physical_network=physnet0.  Those networks will each create a
  NetworkSegment entry.  Then restart the OVS agent.  This will cause
  the segmenthostmappings table to be overwritten with only those
  segments accessible via the OVS agent.  Then restart the SRIOV agent.
  This will cause the segmenthostmappings table to be overwritten with
  only those segments accessible via the SRIOV agent.

  This impacts the scheduling of DHCP networks and IP address
  allocations for networks that have routed segments enabled.

  [1] 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=3d3f0595ebb3a20949d52b7226a2d4551f0eaf6d
  [2] neutron.services.segments.db._update_segment_host_mapping_for_agent
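
  The overwrite-vs-merge difference described above boils down to something
  like this (a simplified sketch, not neutron's actual callback):

      def update_segment_host_mapping(host, current_segments, reported):
          # reported behaviour: each agent's report replaces the whole set
          #   return set(reported)
          # multi-agent-safe behaviour: union what every agent can reach
          return set(current_segments) | set(reported)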

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1732448/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1717560] Re: [RFE] allow to have no default route in DHCP host routes

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1717560

Title:
  [RFE] allow to have no default route in DHCP host routes

Status in neutron:
  Won't Fix

Bug description:
  When a user wants a VM with multiple interfaces, if DHCP gives a
  default route for all of the corresponding subnets, the actual
  interface that will actually be used as a default is not easily
  predictable (depends on the order in which interfaces are enabled in
  the VM, and on when DHCP offers are received and processed).

  A solution to this can be to *not* set a default gateway on the
  subnets which we don't want to use as a default, but it is only
  applicable if there is no need to use these interfaces to reach one or
  more (non default) prefixes.

  In the case where one interface needs to be the default and one or
  more other interfaces are used to reach other subnets via a router,
  what people most often do is apply custom tweaks via cloud-init that fix
  routing, but this is of course cumbersome.

  This is an RFE for introducing an API extension for a new
  'default_route' attribute on the subnet resource. This attribute would
  default to true (current behavior) and could be set to false by
  a user whenever there is a need to *not* have a default route on the
  router.
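
  Purely as an illustration of the proposed API (the 'default_route' field
  is this RFE's suggestion, not an existing neutron attribute):

      import requests

      NEUTRON = 'http://controller:9696'
      TOKEN = 'gAAAAB...'  # keystone token, example only
      SUBNET_ID = 'd7c9b0d4-1abc-4475-9030-3c1513bd2874'  # example ID

      requests.put(
          '%s/v2.0/subnets/%s' % (NEUTRON, SUBNET_ID),
          headers={'X-Auth-Token': TOKEN},
          json={'subnet': {'default_route': False}},
      )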

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1717560/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1928465] Re: Geneve allocation is not updated during migration from ml2/ovs to ovn

2022-12-12 Thread Rodolfo Alonso
** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1928465

Title:
  Geneve allocation is not updated during migration from ml2/ovs to ovn

Status in neutron:
  Fix Released

Bug description:
  After the migration is finished, the geneve vni allocations are all empty.
  This means newly created networks may get the same segmentation id as
  existing migrated networks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1928465/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1922934] Re: [OVN] LSP register race condition with two controllers

2022-12-12 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1922934

Title:
  [OVN] LSP register race condition with two controllers

Status in neutron:
  Fix Released

Bug description:
  A race condition between two Neutron controllers happened during the
  creation and the binding of a port. This problem happened when one
  Neutron controller received the port creation command. That controller
  added the new LSP to the OVN database.

  But the second controller did not receive the OVN database update and
  did not update its local database cache (in the IDL instance). That
  means that, one second after the port creation was done on the first
  controller, the second controller still could not find the LSP.

  Nova error: http://paste.openstack.org/show/804261/
  First Neutron controller adding the port: 
http://paste.openstack.org/show/804262/
  Second Neutron controller failing to find the port: 
http://paste.openstack.org/show/804263/

  As seen in the logs, the second controller did not receive the
  transaction update adding the LSP.

  Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1946262

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1922934/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1811390] Re: L2/L3 Network components creation with wrong tenant-id should be restricted

2022-12-12 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1811390

Title:
  L2/L3 Network components creation with wrong tenant-id should be
  restricted

Status in neutron:
  Won't Fix

Bug description:
  I am trying to create a vlan_transparent-enabled network on an OpenStack
  Queens setup. While creating the network I provide a wrong tenant id, but
  instead of failing, neutron creates the network with the provided
  tenant id.

  nicira@utu1604template:~/devstack$ openstack project list
  +----------------------------------+--------------------+
  | ID                               | Name               |
  +----------------------------------+--------------------+
  | 0906736f01d948318ad5c89e45a04076 | admin              |
  | 19d3974dadb04aeeac363086a7e6b5bf | alt_demo           |
  | ab633c528a7a40a089c02b27a7495038 | invisible_to_admin |
  | dd8213720a5a4e5c85db304e8992d3c1 | service            |
  | f36b83bc02074eacacef7a15b48690b0 | demo               |
  +----------------------------------+--------------------+

  nicira@utu1604template:~/devstack$ neutron net-create --provider:network_type=vlan --vlan-transparent true --tenant-id 7838ggf2372d2139fgf922ff TestNet
  neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
  Created a new network:
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | True                                 |
  | availability_zone_hints   |                                      |
  | availability_zones        | defaultv3                            |
  | created_at                | 2018-06-12T09:23:09Z                 |
  | description               |                                      |
  | dns_domain                |                                      |
  | id                        | 38c7eb87-6d83-479a-a56e-da7c27d36f15 |
  | ipv4_address_scope        |                                      |
  | ipv6_address_scope        |                                      |
  | name                      | TestNet                              |
  | port_security_enabled     | True                                 |
  | project_id                | 7838ggf2372d2139fgf922ff             |
  | provider:network_type     | vlan                                 |
  | provider:physical_network | 060d4788-19ae-4ba5-a369-4bb2079f50eb |
  | provider:segmentation_id  | 0                                    |
  | qos_policy_id             |                                      |
  | revision_number           | 3                                    |
  | router:external           | False                                |
  | shared                    | False                                |
  | status                    | ACTIVE                               |
  | subnets                   |                                      |
  | tags                      |                                      |
  | tenant_id                 | 7838ggf2372d2139fgf922ff             |
  | updated_at                | 2018-06-12T09:23:09Z                 |
  | vlan_transparent          | True                                 |
  +---------------------------+--------------------------------------+

  I am facing the same issue while creating a subnet as well.

  nicira@utu1604template:~/devstack$ neutron subnet-create --tenant-id 89325t389256932532506329jsfhkjsfgwsjfbwejf --name testSubnet --disable-dhcp dee859b4-fae6-429e-b290-9711ec205da2 20.0.0.0/24
  neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
  Created a new subnet:
  +-------------------+---------------------------------------------+
  | Field             | Value                                       |
  +-------------------+---------------------------------------------+
  | allocation_pools  | {"start": "20.0.0.2", "end": "20.0.0.254"}  |
  | cidr              | 20.0.0.0/24                                 |
  | created_at        | 2018-06-12T09:38:41Z                        |
  | description       |                                             |
  | dns_nameservers   |                                             |
  | enable_dhcp       | False                                       |
  | gateway_ip        | 20.0.0.1                                    |
  | host_routes       |                                             |
  | id                | 51a77524-1b5a-4ab1-b98b-c170f656d6df        |
  | ip_version        | 4                                           |
  | ipv6_address_mode |                                             |
  | ipv6_ra_mode      |                                             |
  | name              | testSubnet                                  |
  | network

[Yahoo-eng-team] [Bug 1747534] Re: List networks with 'subnets' as filter return 501

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1747534

Title:
  List networks with 'subnets' as filter return 501

Status in neutron:
  Won't Fix

Bug description:
  Description
  ===========
  Neutron returns 501 on listing networks with 'subnets=xxx' as a filter.
  This response looks odd for the following reasons:
  * Arguably, neutron should return a 4xx in this case since it is due to
  invalid user input.
  * The error message exposes too many low-level details.

  Reproduce
  =========

  $ TOKEN=$(openstack token issue | awk '/ id /{print $4}')
  $ curl -g -i -X GET -H "X-Auth-Token: $TOKEN" 
'http://10.0.0.19:9696/v2.0/networks?subnets=11'
  HTTP/1.1 501 Not Implemented
  Content-Type: application/json
  Content-Length: 203
  X-Openstack-Request-Id: req-9e3fc06f-ce9b-4e19-bfc0-63cfcec511db
  Date: Mon, 05 Feb 2018 22:21:13 GMT

  {"NotImplementedError": {"message": "in_() not yet supported for
  relationships.  For a simple many-to-one, use in_() against the set of
  foreign key values.", "type": "NotImplementedError", "detail": ""}}

  Expected
  ========
  Neutron returns a 4xx error with a high-level error message

  Actual
  ======
  Neutron returns a 501 error with a low-level error message
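
  A self-contained reproduction of the SQLAlchemy limitation behind the 501
  (the models are illustrative, not neutron's actual ones):

      import sqlalchemy as sa
      from sqlalchemy import orm

      Base = orm.declarative_base()

      class Network(Base):
          __tablename__ = 'networks'
          id = sa.Column(sa.String(36), primary_key=True)
          subnets = orm.relationship('Subnet')

      class Subnet(Base):
          __tablename__ = 'subnets'
          id = sa.Column(sa.String(36), primary_key=True)
          network_id = sa.Column(sa.String(36), sa.ForeignKey('networks.id'))

      session = orm.Session(sa.create_engine('sqlite://'))
      Base.metadata.create_all(session.get_bind())
      # What the generic filter code effectively tries:
      #   session.query(Network).filter(Network.subnets.in_(['11']))
      #   -> NotImplementedError: in_() not yet supported for relationships
      # Filtering through the relationship is the valid formulation:
      query = session.query(Network).filter(
          Network.subnets.any(Subnet.id == '11'))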

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1747534/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1867869] Re: [OVN] Remove dependency on fip_object

2022-12-12 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1867869

Title:
  [OVN] Remove dependency on fip_object

Status in neutron:
  Fix Released

Bug description:
  The 'update_floatingip' and 'delete_floatingip' methods currently
  depend on 'fip_object', which was added to keep things backward
  compatible.

  Currently, the 'fip_id' is stored in the 'external_ids' of the NAT rule,
  and the original fip can be obtained from 'ovn_fip', so 'fip_object' can
  be removed.

  
  
https://opendev.org/openstack/neutron/src/branch/master/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L1074
  
https://opendev.org/openstack/neutron/src/branch/master/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L1112
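
  The lookup this describes is roughly the following (a sketch; I am
  assuming the 'neutron:fip_id' external_ids key the OVN driver uses):

      # recover the neutron floating IP id from the OVN NAT row
      fip_id = nat_rule.external_ids.get('neutron:fip_id')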

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1867869/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1847210] Re: '--sql' option of neutron-db-manage does not work

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1847210

Title:
  '--sql' option of neutron-db-manage does not work

Status in neutron:
  Won't Fix

Bug description:
  Version: stable/stein

  Log below.

  (.stein) root@krane-pgstage-api1:~# neutron-db-manage upgrade 804a3c76314c 
--sql
Running upgrade for neutron ...
  INFO  [alembic.runtime.migration] Context impl MySQLImpl.
  INFO  [alembic.runtime.migration] Generating static SQL
  INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  CREATE TABLE alembic_version (
  version_num VARCHAR(32) NOT NULL,
  CONSTRAINT alembic_version_pkc PRIMARY KEY (version_num)
  )ENGINE=InnoDB;

  INFO  [alembic.runtime.migration] Running upgrade  -> kilo
  -- Running upgrade  -> kilo

  Traceback (most recent call last):
File "/opt/openstack/src/neutron/.stein/bin/neutron-db-manage", line 11, in 

  sys.exit(main())
File 
"/opt/openstack/src/neutron/.stein/lib/python3.5/site-packages/neutron/db/migration/cli.py",
 line 656, in main
  return_val |= bool(CONF.command.func(config, CONF.command.name))
File 
"/opt/openstack/src/neutron/.stein/lib/python3.5/site-packages/neutron/db/migration/cli.py",
 line 180, in do_upgrade
  desc=branch, sql=CONF.command.sql)
File 
"/opt/openstack/src/neutron/.stein/lib/python3.5/site-packages/neutron/db/migration/cli.py",
 line 81, in do_alembic_command
  getattr(alembic_command, cmd)(config, *args, **kwargs)
File 
"/opt/openstack/src/neutron/.stein/lib/python3.5/site-packages/alembic/command.py",
 line 276, in upgrade
  script.run_env()
File 
"/opt/openstack/src/neutron/.stein/lib/python3.5/site-packages/alembic/script/base.py",
 line 475, in run_env
  util.load_python_file(self.dir, "env.py")
File 
"/opt/openstack/src/neutron/.stein/lib/python3.5/site-packages/alembic/util/pyfiles.py",
 line 90, in load_python_file
  module = load_module_py(module_id, path)
File 
"/opt/openstack/src/neutron/.stein/lib/python3.5/site-packages/alembic/util/compat.py",
 line 156, in load_module_py
  spec.loader.exec_module(module)
File "", line 665, in exec_module
File "", line 222, in _call_with_frames_removed
File 
"/opt/openstack/src/neutron/.stein/lib/python3.5/site-packages/neutron/db/migration/alembic_migrations/env.py",
 line 118, in 
  run_migrations_offline()
File 
"/opt/openstack/src/neutron/.stein/lib/python3.5/site-packages/neutron/db/migration/alembic_migrations/env.py",
 line 88, in run_migrations_offline
  context.run_migrations()
File "", line 8, in run_migrations
File 
"/opt/openstack/src/neutron/.stein/lib/python3.5/site-packages/alembic/runtime/environment.py",
 line 839, in run_migrations
  self.get_context().run_migrations(**kw)
File 
"/opt/openstack/src/neutron/.stein/lib/python3.5/site-packages/alembic/runtime/migration.py",
 line 361, in run_migrations
  step.migration_fn(**kw)
File 
"/opt/openstack/src/neutron/.stein/lib/python3.5/site-packages/neutron/db/migration/alembic_migrations/versions/kilo_initial.py",
 line 52, in upgrade
  migration.pk_on_alembic_version_table()
File 
"/opt/openstack/src/neutron/.stein/lib/python3.5/site-packages/neutron/db/migration/__init__.py",
 line 266, in pk_on_alembic_version_table
  inspector = reflection.Inspector.from_engine(op.get_bind())
File 
"/opt/openstack/src/neutron/.stein/lib/python3.5/site-packages/sqlalchemy/engine/reflection.py",
 line 136, in from_engine
  return Inspector(bind)
File 
"/opt/openstack/src/neutron/.stein/lib/python3.5/site-packages/sqlalchemy/engine/reflection.py",
 line 110, in __init__
  bind.connect().close()
  AttributeError: 'NoneType' object has no attribute 'close'
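
  A sketch of an offline-mode guard at the point that fails above
  (context.is_offline_mode() is real alembic API; the surrounding helper is
  abbreviated):

      from alembic import context, op
      from sqlalchemy.engine import reflection

      def pk_on_alembic_version_table():
          if context.is_offline_mode():
              # --sql mode has no live connection to reflect; skip the
              # inspection-based branch instead of dereferencing None
              return
          inspector = reflection.Inspector.from_engine(op.get_bind())
          # ... continue with the existing reflection logic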

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1847210/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1816395] Re: L2 Networking with SR-IOV enabled NICs in neutron

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1816395

Title:
  L2 Networking with SR-IOV enabled NICs in neutron

Status in neutron:
  Won't Fix

Bug description:
  The link text "SR-IOV Passthrough For Networking" is broken; it should
  point to
  https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking
  instead of the current
  https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking/

  This bug tracker is for errors with the documentation; use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [ ] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below, including
  example input and output.

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 11.0.7.dev52 on 2019-02-12 18:51
  SHA: 58025f12c93b59b004e07e3412ac7db519070516
  Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/contributor/internals/sriov_nic_agent.rst
  URL: 
https://docs.openstack.org/neutron/pike/contributor/internals/sriov_nic_agent.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1816395/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1805126] Re: confusing wording in config-dns-int.rst

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1805126

Title:
  confusing wording in config-dns-int.rst

Status in neutron:
  Won't Fix

Bug description:
  "``dns_domain`` functionality for ports" should be "``dns_domain for
  ports`` functionality", I guess.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1805126/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1788023] Re: neutron does not form mesh tunnel overlay between different ml2 drivers.

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1788023

Title:
  neutron does not form mesh tunnel overlay between different ml2 drivers.

Status in neutron:
  Won't Fix

Bug description:
  * Summary: neutron does not form a mesh tunnel overlay between
  different ml2 drivers.

  * High level description: When using multiple neutron ml2 drivers it is
  expected that VMs on hosts with different ml2 backends should be able to
  communicate, as segmentation types/ids are centralised in neutron and
  are not backend specific. When using provider networks this works;
  however, when using vxlan or other tunnelled networks that require a
  unicast mesh to be created, it fails.

  
  * Step-by-step reproduction steps:
  deploy a multinode devstack with both linux bridge and ovs nodes.
  on the linux bridge nodes set the vxlan destination udp port to the IANA
  value so that it is the same port used by ovs:

  [[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
  [vxlan]
  udp_dstport=4789

  and set the vxlan multicast group to none to force unicast mode:

  [ml2_type_vxlan]
  vxlan_group=""

  boot a vm on the same neutron network on both a linux bridge node and
  ovs node.

  
  * Expected output:
  in this case we would expect the ovs l2 agent to create a unicast vxlan
  tunnel port on br-tun between the ovs node and the linux bridge node.

  similarly we expect the linux bridge agent to configure the reciprocal
  connection and update the forwarding table with the ovs endpoints.

  we would also expect the l2 agent on the ovs compute node to create a
  vxlan tunnel port to the networking node where the dhcp server is
  running.

  when the vms are booted we would expect both vms to receive ips, the
  security groups to be correctly configured, and both vms to be able to
  ping each other.


  * Actual output:
  the ovs l2 agent only creates unicast tunnels to other ovs nodes.
  i did not check if the linux bridge agent set up its side of the
  connection for ovs nodes, but it did configure connectivity to other
  linux bridge nodes.

  as a result network connectivity was partitioned, with no cross-backend
  connectivity possible.

  this is different from the vlan and flat behaviour, where network
  connectivity works as expected.

  
  * Version:
  ** rocky RC1 nova sha: afe4512bf66c89a061b1a7ccd3e7ac8e3b1b284d
     neutron sha: 1dda2bca862b1268c0f5ae39b7508f1b1cab6f15
  ** Centos 7.5
  ** DevStack
  * Environment: libvirt/kvm with default devstack config/service

  * Perceived severity: low (this prevents using heterogeneous backends
  with tunnelled networks; as a result, you cannot optimise some nodes
  for a specific workload, e.g. linux bridge has better multicast scaling
  but ovs has better performance)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1788023/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1787420] Re: Floating ip association to router interface should be restricted

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1787420

Title:
  Floating ip association to router interface should be restricted

Status in neutron:
  Won't Fix

Bug description:
  We found this bug using the vmware-nsx plugin, but should be
  applicable to all plugins support L3.

  Created devstack_master + vmware-nsx

  Created router-interface and assigned fip's to router interface which is 
allowed.
  I dont find any usecase to assign ip to router port other than its LB vip 
port.

  Main reason for restricted this:
  -> To remove unwanted entries of fip from neutron db.
  -> To reduce overhead of using floating ip pool (other pool may get 
exhausted).
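
  A minimal sketch of the kind of guard this restriction would mean in an
  L3 plugin (names and placement are illustrative assumptions, not actual
  neutron or vmware-nsx code):

  # Hypothetical validation hook; constants and helper names are assumed.
  ROUTER_INTERFACE = "network:router_interface"

  def validate_fip_association(internal_port, is_lb_vip=False):
      """Reject floating IP association to router interface ports."""
      if internal_port["device_owner"] == ROUTER_INTERFACE and not is_lb_vip:
          raise ValueError(
              "Floating IP association to a router interface port "
              "is not allowed")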


  REPRO STEPS:

  myuser@kvm-compute-node1:~/devstack$ neutron router-port-list rtr3
  neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
  +--------------------------------------+------+----------------------------------+-------------------+-------------------------------------------------------------------------------------+
  | id                                   | name | tenant_id                        | mac_address       | fixed_ips                                                                           |
  +--------------------------------------+------+----------------------------------+-------------------+-------------------------------------------------------------------------------------+
  | 3318efcd-fcd1-4dda-bdde-4c8a19fbee3a |      |                                  | fa:16:3e:c1:00:fd | {"subnet_id": "afb2f79d-3c25-47de-a273-27bab2b78800", "ip_address": "172.24.0.19"} |
  | 8fcda443-dd4d-431f-ba3d-fbd5764830d9 |      | 00b7a6f394e946688c83545da6a27804 | fa:16:3e:9a:a1:3e | {"subnet_id": "7ff038d6-3b3c-4127-a45a-f135ac07f3bb", "ip_address": "3.0.100.1"}   |
  | f6d54233-a8aa-4304-bc16-20f0071dfc47 |      | 00b7a6f394e946688c83545da6a27804 | fa:16:3e:99:35:61 | {"subnet_id": "c16dce8d-899e-45f7-b615-557c2e231ce5", "ip_address": "3.3.100.1"}   |
  +--------------------------------------+------+----------------------------------+-------------------+-------------------------------------------------------------------------------------+


  myuser@kvm-compute-node1:~/devstack$ neutron port-show 8fcda443-dd4d-431f-ba3d-fbd5764830d9
  neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
  +------------------------+------------------------------------------------------------------------------------------------------------------+
  | Field                  | Value                                                                                                            |
  +------------------------+------------------------------------------------------------------------------------------------------------------+
  | admin_state_up         | True                                                                                                             |
  | allowed_address_pairs  |                                                                                                                  |
  | binding:host_id        |                                                                                                                  |
  | binding:vif_details    | {"ovs_hybrid_plug": false, "nsx-logical-switch-id": "c1a562e9-54bd-4ca6-9071-d622155e7ee6", "port_filter": true} |
  | binding:vif_type       | ovs                                                                                                              |
  | binding:vnic_type      | normal                                                                                                           |
  | created_at             | 2018-08-13T16:19:11Z                                                                                             |
  | description            |                                                                                                                  |
  | device_id              | 0fa3bbcd-2a24-4c1d-ba56-d7e2c88a60ba                                                                             |
  | device_owner           | network:router_interface                                                                                         |
  | dns_assignment         | {"hostname": "host-3-0-100-1", "ip_address": "3.0.100.1", "fqdn": "host-3-0-100-1.somedom.org."}                 |
  | dns_name               |                                                                                                                  |
  | extra_dhcp_opts        |                                                                                                                  |
  | fixed_ips

[Yahoo-eng-team] [Bug 1616208] Re: [RFE] Support creating a subnet without an allocation pool

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1616208

Title:
  [RFE] Support creating a subnet without an allocation pool

Status in neutron:
  Won't Fix
Status in python-openstackclient:
  New

Bug description:
  Problem Description
  ===

  Currently, subnets are created with an allocation pool(s) that is
  either a) user-defined or b) automatically generated based on CIDR.
  This RFE asks that the community support the creation of subnets
  without an allocation pool.

  Neutron allows users to create ports using fixed IP addresses that
  fall outside of the subnet allocation pool(s) but within the range
  defined by the CIDR. Neutron keeps track of assigned addresses and
  does not allow for overlap within the same subnet.

  Use cases:
  * An external IPAM service is utilized that is not integrated with
  OpenStack/Neutron. The user wants to create a port with a specific IP
  address using the --fixed-ip flag, and does not want Neutron
  automatically consuming addresses from the pool if an address is not
  manually allocated via Neutron or Nova.

  
  Proposed Change
  ===

  
  Allow 'None', or a similar value, as a valid start/end value. The result
  would be that Neutron would not create an allocation pool for the
  subnet. The Neutron client would have a new flag, such as
  --no-allocation-pool, or something similar.

  As I see it, not creating an allocation pool for a subnet would mean
  that when a port is created without an IP specified, Neutron would
  return the 'no more addresses available for allocation' error.
  Otherwise, the current behavior of allowing the user to specify a
  particular fixed IP address remains the same.
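
  A sketch of how the proposal could look on the API (a hypothetical
  request body; treating an explicitly empty allocation_pools list as "no
  pool" is the suggestion here, not confirmed current behaviour):

  # Hypothetical POST /v2.0/subnets payload under the proposed semantics.
  subnet_request = {
      "subnet": {
          "network_id": "<network-uuid>",  # placeholder
          "cidr": "192.0.2.0/24",
          "ip_version": 4,
          "allocation_pools": [],          # proposed: empty list -> no pool
      }
  }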

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1616208/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1587401] Re: Helper method to change status of port to abnormal state is needed in ml2.

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1587401

Title:
  Helper method to change status of port to abnormal state is needed in
  ml2.

Status in neutron:
  Won't Fix

Bug description:
  Some mechanism drivers cooperate with another backend(SDN controller).
  In this case, drivers may want to change status of port so that
  user can recognize process for the port is failed when failure in calling to 
backend.

  However, currently there is no helper function in PortContext to change 
status of port to
  abnormal status.
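
  A sketch of what such a helper could look like (purely illustrative; the
  function name and the internal attributes it touches are assumptions,
  since ml2 does not provide this today, which is the point of this bug):

  # Hypothetical helper around ml2's PortContext.
  from neutron_lib import constants as n_const

  def set_port_status_error(port_context):
      """Mark the context's port ERROR, e.g. after a backend call fails."""
      plugin = port_context._plugin            # assumed internal attributes
      plugin.update_port_status(
          port_context._plugin_context,
          port_context.current["id"],
          n_const.PORT_STATUS_ERROR)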

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1587401/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580891] Re: Move SR-IOV Agent to common agent framework

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1580891

Title:
  Move SR-IOV Agent to common agent framework

Status in neutron:
  Won't Fix

Bug description:
  As the linuxbridge and the macvtap agent now share a lot of common
  code via the common agent class, it's time to move the sr-iov agent
  there as well!

  This would be a 3 step approach:
  #1 Refactor the common agent to match some sr-iov agent requirements
  #2 Refactor the sr-iov agent to match the code structure of the common agent
  #3 Bring both together

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1580891/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1630981] Re: [rfe] Implement l2pop driver functionality in l2 agent

2022-12-12 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630981

Title:
  [rfe] Implement l2pop driver functionality in l2 agent

Status in neutron:
  Won't Fix

Bug description:
  Summary
  ===
  We can implement the l2pop driver functionality in the l2 agent. Then we
  don't need a separate l2pop driver (we can still maintain it for
  backward compatibility).
  L2 agents (in the master code base) already have all the information
  (some changes needed) to create flows on their own, without the l2pop
  driver.

  
  Existing functionality of l2pop:
  ===
  L2pop driver on the server side -
  1) when a port's binding, status or ip_address (or mac) is updated,
  notify this port's FDB (i.e. the port's ip address, mac, and hosting
  agent's tunnel ip) to all agents
  2) also check if it is the first or last port on the hosting agent's
  network; if so
     a) notify all agents to create/delete tunnels and flood flows to the
  hosting agent
     b) notify the hosting agent about all the existing network ports on
  other agents (so that the hosting agent can create tunnels, flood and
  unicast flows to these ports).
  L2pop on the agent side, after receiving a notification from the l2pop
  driver, creates/deletes tunnel endpoints and flood, unicast and ARP ovs
  flows to other agents' ports.

  
  New Implementation:
  ==
  But the same functionality can be achieved without l2pop. Whenever a
  port is created/updated/deleted, l2 agents get that port (ip, mac and
  host_id) through port_update and port_delete RPC notifications. Agents
  can get the hostname[1] and tunnel_ip of other agents through
  tunnel_update (and agents can locally save this mapping).
  As we are getting the port's FDB through these APIs, we can build the
  ovs flows without the l2pop FDB.

  The l2pop driver uses the port context's original_port and new_port to
  identify changes to a port's FDB. In the new implementation, the agent
  can save the port's FDB (only the required fields), so that a new port
  update can always be compared with the saved FDB to identify changes to
  the port's FDB.
  We can use the port's revision_number to maintain the order of port
  updates.

  When an l2 agent adds the first port on a network (i.e. provisions a
  local VLAN for the network), it can request the server, with a new RPC
  call, to provide all of the network's port FDBs on other agents. Then
  it can create flows to all these existing ports.
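
  As a concrete shape for the "agent saves the port's FDB" idea above, a
  small sketch (names are illustrative; this is not existing neutron
  code):

  # Hypothetical agent-side cache keyed by port, ordered by revision_number.
  class PortFdbCache:
      def __init__(self):
          self._fdb = {}  # port_id -> dict(ip=..., mac=..., host=..., rev=...)

      def update(self, port_id, ip, mac, host, rev):
          """Return (old, new) if newer than what we saved, else None."""
          old = self._fdb.get(port_id)
          if old and old["rev"] >= rev:
              return None  # stale notification; revision ordering wins
          new = {"ip": ip, "mac": mac, "host": host, "rev": rev}
          self._fdb[port_id] = new
          return old, new  # caller diffs these to adjust tunnels/flows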

  
  Implications:
  
  DVR - Currently when a DVR router port is bound, it notifies all
  agents[2]. But the server will set the port's host_id to ''[3]. Need to
  change it to the calling host and check the implications.

  HA - Port's host_id will always be set to master host. This allows
  other agents to create flows to master host only. Need to update HA to
  use existing DVR multiple portbinding approach, to allow other agents
  to create flows to all HA agents.

  Other TODO:
  1) In the existing implementation, port status updates
  (update_port_status) won't notify the agent. Need to enhance it.

  
  Advantages:
  ==
  1) The existing l2pop driver code to identify the 'active port count' on
  an agent with multiple servers can be buggy[4].
  2) The l2pop driver identifies the first port on an agent and notifies
  it of other ports' FDBs through RPC. If this RPC does not reach the
  agent for any reason (for example, that rpc worker died), the agents can
  never establish tunnels and flows to other agents.
  3) We can remove the additional l2pop mechanism driver on the server. We
  can also avoid separate l2pop RPC consumer threads on the agent and the
  related concurrency issues[5].

  
  [1] need to patch type_tunnel.py to send hostname as argument for 
tunnel_update.
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L1564
  [3] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L542
  [4] Got stabilized with https://review.openstack.org/#/c/300539/
  [5] https://bugs.launchpad.net/neutron/+bug/1611308

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1630981/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1997567] Re: [ovn-octavia-provider] Octavia LB stuck in PENDING_UPDATE after creation

2022-12-12 Thread Fernando Royo
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1997567

Title:
  [ovn-octavia-provider] Octavia LB stuck in PENDING_UPDATE after
  creation

Status in neutron:
  Invalid

Bug description:
  Wallaby OpenStack deployment using OpenStack Kayobe

  Running on Ubuntu Focal

  Relevant package version:

  - octavia-lib: 2.3.1
  - neutron-lib: 2.10.2
  - ovn-octavia-provider: 1.0.2.dev5

  I have encountered a bug where after creating an Octavia load balancer
  it gets stuck and cannot be deleted.

  Attempts in Horizon to delete the load balancer are met with the
  following error: Unable to delete Load Balancer: test_ovn_lb. It also
  reports Provisioning Status: Pending Update.

  When attempting to delete via the openstack client I get this
  response.

  (openstackclient-venv) [~] openstack loadbalancer delete  
64951486-8143-4a17-a88b-9f576688e662
  Validation failure: Cannot delete Load Balancer 
64951486-8143-4a17-a88b-9f576688e662 - it has children (HTTP 400) (Request-ID: 
req-5bf53e03-d33d-4995-88fb-8617060afdf4)

  (openstackclient-venv) [~] openstack loadbalancer delete  
64951486-8143-4a17-a88b-9f576688e662 --cascade
  Invalid state PENDING_UPDATE of loadbalancer resource 
64951486-8143-4a17-a88b-9f576688e662 (HTTP 409) (Request-ID: 
req-cd917d82-67cd-4704-b6d2-032939e08d88)

  In the octavia-api.log the following error message is logged in the
  moments prior to getting stuck in this state.
  https://paste.opendev.org/show/bkKWy2WkjC9fo05kOFE3/

  The only solution to this problem that I have found to work is to edit
  the Octavia database to change the current pending state to ERROR.

  use octavia
  UPDATE load_balancer SET provisioning_status = 'ERROR' WHERE 
provisioning_status LIKE "PENDING_UPDATE";

  This manual edit of the database then allows for the removal of the
  load balancer via the API:

  openstack loadbalancer delete id-here --cascade

  This bug is not blocking; however, it would be nice to prevent it from
  happening again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1997567/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1988157] Re: cloud-init crash on EC2 datasources when IMDS returns an error

2022-12-12 Thread James Falcon
** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1988157

Title:
  cloud-init crash on EC2 datasources when IMDS returns an error

Status in cloud-init:
  Fix Released

Bug description:
  Hello,

  We are using the EC2 datasource for crawling the metadata, and in our
  cloud provider, if the IMDS returns a 404 error on one metadata
  resource, cloud-init crashes and the setup fails.

  Here is the configuration
  ```/etc/cloud/cloud.cfg.d/99_metadata.cfg
  disable-ec2-metadata: false

  datasource_list: [ Ec2 ]
  datasource:
Ec2:
  strict_id: false
  metadata_urls: [ 'http://169.254.169.254:80' ]
  timeout: 5
  max_wait: 10
  ```

  Here is the log of the error
  ```
  [   23.223228] cloud-init[576]: Cloud-init v. 21.4-0ubuntu1~20.04.1 running 'init' at Tue, 30 Aug 2022 11:43:36 +. Up 15.96 seconds.
  [   23.224719] cloud-init[576]: ci-info: +++Net device info+++
  [   23.226427] cloud-init[576]: ci-info: ++--+--+---++---+
  [   23.228137] cloud-init[576]: ci-info: | Device |  Up  |   Address    |  Mask | Scope  | Hw-Address|
  [   23.230390] cloud-init[576]: ci-info: ++--+--+---++---+
  [   23.232965] cloud-init[576]: ci-info: |  eth0  | True | 10.9.42.189  | 255.255.255.0 | global | aa:03:94:21:c3:a1 |
  [   23.235247] cloud-init[576]: ci-info: |  eth0  | True | fe80::a803:94ff:fe21:c3a1/64 |   .   |  link  | aa:03:94:21:c3:a1 |
  [   23.250295] cloud-init[576]: ci-info: |   lo   | True |  127.0.0.1   |   255.0.0.0   |  host  | . |
  [   23.255043] cloud-init[576]: ci-info: |   lo   | True |   ::1/128    |   .   |  host  | . |
  [   23.256681] cloud-init[576]: ci-info: ++--+--+---++---+
  [   23.258318] cloud-init[576]: ci-info: +Route IPv4 info+
  [   23.259755] cloud-init[576]: ci-info: +---+-+---+-+---+---+
  [   23.261224] cloud-init[576]: ci-info: | Route | Destination |  Gateway  |     Genmask | Interface | Flags |
  [   23.262683] cloud-init[576]: ci-info: +---+-+---+-+---+---+
  [   23.264190] cloud-init[576]: ci-info: |   0   |   0.0.0.0   | 10.9.42.3 |     0.0.0.0 |eth0   |   UG  |
  [   23.265660] cloud-init[576]: ci-info: |   1   |  10.9.42.0  |  0.0.0.0  |  255.255.255.0  |eth0   |   U   |
  [   23.267324] cloud-init[576]: ci-info: |   2   |  10.9.42.3  |  0.0.0.0  | 255.255.255.255 |eth0   |   UH  |
  [   23.277516] cloud-init[576]: ci-info: +---+-+---+-+---+---+
  [   23.279090] cloud-init[576]: ci-info: +++Route IPv6 info+++
  [   23.280396] cloud-init[576]: ci-info: +---+-+-+---+---+
  [   23.281694] cloud-init[576]: ci-info: | Route | Destination | Gateway | Interface | Flags |
  [   23.283008] cloud-init[576]: ci-info: +---+-+-+---+---+
  [   23.284347] cloud-init[576]: ci-info: |   1   |  fe80::/64  |::   |eth0   |   U   |
  [   23.285660] cloud-init[576]: ci-info: |   3   |local|::   |eth0   |   U   |
  [   23.286969] cloud-init[576]: ci-info: |   4   |  multicast  |::   |eth0   |   U   |
  [   23.288342] cloud-init[576]: ci-info: +---+-+-+---+---+
  [   23.289631] cloud-init[576]: 2022-08-30 11:43:44,207 - util.py[WARNING]: Failed fetching meta-data/ from url http://169.254.169.254:80/2016-09-02/meta-data/
  [   23.291570] cloud-init[576]: 2022-08-30 11:43:44,216 - util.py[WARNING]: Getting data from  failed
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1988157/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1982857] Re: cloud-init does not provide configurable network activator priority overrides

2022-12-12 Thread James Falcon
** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1982857

Title:
  cloud-init does not provide configurable network activator priority
  overrides

Status in cloud-init:
  Fix Released
Status in subiquity:
  New

Bug description:
  cloud-init has two interactions with network backplanes

  1. to write (or render) network configuration to the appropriate
  config files for the network system: systemd, netplan, network-
  manager, ENI, freebsd, netbsd, openbsd. This is done via
  cloudinit.net.renderers discovery[1]

  2. optionally to bring up the network configuration via "network
  activation" for datasources discovered only in init boot stage after
  network is already up[2]

  /etc/cloud/cloud.cfg allows system_info:network:renderers to configure
  overrides for default renderers, but not for activators. The two
  discovery/mechanisms don't know about each other and have separate
  logic to determine which is applicable on the given system.

  Cloud-init should either:
   - Expose system_info: network: activators discovery priority/order
  configuration in /etc/cloud/cloud.cfg*

   -- OR --

   - make activators aware of customized/overridden renderers priority
  from cloud.cfg and honor that priority order when discovering
  activators to use.
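
  A sketch of the second option, making activator discovery honor the
  renderer priority (purely illustrative; these names are assumptions,
  not cloud-init's actual API):

  # Sketch: reuse the configured renderer priority when picking an
  # activator; fall back to built-in discovery order otherwise.
  def select_activator(renderer_priority, available_activators):
      for name in renderer_priority or []:
          if name in available_activators:
              return available_activators[name]
      return next(iter(available_activators.values()))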

  Without this feature, overridden network: renderer priority order to
  set network-manager as default renderer will result in cloud-init
  writing /etc/NetworkManager/system-connections/cloud-
  init-.nmconnection but then trying to run `netplan apply`
  for a non-existent configuration on ubuntu Desktop installs.

  References:
  [1] network renderers discovery: 
https://github.com/canonical/cloud-init/blob/main/cloudinit/net/renderers.py#L67
  [2] network activators discovery: 
https://github.com/canonical/cloud-init/blob/main/cloudinit/net/activators.py#L279
  [3] renderer overrides honored: 
https://github.com/canonical/cloud-init/blob/main/cloudinit/distros/__init__.py#L122-L126
  [4] No activator overrides: 
https://github.com/canonical/cloud-init/blob/main/cloudinit/distros/__init__.py#L246

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1982857/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1987005] Re: ipv6_ready referenced before assignment

2022-12-12 Thread James Falcon
** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1987005

Title:
  ipv6_ready referenced before assignment

Status in cloud-init:
  Fix Released

Bug description:
  cloud-init crashes due to referencing ipv6_ready before assignment.
  cloud-init version: 22.2.2-1.ph3

  traceback in cloudinit/sources/DataSourceVMware.py:

  [2022-08-15 17:38:14] 2022-08-15 17:38:14,682 - util.py[WARNING]: failed stage init
  [2022-08-15 17:38:14] failed run of stage init
  [2022-08-15 17:38:14] Traceback (most recent call last):
  [2022-08-15 17:38:14]   File "/usr/lib/python3.7/site-packages/cloudinit/cmd/main.py", line 740, in status_wrapper
  [2022-08-15 17:38:14]     ret = functor(name, args)
  [2022-08-15 17:38:14]   File "/usr/lib/python3.7/site-packages/cloudinit/cmd/main.py", line 429, in main_init
  [2022-08-15 17:38:14]     init.setup_datasource()
  [2022-08-15 17:38:14]   File "/usr/lib/python3.7/site-packages/cloudinit/stages.py", line 468, in setup_datasource
  [2022-08-15 17:38:14]     self.datasource.setup(is_new_instance=self.is_new_instance())
  [2022-08-15 17:38:14]   File "/usr/lib/python3.7/site-packages/cloudinit/sources/DataSourceVMware.py", line 340, in setup
  [2022-08-15 17:38:14]     host_info = wait_on_network(self.metadata)
  [2022-08-15 17:38:14]   File "/usr/lib/python3.7/site-packages/cloudinit/sources/DataSourceVMware.py", line 963, in wait_on_network
  [2022-08-15 17:38:14]     ipv6_ready,
  [2022-08-15 17:38:14] UnboundLocalError: local variable 'ipv6_ready' referenced before assignment


  There is an issue in the source code: under certain conditions,
  ipv6_ready may be referenced in LOG.debug() before assignment if
  wait_on_ipv6 = false. The same issue may also happen for ipv4_ready if
  wait_on_ipv4 = false.


  host_info = None
  while host_info is None:
      # This loop + sleep results in two logs every second while waiting
      # for either ipv4 or ipv6 up. Do we really need to log each iteration
      # or can we log once and log on successful exit?
      host_info = get_host_info()

      network = host_info.get("network") or {}
      interfaces = network.get("interfaces") or {}
      by_ipv4 = interfaces.get("by-ipv4") or {}
      by_ipv6 = interfaces.get("by-ipv6") or {}

      if wait_on_ipv4:
          ipv4_ready = len(by_ipv4) > 0 if by_ipv4 else False
          if not ipv4_ready:
              host_info = None

      if wait_on_ipv6:
          ipv6_ready = len(by_ipv6) > 0 if by_ipv6 else False
          if not ipv6_ready:
              host_info = None

      if host_info is None:
          LOG.debug(
              "waiting on network: wait4=%s, ready4=%s, wait6=%s, ready6=%s",
              wait_on_ipv4,
              ipv4_ready,
              wait_on_ipv6,
              ipv6_ready,
          )
          time.sleep(1)
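
  A minimal sketch of one way to avoid the UnboundLocalError (an
  assumption, not necessarily the committed fix): bind both readiness
  flags before the loop so LOG.debug() always has values.

  # Sketch: default the flags up front; a flag we are not waiting on is
  # trivially "ready", so the loop logic is otherwise unchanged.
  ipv4_ready = not wait_on_ipv4
  ipv6_ready = not wait_on_ipv6
  host_info = None
  while host_info is None:
      ...  # same body as above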

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1987005/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1978543] Re: default not accepted as destination in routes: in the network-config

2022-12-12 Thread James Falcon
** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1978543

Title:
  default not accepted as destination in routes: in the network-config

Status in cloud-init:
  Fix Released

Bug description:
  I have this network-config file (which is valid as per netplan

  version: 2
  ethernets:
enp0s3:
  dhcp4: false
  addresses: [10.0.4.10/24]
  nameservers:
addresses: [10.0.4.1]
  routes:
  - to: default
via: 10.0.4.1
metric: 100

  And cloud-init bumps on the 'default':

  2022-06-14 10:30:10,662 - util.py[WARNING]: failed stage init-local
  failed run of stage init-local
  Traceback (most recent call last):
    File "/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 738, in status_wrapper
      ret = functor(name, args)
    File "/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 410, in main_init
    ...
    File "/usr/lib/python3/dist-packages/cloudinit/net/network_state.py", line 953, in _normalize_net_keys
      raise ValueError(f"Address {addr} is not a valid ip address")
  ValueError: Address default is not a valid ip address
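  A sketch of the kind of normalization that would make this work (an
  illustration, not the committed cloud-init patch): translate netplan's
  "default" keyword into a CIDR before the address validation runs.

  # Sketch: map netplan's special destination before IP validation.
  # Placement inside network_state._normalize_net_keys is an assumption.
  def normalize_destination(addr, ipv6=False):
      if addr == "default":
          return "::/0" if ipv6 else "0.0.0.0/0"
      return addr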

  student@osm11:~$ uname -a
  Linux osm11 5.4.0-117-generic #132-Ubuntu SMP Thu Jun 2 00:39:06 UTC 2022 
x86_64 x86_64 x86_64 GNU/Linux
  student@osm11:~$ cat /etc/lsb-release 
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=20.04
  DISTRIB_CODENAME=focal
  DISTRIB_DESCRIPTION="Ubuntu 20.04.4 LTS"

  
  With the cloud-enabled OVA from
  https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.ova

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1978543/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1966533] Re: fqdn does not accept terminal dot

2022-12-12 Thread James Falcon
** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1966533

Title:
  fqdn does not accept terminal dot

Status in cloud-init:
  Fix Released

Bug description:
  A fully qualified domain is not

  foo.bar.com

  But instead,

  foo.bar.com.

  Otherwise the fqdn hits the resolver and searches on the network for
  a matching host. So if the resolver's search domain is

  foobar.com.

  And you query for `foo.bar.com`, it'll look up a host by that name

  foo.bar.com.foobar.com.

  To stop this you fully-qualify with a terminal dot. This creates a
  problem because hostname does not accept a terminal dot,

  sudo hostname host-10-2-65-89.openstack.build.
  hostname: the specified hostname is invalid

  And cloud-init blindly submits the input to hostname. Which is weird,
  because you have to supply an fqdn which actually can _not_ currently
  be a fqdn. The desired fix for this would be to trim off the dot from
  the value supplied in fqdn, before providing it to hostname.
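
  The trimming itself is a one-liner; a sketch (where exactly cloud-init
  would apply it is an assumption):

  # Sketch: strip a single trailing root-label dot before calling hostname(1).
  def hostname_safe(fqdn):
      return fqdn[:-1] if fqdn.endswith(".") else fqdn

  assert hostname_safe("host-10-2-65-89.openstack.build.") == \
      "host-10-2-65-89.openstack.build"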

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1966533/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999324] Re: A large number of issues are not closed

2022-12-12 Thread James Falcon
Thanks for letting us know, shixuantong. While we do have scripts to
auto-close issues upon release, there are bugs that fall through the
cracks. I have updated the listed bugs accordingly.

Since this report itself isn't actually a bug, I'm going to close it as
invalid.

** Changed in: cloud-init
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1999324

Title:
  A large number of issues are not closed

Status in cloud-init:
  Invalid

Bug description:
  A large number of issues are not closed. I find that some issues are
  resolved, but they are still open.

  For example:

  https://bugs.launchpad.net/cloud-init/+bug/1988157
  https://bugs.launchpad.net/cloud-init/+bug/1987005
  https://bugs.launchpad.net/cloud-init/+bug/1982857
  https://bugs.launchpad.net/cloud-init/+bug/1978543
  https://bugs.launchpad.net/cloud-init/+bug/1966533
  https://bugs.launchpad.net/cloud-init/+bug/1922801

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1999324/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999400] [NEW] neutron-metadata-agent does not sometimes provide instance-id

2022-12-12 Thread Lukas Piwowarski
Public bug reported:

Neutron-metadata-agent *sometimes* does not reply to the instance-id request
made by a CirrOS VM in an OpenStack environment created by Devstack. This causes a
failure of upstream jobs (e.g. tempest-full-multinode-py3 [1]) as some
tests are not able to successfully finish creation of a VM due to
missing response from http://169.254.169.254/meta-data/instance-id (see
attached log file).

[1]
https://zuul.opendev.org/t/openstack/build/94f0cf28c8fd495d96099f888329dbb2/logs

** Affects: neutron
 Importance: Undecided
 Status: New

** Attachment added: "Example of failed tempest-full-multinode-py3 job (search 
for checking http://169.254.169.254/2009-04-04/instance-id)"
   
https://bugs.launchpad.net/bugs/1999400/+attachment/5635397/+files/job-output.txt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999400

Title:
  neutron-metadata-agent does not sometimes provide instance-id

Status in neutron:
  New

Bug description:
  Neutron-metadata-agent *sometimes* does not reply to the instance-id
  request made by a CirrOS VM in an OpenStack environment created by
  Devstack. This causes a failure of upstream jobs (e.g. tempest-full-
  multinode-py3 [1]) as some tests are not able to successfully finish
  creation of a VM due to the missing response from
  http://169.254.169.254/meta-data/instance-id (see attached log file).

  [1]
  
https://zuul.opendev.org/t/openstack/build/94f0cf28c8fd495d96099f888329dbb2/logs

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1999400/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1994056] Re: nova-api does not support config dirs when run under apache via mod_wsgi

2022-12-12 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/864014
Committed: 
https://opendev.org/openstack/nova/commit/73fe84fa0ea6f7c7fa55544f6bce5326d87743a6
Submitter: "Zuul (22348)"
Branch:master

commit 73fe84fa0ea6f7c7fa55544f6bce5326d87743a6
Author: Sean Mooney 
Date:   Tue Nov 8 15:00:22 2022 +

Support multiple config files with mod_wsgi

Unlike uwsgi, apache mod_wsgi does not support passing
commandline arguments to the python wsgi script it invokes.

As a result, while you can pass --config-file when hosting the
api and metadata wsgi applications with uwsgi, there is no
way to use multiple config files with mod_wsgi.

This change mirrors how this is supported in keystone today
by introducing a new OS_NOVA_CONFIG_FILES env var to allow
operators to optionally pass a ';' delimited list of config
files to load.

This change also adds docs for this env var and the existing
undocumented OS_NOVA_CONFIG_DIR.

Closes-Bug: 1994056
Change-Id: I8e3ccd75cbb7f2e132b403cb38022787c2c0a37b


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1994056

Title:
  nova-api does not support config dirs when run under apache via
  mod_wsgi

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  currently nova assumes that when nova-api is run under a wsgi server,
  that server supports passing command line arguments to the wsgi script.

  that is not the case with mod_wsgi.

  as a result we do not support using a config directory when running
  under mod_wsgi, or passing any other arguments.

  as a result, when we run nova-api under mod_wsgi we fall back to a
  hardcoded set of config file names:

  
https://github.com/openstack/nova/blob/b1958b7cfa6b8aca5b76b3f133627bb733d29f00/nova/api/openstack/wsgi_app.py#L34-L46

  CONFIG_FILES = ['api-paste.ini', 'nova.conf']

  LOG = logging.getLogger(__name__)

  objects.register_all()


  def _get_config_files(env=None):
      if env is None:
          env = os.environ
      dirname = env.get('OS_NOVA_CONFIG_DIR', '/etc/nova').strip()
      return [os.path.join(dirname, config_file)
              for config_file in CONFIG_FILES]

  
  This prevents operators from using /etc/nova/nova.config.d/
  to provide a directory containing multiple config files.

  This can be addressed in several ways.

  first, we can provide an env variable for additional command line args
  to be parsed; these can be parsed here as we do for uwsgi or the python
  wsgi server
  
https://github.com/openstack/nova/blob/b1958b7cfa6b8aca5b76b3f133627bb733d29f00/nova/api/openstack/wsgi_app.py#L96-L98

  second, we could replace or augment our custom _get_config_files with a
  call to the generic implementation provided in oslo.config

  
https://github.com/openstack/oslo.config/blob/68cefad313bd03522e99b3de95f1786ebea45d4b/oslo_config/cfg.py#L281-L339

  third, we can provide a way to extend
  CONFIG_FILES = ['api-paste.ini', 'nova.conf']
  via a new env var, e.g.
  OS_NOVA_EXTRA_CONFIGS="nova.config.d/01-nova.conf,nova.config.d/02-nova-secret.conf"

  we can do all three or any one of them to enable the use case of
  supporting config directories, although only the first option allows
  other command line args to be passed.
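
  For reference, a sketch of the ';'-delimited override the merged change
  describes (based on the commit message above; the exact code is an
  assumption):

  # Sketch of wsgi_app-style config file discovery with an override var.
  import os

  CONFIG_FILES = ['api-paste.ini', 'nova.conf']

  def _get_config_files(env=None):
      if env is None:
          env = os.environ
      dirname = env.get('OS_NOVA_CONFIG_DIR', '/etc/nova').strip()
      files = env.get('OS_NOVA_CONFIG_FILES', '').split(';')
      files = [f.strip() for f in files if f.strip()] or CONFIG_FILES
      return [os.path.join(dirname, f) for f in files]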

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1994056/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1995732] Re: bulk port create: TypeError: Bad prefix type for generating IPv6 address by EUI-64

2022-12-12 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/863881
Committed: 
https://opendev.org/openstack/neutron/commit/f7dd7790f5c6e3149af4680ba521089328d1eb0c
Submitter: "Zuul (22348)"
Branch:master

commit f7dd7790f5c6e3149af4680ba521089328d1eb0c
Author: elajkat 
Date:   Fri Nov 4 16:51:03 2022 +0100

Fix bulk create without mac

Bulk port create without a mac address fails when Neutron calls
oslo_utils.netutils.get_ipv6_addr_by_EUI64, as the mac field of the port
is an ATTR_NOT_SPECIFIED Sentinel() object.
With some reshuffling of the code that fills the mac field, this can be
fixed.

Closes-Bug: #1995732
Related-Bug: #1954763

Change-Id: Id594003681f4755d8fd1af3b98e281c3109420f6


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1995732

Title:
  bulk port create: TypeError: Bad prefix type for generating IPv6
  address by EUI-64

Status in neutron:
  Fix Released

Bug description:
  source openrc admin admin
  export TOKEN="$( openstack token issue -f value -c id )"

  A single port create succeeds:
  curl -s -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -d "{\"port\":{\"name\":\"port0\",\"network_id\":\"$( openstack net show private -f value -c id )\"}}" -X POST http://127.0.0.1:9696/networking/v2.0/ports | json_pp
  ...

  But the same request via the bulk api fails:
  curl -s -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -d "{\"ports\":[{\"name\":\"port0-via-bulk\",\"network_id\":\"$( openstack net show private -f value -c id )\"}]}" -X POST http://127.0.0.1:9696/networking/v2.0/ports | json_pp
  {
     "NeutronError" : {
        "detail" : "",
        "message" : "Request Failed: internal server error while processing your request.",
        "type" : "HTTPInternalServerError"
     }
  }

  While in q-svc logs we have:
  nov 04 15:56:52 devstack0 neutron-server[101377]: ERROR neutron.pecan_wsgi.hooks.translation [None req-f5c79830-013a-4ae2-8c47-2102b20299e1 admin admin] POST failed.: TypeError: Bad prefix type for generating IPv6 address by EUI-64: fdd6:813:349::/64
  nov 04 15:56:52 devstack0 neutron-server[101377]: ERROR neutron.pecan_wsgi.hooks.translation Traceback (most recent call last):
  nov 04 15:56:52 devstack0 neutron-server[101377]: ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python3.10/dist-packages/oslo_utils/netutils.py", line 210, in get_ipv6_addr_by_EUI64
  nov 04 15:56:52 devstack0 neutron-server[101377]: ERROR neutron.pecan_wsgi.hooks.translation     eui64 = int(netaddr.EUI(mac).eui64())
  nov 04 15:56:52 devstack0 neutron-server[101377]: ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python3.10/dist-packages/netaddr/eui/__init__.py", line 389, in __init__
  nov 04 15:56:52 devstack0 neutron-server[101377]: ERROR neutron.pecan_wsgi.hooks.translation     self.value = addr
  nov 04 15:56:52 devstack0 neutron-server[101377]: ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python3.10/dist-packages/netaddr/eui/__init__.py", line 425, in _set_value
  nov 04 15:56:52 devstack0 neutron-server[101377]: ERROR neutron.pecan_wsgi.hooks.translation     self._value = module.str_to_int(value)
  nov 04 15:56:52 devstack0 neutron-server[101377]: ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python3.10/dist-packages/netaddr/strategy/eui48.py", line 178, in str_to_int
  nov 04 15:56:52 devstack0 neutron-server[101377]: ERROR neutron.pecan_wsgi.hooks.translation     raise TypeError('%r is not str() or unicode()!' % (addr,))
  nov 04 15:56:52 devstack0 neutron-server[101377]: ERROR neutron.pecan_wsgi.hooks.translation TypeError:  is not str() or unicode()!
  nov 04 15:56:52 devstack0 neutron-server[101377]: ERROR neutron.pecan_wsgi.hooks.translation
  nov 04 15:56:52 devstack0 neutron-server[101377]: ERROR neutron.pecan_wsgi.hooks.translation During handling of the above exception, another exception occurred:
  nov 04 15:56:52 devstack0 neutron-server[101377]: ERROR neutron.pecan_wsgi.hooks.translation
  nov 04 15:56:52 devstack0 neutron-server[101377]: ERROR neutron.pecan_wsgi.hooks.translation Traceback (most recent call last):
  nov 04 15:56:52 devstack0 neutron-server[101377]: ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python3.10/dist-packages/pecan/core.py", line 693, in __call__
  nov 04 15:56:52 devstack0 neutron-server[101377]: ERROR neutron.pecan_wsgi.hooks.translation     self.invoke_controller(controller, args, kwargs, state)
  nov 04 15:56:52 devstack0 neutron-server[101377]: ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python3.10/dist-packages/pecan/core.py", line 584, in invoke_controller
  nov 04 15:56:52 devstack0 neutron-server[101377]: ERROR neutron.pecan_wsgi.hook
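
  The commit message above says the fix reshuffles the bulk-create path so
  the mac field is filled before the EUI-64 address is derived; a sketch
  of that shape (names and placement are assumptions, not the actual
  patch):

  # Sketch only: make sure a real MAC exists before deriving the address.
  from neutron_lib import constants as n_const
  from oslo_utils import netutils

  def ipv6_addr_for_port(port, prefix, allocate_mac):
      mac = port.get("mac_address")
      if mac is n_const.ATTR_NOT_SPECIFIED or not mac:
          mac = allocate_mac()  # hypothetical helper: fill the mac first
          port["mac_address"] = mac
      return netutils.get_ipv6_addr_by_EUI64(prefix, mac)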

[Yahoo-eng-team] [Bug 1997922] Re: RuntimeError: duplicate mac found! both 'swp1' and 'swp3' have mac '32:98:88:9c:2d:29'

2022-12-12 Thread Robert Liu
** Also affects: oem-priority
   Importance: Undecided
   Status: New

** Changed in: oem-priority
   Importance: Undecided => High

** Tags added: originate-from-1998894

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1997922

Title:
  RuntimeError: duplicate mac found! both 'swp1' and 'swp3' have mac
  '32:98:88:9c:2d:29'

Status in cloud-init:
  Incomplete
Status in OEM Priority Project:
  New

Bug description:
  Hi,

  This is Aristo from the OEM Enablement team in Taiwan. I am currently
  enabling a device that has 1 Ethernet port and 4 Ethernet switch ports,
  and I get the following error on first boot:
  """
  [   22.855169] cloud-init[519]: Cloud-init v. 22.3.4-0ubuntu1~22.04.1 running 'init-local' at Fri, 25 Nov 2022 01:23:27 +. Up 22.75 seconds.
  [   23.745575] cloud-init[519]: 2022-11-25 01:23:28,899 - util.py[WARNING]: failed stage init-local
  [   23.764650] cloud-init[519]: failed run of stage init-local
  [   23.796379] cloud-init[519]: Traceback (most recent call last):
  [   23.812604] cloud-init[519]:   File "/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 767, in status_wrapper
  [   23.832472] cloud-init[519]:     ret = functor(name, args)
  [   23.848500] cloud-init[519]:   File "/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 433, in main_init
  [   23.869966] cloud-init[519]:     init.apply_network_config(bring_up=bring_up_interfaces)
  [   23.888410] cloud-init[519]:   File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 922, in apply_network_config
  [   23.908494] cloud-init[519]:     self.distro.networking.wait_for_physdevs(netcfg)
  [   23.928436] cloud-init[519]:   File "/usr/lib/python3/dist-packages/cloudinit/distros/networking.py", line 148, in wait_for_physdevs
  [   23.952413] cloud-init[519]:     present_macs = self.get_interfaces_by_mac().keys()
  [   23.972380] cloud-init[519]:   File "/usr/lib/python3/dist-packages/cloudinit/distros/networking.py", line 75, in get_interfaces_by_mac
  [   23.996508] cloud-init[519]:     return net.get_interfaces_by_mac(
  [   24.012399] cloud-init[519]:   File "/usr/lib/python3/dist-packages/cloudinit/net/__init__.py", line 926, in get_interfaces_by_mac
  [   24.036393] cloud-init[519]:     return get_interfaces_by_mac_on_linux(
  [   24.056387] cloud-init[519]:   File "/usr/lib/python3/dist-packages/cloudinit/net/__init__.py", line 1007, in get_interfaces_by_mac_on_linux
  [   24.080426] cloud-init[519]:     raise RuntimeError(
  [   24.099221] cloud-init[519]: RuntimeError: duplicate mac found! both 'swp1' and 'swp3' have mac '9a:57:7d:78:47:c0'


  """

  
  The network-config is
  """
  #cloud-config
  version: 2
  ethernets:
enp0s0f0:
  dhcp4: true
  optional: true
  """

  
  Here is all the interfaces
  """
  $ ip a
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  2: enp0s0f0:  mtu 1500 qdisc noop state DOWN group default qlen 1000
      link/ether 16:11:29:db:df:62 brd ff:ff:ff:ff:ff:ff
  3: enp0s0f2:  mtu 1520 qdisc noop state DOWN group default qlen 1000
      link/ether 9a:57:7d:78:47:c0 brd ff:ff:ff:ff:ff:ff
  4: can0:  mtu 16 qdisc noop state DOWN group default qlen 10
      link/can
  5: can1:  mtu 16 qdisc noop state DOWN group default qlen 10
      link/can
  6: swp0@enp0s0f2:  mtu 1500 qdisc noop state DOWN group default qlen 1000
      link/ether 9a:57:7d:78:47:c0 brd ff:ff:ff:ff:ff:ff
  7: swp1@enp0s0f2:  mtu 1500 qdisc noop state DOWN group default qlen 1000
      link/ether 9a:57:7d:78:47:c0 brd ff:ff:ff:ff:ff:ff
  8: swp2@enp0s0f2:  mtu 1500 qdisc noop state DOWN group default qlen 1000
      link/ether 9a:57:7d:78:47:c0 brd ff:ff:ff:ff:ff:ff
  9: swp3@enp0s0f2:  mtu 1500 qdisc noop state DOWN group default qlen 1000
      link/ether 9a:57:7d:78:47:c0 brd ff:ff:ff:ff:ff:ff

  """

  Please let me know if you need any further info from me, thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1997922/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999476] [NEW] Schema for "users" key is too strict

2022-12-12 Thread Robie Basak
Public bug reported:

Against cc_users_groups, I have some configurations that use:

#cloud-config
disable_root: false
users: []

This used to work. I think it's a reasonable configuration: "please make
root work, and don't create a non-root user". I'm doing this in order to
have management of users/groups done later, not by cloud-init, on these
instances.

On booting Ubuntu 22.04 (cloud-init 22.3.4-0ubuntu1~22.04.1), I got:

[   30.621713] cloud-init[505]: 2022-12-13 03:35:09,073 -
schema.py[WARNING]: Invalid cloud-config provided: Please run 'sudo
cloud-init schema --system' to see the schema errors.

# cloud-init schema --system
Cloud config schema deprecations: 
Error:
Cloud config schema errors: users: [] is too short

This seems to be caused by cloudinit/config/schemas/schema-cloud-
config-v1.json specifying "minItems": 1 against the "users" key.

Expected behaviour: the above configuration works as before, with root
login permitted but no other users created.

Actual behaviour: unexpected schema warning. However I did get root
access as desired, and the ubuntu user does exist. So perhaps the
validation isn't enforced yet?

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1999476

Title:
  Schema for "users" key is too strict

Status in cloud-init:
  New

Bug description:
  Against cc_users_groups, I have some configurations that use:

  #cloud-config
  disable_root: false
  users: []

  This used to work. I think it's a reasonable configuration: "please
  make root work, and don't create a non-root user". I'm doing this in
  order to have management of users/groups done later, not by cloud-
  init, on these instances.

  On booting Ubuntu 22.04 (cloud-init 22.3.4-0ubuntu1~22.04.1), I got:

  [   30.621713] cloud-init[505]: 2022-12-13 03:35:09,073 -
  schema.py[WARNING]: Invalid cloud-config provided: Please run 'sudo
  cloud-init schema --system' to see the schema errors.

  # cloud-init schema --system
  Cloud config schema deprecations: 
  Error:
  Cloud config schema errors: users: [] is too short

  This seems to be caused by cloudinit/config/schemas/schema-cloud-
  config-v1.json specifying "minItems": 1 against the "users" key.

  Expected behaviour: the above configuration works as before, with root
  login permitted but no other users created.

  Actual behaviour: unexpected schema warning. However I did get root
  access as desired, and the ubuntu user does exist. So perhaps the
  validation isn't enforced yet?
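
  For reference, the constraint is easy to reproduce with the jsonschema
  library (a sketch mirroring the "minItems": 1 described above, not the
  full cloud-init schema):

  # Sketch: a minimal schema fragment showing why [] is rejected.
  from jsonschema import ValidationError, validate

  schema = {
      "type": "object",
      "properties": {"users": {"type": "array", "minItems": 1}},
  }
  try:
      validate({"users": []}, schema)
  except ValidationError as e:
      print(e.message)  # "[] is too short"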

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1999476/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp