[Yahoo-eng-team] [Bug 2045383] [NEW] [xena] nftables periodic jobs fails with RETRY_LIMIT

2023-11-30 Thread yatin
Public bug reported:

Since 18th Nov the nftables jobs have been failing with RETRY_LIMIT and no
logs are available. These jobs fail quickly, and the last task seen running
in the console is "Preparing job workspace".

This looks like a regression from zuul change
https://review.opendev.org/c/zuul/zuul/+/900489.

@fungi checked the executor logs and the job fails with:
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: [e: a4e70e9b208f479e98334c09305d2013] [build: 3c41d75624274d2e8d7fd62ae332c31d] Exception while executing job
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   Traceback (most recent call last):
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:     File "/usr/local/lib/python3.11/site-packages/git/util.py", line 1125, in __getitem__
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:       return getattr(self, index)
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:     File "/usr/local/lib/python3.11/site-packages/git/util.py", line 1114, in __getattr__
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:       return list.__getattribute__(self, attr)
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   AttributeError: 'IterableList' object has no attribute '2.3.0'
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   The above exception was the direct cause of the following exception:
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   Traceback (most recent call last):
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:     File "/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 1185, in do_execute
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:       self._execute()
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:     File "/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 1556, in _execute
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:       self.preparePlaybooks(args)
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:     File "/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 2120, in preparePlaybooks
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:       self.preparePlaybook(jobdir_playbook, playbook, args)
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:     File "/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 2177, in preparePlaybook
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:       self.prepareRole(jobdir_playbook, role, args)
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:     File "/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 2367, in prepareRole
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:       self.prepareZuulRole(jobdir_playbook, role, args, role_info)
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:     File "/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 2423, in prepareZuulRole
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:       path = self.checkoutUntrustedProject(project, branch, args)
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:     File "/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 2295, in checkoutUntrustedProject
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:       repo.getBranchHead(branch).hexsha
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:     File "/usr/local/lib/python3.11/site-packages/zuul/merger/merger.py", line 465, in getBranchHead
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:       branch_head = repo.heads[branch]
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:     File "/usr/local/lib/python3.11/site-packages/git/util.py", line 1127, in __getitem__
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:       raise IndexError("No item found with id %r" % (self._prefix + index)) from e
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   IndexError: No item found with id '2.3.0'

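The final error is easy to reproduce in isolation with GitPython (a minimal
sketch; the clone path below is hypothetical): repo.heads only contains local
branches, so looking up a ref such as '2.3.0' that is not a branch raises
exactly the IndexError shown in the executor log.

from git import Repo  # GitPython, the library in the traceback above

repo = Repo("/tmp/neutron-tempest-plugin")  # hypothetical path to any local clone
try:
    # repo.heads is an IterableList of local branch heads; '2.3.0' is the pinned
    # version, not a branch name, so __getitem__ falls through to IndexError.
    head = repo.heads["2.3.0"]
except IndexError as exc:
    print(exc)  # No item found with id '2.3.0'
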
The parent job[1], which pins neutron-tempest-plugin to 2.3.0, does pass,
but the inherited jobs do not; the same can be seen in the test patch[2].

Builds: https://zuul.openstack.org/builds?job_name=neutron-linuxbridge-tempest-plugin-scenario-nftables&job_name=neutron-ovs-tempest-plugin-scenario-iptables_hybrid-nftables&project=openstack%2Fneutron&branch=stable%2Fxena&skip=0

[1] 
https://zuul.openstack.org/builds?job_name=neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-xena
[2] https://review.opendev.org/c/openstack/neutron/+/902296

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.

[Yahoo-eng-team] [Bug 2045287] Re: Error in charm when bootstrapping OpenStack with Sunbeam

2023-11-30 Thread Takashi Kajinami
This does not really appear to be a neutron problem at this point, and
should be investigated from Sunbeam's perspective until the problem is
narrowed down to something wrong with Neutron.

** Project changed: neutron => charm-neutron-api

** Project changed: charm-neutron-api => charm-ops-sunbeam

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2045287

Title:
  Error in charm when bootstrapping OpenStack with Sunbeam

Status in Ops Sunbeam:
  New

Bug description:
  When trying to bootstrap OpenStack using Sunbeam, there is an error in
  the neutron charm, which looks like the following:

  * In app view:

  neutron           waiting      1  neutron-k8s  2023.1/stable  53  10.152.183.36  no  installing agent

  * In unit view:

  neutron/0*  blocked  idle  10.1.163.29  (workload) Error in charm (see logs): timed out waiting for change 2 (301 seconds)

  * Looking at the logs for the error, they look like this:

  2023-11-30T09:27:16.903Z [container-agent] 2023-11-30 09:27:16 INFO juju-log identity-service:85: Syncing database...
  2023-11-30T09:32:18.000Z [container-agent] 2023-11-30 09:32:18 ERROR juju-log identity-service:85: Exception raised in section 'Bootstrapping': timed out waiting for change 2 (301 seconds)
  2023-11-30T09:32:18.008Z [container-agent] 2023-11-30 09:32:18 ERROR juju-log identity-service:85: Traceback (most recent call last):
  2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/guard.py", line 91, in guard
  2023-11-30T09:32:18.008Z [container-agent]     yield
  2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/charm.py", line 265, in configure_charm
  2023-11-30T09:32:18.008Z [container-agent]     self.configure_unit(event)
  2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/charm.py", line 479, in configure_unit
  2023-11-30T09:32:18.008Z [container-agent]     self.run_db_sync()
  2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/job_ctrl.py", line 74, in wrapped_f
  2023-11-30T09:32:18.008Z [container-agent]     f(charm, *args, **kwargs)
  2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/./src/charm.py", line 302, in run_db_sync
  2023-11-30T09:32:18.008Z [container-agent]     super().run_db_sync()
  2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/job_ctrl.py", line 74, in wrapped_f
  2023-11-30T09:32:18.008Z [container-agent]     f(charm, *args, **kwargs)
  2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/charm.py", line 549, in run_db_sync
  2023-11-30T09:32:18.008Z [container-agent]     self._retry_db_sync(cmd)
  2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/tenacity/__init__.py", line 289, in wrapped_f
  2023-11-30T09:32:18.008Z [container-agent]     return self(f, *args, **kw)
  2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/tenacity/__init__.py", line 379, in __call__
  2023-11-30T09:32:18.008Z [container-agent]     do = self.iter(retry_state=retry_state)
  2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/tenacity/__init__.py", line 314, in iter
  2023-11-30T09:32:18.008Z [container-agent]     return fut.result()
  2023-11-30T09:32:18.008Z [container-agent]   File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
  2023-11-30T09:32:18.008Z [container-agent]     return self.__get_result()
  2023-11-30T09:32:18.008Z [container-agent]   File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
  2023-11-30T09:32:18.008Z [container-agent]     raise self._exception
  2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/tenacity/__init__.py", line 382, in __call__
  2023-11-30T09:32:18.008Z [container-agent]     result = fn(*args, **kwargs)
  2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/charm.py", line 529, in _retry_db_sync
  2023-11-30T09:32:18.008Z [container-agent]     out, warnings = process.wait_output()
  2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops/pebble.py", line 1354, in wait_output
  2023-11-30T09:32:18.008Z [container-agent]     exit_code: int = self._wait()
  2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops/pebble.py", line 1294, in _wait
  2023-11-30T09:

[Yahoo-eng-team] [Bug 2045287] [NEW] Error in charm when bootstrapping OpenStack with Sunbeam

2023-11-30 Thread Brian G. Rodiles Delgado
Public bug reported:

When trying to bootstrap OpenStack using Sunbeam, there is an error in
the neutron charm, which looks like the following:

* In app view:

neutron           waiting      1  neutron-k8s  2023.1/stable  53  10.152.183.36  no  installing agent

* In unit view:

neutron/0*  blocked  idle  10.1.163.29  (workload) Error in charm (see logs): timed out waiting for change 2 (301 seconds)

* Looking at the logs for the error, they look like this:

2023-11-30T09:27:16.903Z [container-agent] 2023-11-30 09:27:16 INFO juju-log identity-service:85: Syncing database...
2023-11-30T09:32:18.000Z [container-agent] 2023-11-30 09:32:18 ERROR juju-log identity-service:85: Exception raised in section 'Bootstrapping': timed out waiting for change 2 (301 seconds)
2023-11-30T09:32:18.008Z [container-agent] 2023-11-30 09:32:18 ERROR juju-log identity-service:85: Traceback (most recent call last):
2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/guard.py", line 91, in guard
2023-11-30T09:32:18.008Z [container-agent]     yield
2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/charm.py", line 265, in configure_charm
2023-11-30T09:32:18.008Z [container-agent]     self.configure_unit(event)
2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/charm.py", line 479, in configure_unit
2023-11-30T09:32:18.008Z [container-agent]     self.run_db_sync()
2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/job_ctrl.py", line 74, in wrapped_f
2023-11-30T09:32:18.008Z [container-agent]     f(charm, *args, **kwargs)
2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/./src/charm.py", line 302, in run_db_sync
2023-11-30T09:32:18.008Z [container-agent]     super().run_db_sync()
2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/job_ctrl.py", line 74, in wrapped_f
2023-11-30T09:32:18.008Z [container-agent]     f(charm, *args, **kwargs)
2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/charm.py", line 549, in run_db_sync
2023-11-30T09:32:18.008Z [container-agent]     self._retry_db_sync(cmd)
2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/tenacity/__init__.py", line 289, in wrapped_f
2023-11-30T09:32:18.008Z [container-agent]     return self(f, *args, **kw)
2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/tenacity/__init__.py", line 379, in __call__
2023-11-30T09:32:18.008Z [container-agent]     do = self.iter(retry_state=retry_state)
2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/tenacity/__init__.py", line 314, in iter
2023-11-30T09:32:18.008Z [container-agent]     return fut.result()
2023-11-30T09:32:18.008Z [container-agent]   File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
2023-11-30T09:32:18.008Z [container-agent]     return self.__get_result()
2023-11-30T09:32:18.008Z [container-agent]   File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
2023-11-30T09:32:18.008Z [container-agent]     raise self._exception
2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/tenacity/__init__.py", line 382, in __call__
2023-11-30T09:32:18.008Z [container-agent]     result = fn(*args, **kwargs)
2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/charm.py", line 529, in _retry_db_sync
2023-11-30T09:32:18.008Z [container-agent]     out, warnings = process.wait_output()
2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops/pebble.py", line 1354, in wait_output
2023-11-30T09:32:18.008Z [container-agent]     exit_code: int = self._wait()
2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops/pebble.py", line 1294, in _wait
2023-11-30T09:32:18.008Z [container-agent]     change = self._client.wait_change(self._change_id, timeout=timeout)
2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops/pebble.py", line 1794, in wait_change
2023-11-30T09:32:18.008Z [container-agent]     return self._wait_change_using_wait(change_id, timeout)
2023-11-30T09:32:18.008Z [container-agent]   File "/var/lib/juju/agents/unit-neutron-0/charm/venv/ops/pebble.py", line 1820, in _wait_change_using_wait
2023-11-30T09:32:18.008Z [container-agent]     raise TimeoutError(f'timed out waiting for change {change_id} ({timeout} seconds)')
2
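
For reference, a minimal sketch of where the "timed out waiting for change N
(301 seconds)" message comes from. The charm class, container name and
command below are illustrative, not taken from the sunbeam charm code: the db
sync runs through Pebble with a timeout, and wait_output() raises once Pebble
stops waiting for the exec change to complete.

import ops
from ops import pebble


class NeutronK8SCharm(ops.CharmBase):  # hypothetical charm class
    def _run_db_sync(self) -> str:
        # Illustrative container and command; the real charm drives this via
        # ops_sunbeam's run_db_sync()/_retry_db_sync() seen in the traceback.
        container = self.unit.get_container("neutron-server")
        process = container.exec(
            ["neutron-db-manage", "upgrade", "head"],
            timeout=300,  # the log above reports a ~301-second wait
        )
        try:
            stdout, _stderr = process.wait_output()
        except pebble.TimeoutError:
            # Raised from wait_change() in ops/pebble.py; the charm surfaces
            # it as "timed out waiting for change 2 (301 seconds)".
            raise
        return stdout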

[Yahoo-eng-team] [Bug 2045270] [NEW] [xena/yoga/zed] devstack-tobiko-neutron job broken

2023-11-30 Thread yatin
Public bug reported:

These jobs are broken because they run on focal and run tests that should
not run on focal, as per
https://review.opendev.org/c/openstack/neutron/+/871982 (included in
antelope and later).


Example failure 
https://d5a9a78dc7c742990ec8-242e23f01f4a3c50d38acf4a24e0c600.ssl.cf1.rackcdn.com/periodic/opendev.org/openstack/neutron/stable/yoga/devstack-tobiko-neutron/21d79ad/tobiko_results_02_create_neutron_resources_neutron.html

Builds: https://zuul.openstack.org/builds?job_name=devstack-tobiko-neutron&project=openstack%2Fneutron&branch=stable%2Fxena&branch=stable%2Fyoga&branch=stable%2Fzed&skip=0

That patch is not available before zed, so for older branches those tests
need to be skipped in some other way; one generic approach is sketched
below.
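
This is only an illustration of the general idea, not the mechanism added by
the referenced patch; the test class and method names are hypothetical. A
test can guard itself on the host release so that it self-skips on
focal-based nodes:

import unittest


def running_on_focal() -> bool:
    # Focal-based CI nodes identify themselves in /etc/os-release.
    try:
        with open("/etc/os-release") as fh:
            os_release = fh.read()
    except OSError:
        return False
    return "VERSION_CODENAME=focal" in os_release


class CreateNeutronResourcesTest(unittest.TestCase):  # hypothetical test class
    @unittest.skipIf(running_on_focal(),
                     "not supported on focal-based periodic jobs")
    def test_create_neutron_resources(self):
        ...


if __name__ == "__main__":
    unittest.main()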

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2045270

Title:
  [xena/yoga/zed] devstack-tobiko-neutron job broken

Status in neutron:
  New

Bug description:
  These jobs are broken because they run on focal and run tests that should
  not run on focal, as per
  https://review.opendev.org/c/openstack/neutron/+/871982 (included in
  antelope and later).

  
  Example failure 
https://d5a9a78dc7c742990ec8-242e23f01f4a3c50d38acf4a24e0c600.ssl.cf1.rackcdn.com/periodic/opendev.org/openstack/neutron/stable/yoga/devstack-tobiko-neutron/21d79ad/tobiko_results_02_create_neutron_resources_neutron.html

  Builds: https://zuul.openstack.org/builds?job_name=devstack-tobiko-neutron&project=openstack%2Fneutron&branch=stable%2Fxena&branch=stable%2Fyoga&branch=stable%2Fzed&skip=0

  That patch is not available before zed, so for older branches those tests
  need to be skipped in some other way.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2045270/+subscriptions




[Yahoo-eng-team] [Bug 2045237] [NEW] [bulk] there is no clean operation if bulk create ports fails

2023-11-30 Thread Liu Xie
Public bug reported:

When we used the bulk API to create ports on one subnet, e.g.
'935fe38e-743f-45e5-a646-4ebbbf16ade7', errors occurred during creation.
We then used SQL queries to find the IPs that differ between ipallocations
and ipamallocations:

ip_address in ipallocations but not in ipamallocations:

MariaDB [neutron]> select count(*) from ipallocations where subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7' && ip_address not in (select ip_address from ipamallocations where ipam_subnet_id in (select id from ipamsubnets where neutron_subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7'));
+----------+
| count(*) |
+----------+
|      873 |
+----------+
1 row in set (0.01 sec)

ip_address in ipamallocations but not in ipallocations:

MariaDB [neutron]> select count(*) from ipamallocations where ipam_subnet_id in (select id from ipamsubnets where neutron_subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7') && ip_address not in (select ip_address from ipallocations where subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7');
+----------+
| count(*) |
+----------+
|       63 |
+----------+
1 row in set (0.01 sec)

It seems that resources are left behind when the bulk port creation fails,
and we cannot find any operation that cleans up the failed ports.
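
For context, a rough sketch of the kind of request involved (the endpoint,
token and network ID are placeholders): the bulk port create is a single
POST /v2.0/ports carrying a list of port definitions, and the report above
suggests that when such a request fails part-way, ipallocations and
ipamallocations can end up out of sync.

import requests

NEUTRON_URL = "http://controller:9696/v2.0"  # placeholder endpoint
TOKEN = "<keystone-token>"                   # placeholder token
NETWORK_ID = "<network-uuid>"                # network containing the subnet
SUBNET_ID = "935fe38e-743f-45e5-a646-4ebbbf16ade7"  # subnet from this report

# One bulk request creating many ports on the same subnet.
ports = [
    {
        "network_id": NETWORK_ID,
        "fixed_ips": [{"subnet_id": SUBNET_ID}],
        "name": f"bulk-port-{i}",
    }
    for i in range(1000)
]

resp = requests.post(
    f"{NEUTRON_URL}/ports",
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
    json={"ports": ports},
)
print(resp.status_code)
if not resp.ok:
    # On failure, compare ipallocations and ipamallocations as shown above.
    print(resp.text)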

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2045237

Title:
  [bulk] there is no clean operation if bulk create ports fails

Status in neutron:
  New

Bug description:
  When we used the bulk API to create ports on one subnet, e.g.
  '935fe38e-743f-45e5-a646-4ebbbf16ade7', errors occurred during creation.
  We then used SQL queries to find the IPs that differ between ipallocations
  and ipamallocations:

  ip_address in ipallocations but not in ipamallocations:

  MariaDB [neutron]> select count(*) from ipallocations where subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7' && ip_address not in (select ip_address from ipamallocations where ipam_subnet_id in (select id from ipamsubnets where neutron_subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7'));
  +----------+
  | count(*) |
  +----------+
  |      873 |
  +----------+
  1 row in set (0.01 sec)

  ip_address in ipamallocations but not in ipallocations:

  MariaDB [neutron]> select count(*) from ipamallocations where ipam_subnet_id in (select id from ipamsubnets where neutron_subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7') && ip_address not in (select ip_address from ipallocations where subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7');
  +----------+
  | count(*) |
  +----------+
  |       63 |
  +----------+
  1 row in set (0.01 sec)

  It seems that resources are left behind when the bulk port creation fails,
  and we cannot find any operation that cleans up the failed ports.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2045237/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp