[Yahoo-eng-team] [Bug 1934912] Re: Router update fails for ports with allowed_address_pairs containing IP range in CIDR notation

2021-10-20 Thread Chris MacNaughton
This bug was fixed in the package neutron - 2:18.1.1-0ubuntu2~cloud0
---

 neutron (2:18.1.1-0ubuntu2~cloud0) focal-wallaby; urgency=medium
 .
   * New upstream release for the Ubuntu Cloud Archive.
 .
 neutron (2:18.1.1-0ubuntu2) hirsute; urgency=medium
 .
   * d/p/lp1934912-set-arp-entries-only-for-single-ip.patch: Cherry-pick
 upstream patch (LP: #1934912)
 .
 neutron (2:18.1.1-0ubuntu1) hirsute; urgency=medium
 .
   [ Corey Bryant ]
   * d/control: Drop neutron-fwaas dependency as it is no longer maintained
 (LP: #1934129).
   * d/p/revert-rely-on-worker-count-for-hashring-caching.patch: Dropped.
 Fixed upstream by https://review.opendev.org/c/openstack/neutron/+/800679
 in the 18.1.1 stable release.
 .
   [ Chris MacNaughton ]
   * New stable point release for OpenStack Wallaby (LP: #1943709).
   * d/p/series: Remove reference to removed patch.


** Changed in: cloud-archive/wallaby
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1934912

Title:
  Router update fails for ports with allowed_address_pairs containing IP
  range in CIDR notation

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive ussuri series:
  Fix Committed
Status in Ubuntu Cloud Archive victoria series:
  Fix Committed
Status in Ubuntu Cloud Archive wallaby series:
  Fix Released
Status in Ubuntu Cloud Archive xena series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Focal:
  Fix Released
Status in neutron source package in Hirsute:
  Fix Released
Status in neutron source package in Impish:
  Fix Released

Bug description:
  With https://review.opendev.org/c/openstack/neutron/+/792791 neutron built
  from branch `stable/train` fails to update routers with ports containing an
  `allowed_address_pair` with an IP address range in CIDR notation, e.g.:
  ```
  openstack port show 135515bf-6cdf-45d7-affa-c775d2a43ce1 -f value -c allowed_address_pairs
  [{'mac_address': 'fa:16:3e:1e:c4:f1', 'ip_address': '192.168.0.0/16'}]
  ```

  I could not find definitive information on whether this is an allowed
  value for allowed_address_pairs, but at least the openstack/magnum
  project makes use of it.
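Whether a pair value names a single host or a real range can be checked with the standard library alone. The sketch below is illustrative only and is not neutron's implementation (neutron uses its own common_utils.is_cidr_host() helper):

```python
import ipaddress

def is_single_host(value):
    """Return True if an allowed-address-pair value names exactly one host
    (a bare IP, or a /32 / /128 CIDR), False for a real range.

    Illustrative helper only -- not neutron's actual check."""
    net = ipaddress.ip_network(value, strict=False)
    return net.num_addresses == 1

print(is_single_host("10.0.0.1"))        # True
print(is_single_host("10.0.0.1/32"))     # True
print(is_single_host("192.168.0.0/16"))  # False
```

A range like 192.168.0.0/16 is a valid pair value, but it must not be treated as a single host when programming ARP entries, which is exactly the distinction the patch above restores.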

  Once the above is set, neutron-l3-agent logs the errors shown in
  http://paste.openstack.org/show/807237/ and connectivity to all
  resources behind the router stops.

  Steps to reproduce:
  Set up an openstack environment with neutron built from git branch stable/train with OVS, DVR and router HA in a multinode deployment on ubuntu bionic.

  Create a test environment:
  openstack network create test
  openstack subnet create --network test --subnet-range 10.0.0.0/24 test
  openstack router create --ha --distributed test
  openstack router set --external-gateway  test
  openstack router add subnet test test
  openstack server create --image  --flavor m1.small --security-group  --network test test
  openstack security group create icmp
  openstack security group rule create --protocol icmp --ingress icmp
  openstack server add security group test icmp
  openstack floating ip create 
  openstack server add floating ip test 
  ping 
  openstack port set --allowed-address ip-address=192.168.0.0/16 
  ping 

  Observe loss of ping after setting allowed_address_pairs.
  Revert https://review.opendev.org/c/openstack/neutron/+/792791 and redeploy neutron
  ping 
  Observe reestablishment of the connection.

  Please let me know if you need any other information.


  +

  SRU:

  [Impact]
  VMs with floating IPs are unreachable from outside

  [Test Case]
  Create a test environment on bionic ussuri
  openstack network create test
  openstack subnet create --network test --subnet-range 10.0.0.0/24 test
  openstack router create --ha --distributed test
  openstack router set --external-gateway  test
  openstack router add subnet test test
  openstack server create --image  --flavor m1.small --security-group  --network test test
  openstack security group create icmp
  openstack security group rule create --protocol icmp --ingress icmp
  openstack server add security group test icmp
  openstack floating ip create 
  openstack server add floating ip test 
  ping 
  openstack port set --allowed-address ip-address=192.168.0.0/16 
  openstack router set --disable 
  openstack router set --enable 
  ping 

  # ping should be successful after router is enabled.

  [Regression Potential]
  The only possibilities for allowed_address_pair are either an IP or a CIDR. There is no chance of garbage values since the value is validated during port update with allowed_address_pair. The edge case of an IP in CIDR notation like /32 is already covered by the common_utils.is_cidr_host() function call. All the upstream CI bu

[Yahoo-eng-team] [Bug 1774249] Re: update_available_resource will raise DiskNotFound after resize but before confirm

2021-10-20 Thread Launchpad Bug Tracker
This bug was fixed in the package nova - 2:17.0.13-0ubuntu4

---
nova (2:17.0.13-0ubuntu4) bionic; urgency=medium

  * d/p/libvirt-Ignore-DiskNotFound-during-update_available.patch: Ignore
DiskNotFound during update_available_resource (LP: #1774249).

 -- Alin-Gabriel Serdean   Tue, 21 Sep 2021 18:29:56 +

** Changed in: nova (Ubuntu Bionic)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1774249

Title:
  update_available_resource will raise DiskNotFound after resize but
  before confirm

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive queens series:
  Triaged
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  Triaged
Status in OpenStack Compute (nova) pike series:
  Fix Committed
Status in OpenStack Compute (nova) queens series:
  Fix Committed
Status in OpenStack Compute (nova) rocky series:
  Fix Committed
Status in OpenStack Compute (nova) stein series:
  Fix Released
Status in OpenStack Compute (nova) train series:
  Fix Released
Status in nova package in Ubuntu:
  Invalid
Status in nova source package in Bionic:
  Fix Released

Bug description:
  Original reported in RH Bugzilla:
  https://bugzilla.redhat.com/show_bug.cgi?id=1584315

  Tested on OSP12 (Pike), but the issue appears to still be present on
  master. It should only occur if nova-compute is configured to use local
  file instance storage.

  Create instance A on compute X

  Resize instance A to compute Y
    Domain is powered off
    /var/lib/nova/instances/ renamed to _resize on X
    Domain is *not* undefined

  On compute X:
    update_available_resource runs as a periodic task
    First action is to update self
    rt calls driver.get_available_resource()
    ...calls _get_disk_over_committed_size_total
    ...iterates over all defined domains, including the ones whose disks we renamed
    ...fails because a referenced disk no longer exists

  Results in errors in nova-compute.log:

  2018-05-30 02:17:08.647 1 ERROR nova.compute.manager [req-bd52371f-c6ec-4a83-9584-c00c5377acd8 - - - - -] Error updating resources for node compute-0.localdomain.: DiskNotFound: No disk at /var/lib/nova/instances/f3ed9015-3984-43f4-b4a5-c2898052b47d/disk
  2018-05-30 02:17:08.647 1 ERROR nova.compute.manager Traceback (most recent call last):
  2018-05-30 02:17:08.647 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6695, in update_available_resource_for_node
  2018-05-30 02:17:08.647 1 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
  2018-05-30 02:17:08.647 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 641, in update_available_resource
  2018-05-30 02:17:08.647 1 ERROR nova.compute.manager     resources = self.driver.get_available_resource(nodename)
  2018-05-30 02:17:08.647 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5892, in get_available_resource
  2018-05-30 02:17:08.647 1 ERROR nova.compute.manager     disk_over_committed = self._get_disk_over_committed_size_total()
  2018-05-30 02:17:08.647 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7393, in _get_disk_over_committed_size_total
  2018-05-30 02:17:08.647 1 ERROR nova.compute.manager     config, block_device_info)
  2018-05-30 02:17:08.647 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7301, in _get_instance_disk_info_from_config
  2018-05-30 02:17:08.647 1 ERROR nova.compute.manager     dk_size = disk_api.get_allocated_disk_size(path)
  2018-05-30 02:17:08.647 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/virt/disk/api.py", line 156, in get_allocated_disk_size
  2018-05-30 02:17:08.647 1 ERROR nova.compute.manager     return images.qemu_img_info(path).disk_size
  2018-05-30 02:17:08.647 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/virt/images.py", line 57, in qemu_img_info
  2018-05-30 02:17:08.647 1 ERROR nova.compute.manager     raise exception.DiskNotFound(location=path)
  2018-05-30 02:17:08.647 1 ERROR nova.compute.manager DiskNotFound: No disk at /var/lib/nova/instances/f3ed9015-3984-43f4-b4a5-c2898052b47d/disk

  And resource tracker is no longer updated. We can find lots of these
  in the gate.

  Note that change Icec2769bf42455853cbe686fb30fda73df791b25 nearly
  mitigates this, but doesn't because task_state is not set while the
  instance is awaiting confirm.
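The fix recorded in the Bionic changelog (ignoring DiskNotFound during update_available_resource) amounts to skipping domains whose disks have disappeared mid-resize instead of aborting the whole periodic task. A simplified sketch of that idea follows; this is not nova's actual code, and all names here are illustrative:

```python
class DiskNotFound(Exception):
    """Stand-in for nova.exception.DiskNotFound (illustrative)."""

def disk_over_committed_size_total(domains, get_allocated_disk_size):
    """Sum per-domain disk sizes, tolerating disks that vanished
    (e.g. renamed to <uuid>_resize while a resize awaits confirmation)."""
    total = 0
    for dom in domains:
        try:
            total += get_allocated_disk_size(dom)
        except DiskNotFound:
            # Disk moved or deleted under us; skip this domain rather
            # than failing the whole resource-tracker update.
            continue
    return total

# Toy backend: one domain's disk is missing, as after a resize.
sizes = {"instance-a": 10, "instance-b": 20}

def get_allocated_disk_size(dom):
    if dom not in sizes:
        raise DiskNotFound(dom)
    return sizes[dom]

print(disk_over_committed_size_total(
    ["instance-a", "instance-b", "instance-resizing"],
    get_allocated_disk_size))  # 30
```

The key design point is that a missing disk is treated as "contributes nothing" rather than as a fatal error, so one mid-resize instance no longer stops the resource tracker from updating.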

  
  [Impact] 

  See

[Yahoo-eng-team] [Bug 1940555] Re: Compute Component: Error: (pymysql.err.ProgrammingError) (1146, "Table 'nova_api.cell_mappings' doesn't exist")

2021-10-20 Thread Ronelle Landy
no trace in skiplist - closing out the tripleo branch

** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1940555

Title:
  Compute Component: Error: (pymysql.err.ProgrammingError) (1146, "Table
  'nova_api.cell_mappings' doesn't exist")

Status in OpenStack Compute (nova):
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-compute-master/7dac4e0/logs/undercloud/var/log/extra/podman/containers/nova_db_sync/stdout.log.txt.gz

  Is [api_database]/connection set in nova.conf?
  Is the cell0 database connection URL correct?
  Error: (pymysql.err.ProgrammingError) (1146, "Table 'nova_api.cell_mappings' doesn't exist")
  [SQL: SELECT cell_mappings.created_at AS cell_mappings_created_at, cell_mappings.updated_at AS cell_mappings_updated_at, cell_mappings.id AS cell_mappings_id, cell_mappings.uuid AS cell_mappings_uuid, cell_mappings.name AS cell_mappings_name, cell_mappings.transport_url AS cell_mappings_transport_url, cell_mappings.database_connection AS cell_mappings_database_connection, cell_mappings.disabled AS cell_mappings_disabled 
  FROM cell_mappings 
  WHERE cell_mappings.uuid = %(uuid_1)s 
   LIMIT %(param_1)s]
  [parameters: {'uuid_1': '----', 'param_1': 1}]
  (Background on this error at: http://sqlalche.me/e/14/f405)

  
  
https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-compute-master/7dac4e0/logs/undercloud/home/zuul/standalone_deploy.log.txt.gz

  + echo 'Running command: '\''/usr/bin/bootstrap_host_exec nova_conductor su nova -s /bin/bash -c '\''/usr/bin/nova-manage db sync '\'''\'''
  + exec /usr/bin/bootstrap_host_exec nova_conductor su nova -s /bin/bash -c ''\''/usr/bin/nova-manage' db sync \'
  2021-08-19 08:17:33.982762 | fa163e06-c6d2-5dfd-0459-197e | FATAL | Create containers managed by Podman for /var/lib/tripleo-config/container-startup-config/step_3 | standalone | error={"changed": false, "msg": "Failed containers: nova_api_db_sync, nova_api_map_cell0, nova_api_ensure_default_cell, nova_db_sync"}
  2021-08-19 08:17:33.983320 | fa163e06-c6d2-5dfd-0459-197e | TIMING | tripleo_container_manage : Create containers managed by Podman for /var/lib/tripleo-config/container-startup-config/step_3 | standalone | 0:19:23.159835 | 41.20s

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1940555/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1909008] Re: tempest.lib.exceptions.PreconditionFailed: Precondition Failed on standalone-full-tempest-api-master

2021-10-20 Thread Ronelle Landy
Fixed in neutron - no trace in skiplist - closing out tripleo branch

** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1909008

Title:
  tempest.lib.exceptions.PreconditionFailed: Precondition Failed on
  standalone-full-tempest-api-master

Status in neutron:
  New
Status in tripleo:
  Fix Released

Bug description:
  Error logs:

  ft1.7: neutron_tempest_plugin.api.test_revisions.TestRevisions.test_update_network_constrained_by_revision[id-4a26a4be-9c53-483c-bc50-b111]testtools.testresult.real._StringException: pythonlogging:'': {{{
  2020-12-15 19:33:06,577 211730 INFO [tempest.lib.common.rest_client] Request (TestRevisions:test_update_network_constrained_by_revision): 201 POST http://192.168.24.3:9696/v2.0/networks 0.749s
  2020-12-15 19:33:06,577 211730 DEBUG [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
  Body: {"network": {"name": "tempest-test-network--2021099203"}}
  Response - Headers: {'content-type': 'application/json', 'content-length': '626', 'x-openstack-request-id': 'req-1548254b-2055-4bad-a701-967ac66d821e', 'date': 'Tue, 15 Dec 2020 19:33:06 GMT', 'connection': 'close', 'status': '201', 'content-location': 'http://192.168.24.3:9696/v2.0/networks'}
  Body: b'{"network":{"id":"cef0d923-c7a0-44e3-a796-d17fd6b6d895","name":"tempest-test-network--2021099203","tenant_id":"2e6b43d3504c424a9740166604033168","admin_state_up":true,"mtu":1442,"status":"ACTIVE","subnets":[],"shared":false,"project_id":"2e6b43d3504c424a9740166604033168","qos_policy_id":null,"port_security_enabled":true,"dns_domain":"","router:external":false,"availability_zone_hints":[],"is_default":false,"availability_zones":[],"ipv4_address_scope":null,"ipv6_address_scope":null,"description":"","l2_adjacency":true,"tags":[],"created_at":"2020-12-15T19:33:05Z","updated_at":"2020-12-15T19:33:05Z","revision_number":1}}'
  2020-12-15 19:33:06,893 211730 INFO [tempest.lib.common.rest_client] Request (TestRevisions:test_update_network_constrained_by_revision): 412 PUT http://192.168.24.3:9696/v2.0/networks/cef0d923-c7a0-44e3-a796-d17fd6b6d895 0.314s
  2020-12-15 19:33:06,893 211730 DEBUG [tempest.lib.common.rest_client] Request - Headers: {'If-Match': 'revision_number=0', 'X-Auth-Token': ''}
  Body: {"network": {"name": "newnet"}}
  Response - Headers: {'content-length': '132', 'content-type': 'application/json', 'x-openstack-request-id': 'req-6892d0e6-7425-4eb3-ad04-c6791f805047', 'date': 'Tue, 15 Dec 2020 19:33:06 GMT', 'connection': 'close', 'status': '412', 'content-location': 'http://192.168.24.3:9696/v2.0/networks/cef0d923-c7a0-44e3-a796-d17fd6b6d895'}
  Body: b'{"NeutronError": {"type": "RevisionNumberConstraintFailed", "message": "Constrained to 0, but current revision is 1", "detail": ""}}'
  2020-12-15 19:33:07,060 211730 INFO [tempest.lib.common.rest_client] Request (TestRevisions:test_update_network_constrained_by_revision): 200 GET http://192.168.24.3:9696/v2.0/networks/cef0d923-c7a0-44e3-a796-d17fd6b6d895 0.166s
  2020-12-15 19:33:07,060 211730 DEBUG [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'content-type': 'application/json', 'content-length': '607', 'x-openstack-request-id': 'req-15563484-9fca-4cd0-9252-effc646c70f1', 'date': 'Tue, 15 Dec 2020 19:33:07 GMT', 'connection': 'close', 'status': '200', 'content-location': 'http://192.168.24.3:9696/v2.0/networks/cef0d923-c7a0-44e3-a796-d17fd6b6d895'}
  Body: b'{"network":{"id":"cef0d923-c7a0-44e3-a796-d17fd6b6d895","name":"tempest-test-network--2021099203","tenant_id":"2e6b43d3504c424a9740166604033168","admin_state_up":true,"mtu":1442,"status":"ACTIVE","subnets":[],"shared":false,"availability_zone_hints":[],"availability_zones":[],"ipv4_address_scope":null,"ipv6_address_scope":null,"router:external":false,"description":"","qos_policy_id":null,"port_security_enabled":true,"dns_domain":"","l2_adjacency":true,"tags":[],"created_at":"2020-12-15T19:33:05Z","updated_at":"2020-12-15T19:33:05Z","revision_number":1,"project_id":"2e6b43d3504c424a9740166604033168"}}'
  2020-12-15 19:33:07,520 211730 INFO [tempest.lib.common.rest_client] Request (TestRevisions:test_update_network_constrained_by_revision): 412 PUT http://192.168.24.3:9696/v2.0/networks/cef0d923-c7a0-44e3-a796-d17fd6b6d895 0.459s
  2020-12-15 19:33:07,521 211730 DEBUG [tempest.lib.common.rest_client] Request - Headers: {'If-Match': 'revision_number=1', 'X-Auth-Token': ''}
  Body: {"network": {"name": "newnet"}}
  Response - Headers: {'content-type': 'application/json', 'content-length': '132', '
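The 412 responses in this log come from neutron's revision-number optimistic locking (the If-Match: revision_number=N header). The semantics can be sketched in a few lines; this is a toy model, not neutron's actual code, and all names are illustrative:

```python
class PreconditionFailed(Exception):
    """Stand-in for the HTTP 412 RevisionNumberConstraintFailed error."""

class Network:
    def __init__(self, name):
        self.name = name
        self.revision_number = 1  # bumped on every successful update

    def update(self, expected_revision, name):
        # Mirrors If-Match: an update only applies if the caller's
        # expected revision matches the server's current one.
        if expected_revision != self.revision_number:
            raise PreconditionFailed(
                "Constrained to %d, but current revision is %d"
                % (expected_revision, self.revision_number))
        self.name = name
        self.revision_number += 1

net = Network("tempest-test-network")
try:
    net.update(0, "newnet")   # stale revision -> 412, nothing changes
except PreconditionFailed as e:
    print(e)                  # Constrained to 0, but current revision is 1
net.update(1, "newnet")       # matching revision succeeds
print(net.revision_number)    # 2
```

The test above deliberately sends a stale revision (0 against a network already at 1) and expects exactly this 412; the failure being investigated is about when that expectation is not met.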

[Yahoo-eng-team] [Bug 1943708] Re: neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_trunk_subport_lifecycle fails with Port already has an attached device

2021-10-20 Thread Ronelle Landy
entry no longer in the skiplist - closing out tripleo bug

** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1943708

Title:
  
neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_trunk_subport_lifecycle
  fails with  Port already has an attached device

Status in neutron:
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  
  neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_trunk_subport_lifecycle
  is failing in periodic-tripleo-ci-centos-8-standalone-full-tempest-scenario-master with

  Response - Headers: {'date': 'Mon, 13 Sep 2021 18:30:12 GMT', 'server': 'Apache', 'content-length': '1695', 'openstack-api-version': 'compute 2.1', 'x-openstack-nova-api-version': '2.1', 'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding', 'x-openstack-request-id': 'req-cbbe0384-f683-4bd9-990a-bbaceff70255', 'x-compute-request-id': 'req-cbbe0384-f683-4bd9-990a-bbaceff70255', 'connection': 'close', 'content-type': 'application/json', 'status': '200', 'content-location': 'http://192.168.24.3:8774/v2.1/servers/9da29ae5-2784-4cb2-8cb6-cd2ec19e7fbf'}
  Body: b'{"server": {"id": "9da29ae5-2784-4cb2-8cb6-cd2ec19e7fbf", "name": "tempest-server-test-2120448491", "status": "ACTIVE", "tenant_id": "f907072324a04823b5267ebfd078f139", "user_id": "81a2f96324174768a1aa435f2856272c", "metadata": {}, "hostId": "985e26b32ef2005c617cddf445634feda09e1eb51abe1b10032b9f9b", "image": {"id": "9add59d5-6458-46fc-b806-ad3b39a7ebfe", "links": [{"rel": "bookmark", "href": "http://192.168.24.3:8774/images/9add59d5-6458-46fc-b806-ad3b39a7ebfe"}]}, "flavor": {"id": "48b6ea74-8aeb-4086-99ac-c4a4d18398f6", "links": [{"rel": "bookmark", "href": "http://192.168.24.3:8774/flavors/48b6ea74-8aeb-4086-99ac-c4a4d18398f6"}]}, "created": "2021-09-13T18:27:35Z", "updated": "2021-09-13T18:30:11Z", "addresses": {"tempest-TrunkTest-398369782": [{"version": 4, "addr": "10.100.0.10", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:c4:11"}, {"version": 4, "addr": "192.168.24.162", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:c4:11"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "http://192.168.24.3:8774/v2.1/servers/9da29ae5-2784-4cb2-8cb6-cd2ec19e7fbf"}, {"rel": "bookmark", "href": "http://192.168.24.3:8774/servers/9da29ae5-2784-4cb2-8cb6-cd2ec19e7fbf"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TrunkTest-398369782", "OS-SRV-USG:launched_at": "2021-09-13T18:27:41.00", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-TrunkTest-398369782"}], "OS-EXT-STS:task_state": "deleting", "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}}'
  2021-09-13 18:30:14,588 234586 INFO [tempest.lib.common.rest_client] Request (TrunkTest:_run_cleanups): 404 GET http://192.168.24.3:8774/v2.1/servers/9da29ae5-2784-4cb2-8cb6-cd2ec19e7fbf 0.046s
  2021-09-13 18:30:14,589 234586 DEBUG [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'date': 'Mon, 13 Sep 2021 18:30:14 GMT', 'server': 'Apache', 'content-length': '111', 'openstack-api-version': 'compute 2.1', 'x-openstack-nova-api-version': '2.1', 'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 'x-openstack-request-id': 'req-28741ac7-529e-4f49-a12f-3e75e14e2a0e', 'x-compute-request-id': 'req-28741ac7-529e-4f49-a12f-3e75e14e2a0e', 'connection': 'close', 'content-type': 'application/json; charset=UTF-8', 'status': '404', 'content-location': 'http://192.168.24.3:8774/v2.1/servers/9da29ae5-2784-4cb2-8cb6-cd2ec19e7fbf'}
  Body: b'{"itemNotFound": {"code": 404, "message": "Instance 9da29ae5-2784-4cb2-8cb6-cd2ec19e7fbf could not be found."}}'
  }}}
  }}}

  Traceback (most recent call last):
    File "/usr/lib/python3.6/site-packages/neutron_tempest_plugin/scenario/test_trunk.py", line 266, in test_trunk_subport_lifecycle
      self.client.add_subports(vm2.trunk['id'], subports)
    File "/usr/lib/python3.6/site-packages/neutron_tempest_plugin/services/network/json/network_client.py", line 848, in add_subports
      return self._subports_action('add_subports', trunk_id, subports)
    File "/usr/lib/python3.6/site-packages/neutron_tempest_plugin/services/network/json/network_client.py", line 842, in _subports_action
      resp, body = self.put(uri, jsonutils.dumps({'sub_ports': subports}))
    File "/usr/lib/python3.6/site-packages/tempest/lib/common/rest_client.py", line 363, in put
      return self.request('PUT', url, extra_headers, headers, body, chunked)
    File "/

[Yahoo-eng-team] [Bug 1947547] Re: Sporadic metadata issues when creating OVN migration workload

2021-10-20 Thread OpenStack Infra
Reviewed: https://review.opendev.org/c/openstack/neutron/+/814357
Committed: https://opendev.org/openstack/neutron/commit/d49ce1652d31fb884285ed30e39ec10ef40c864d
Submitter: "Zuul (22348)"
Branch: master

commit d49ce1652d31fb884285ed30e39ec10ef40c864d
Author: Roman Safronov 
Date:   Mon Oct 18 09:23:08 2021 +0300

Fix OVN migration workload creation order

Currently workload VMs start before the subnet is connected to the
router. When DVR is enabled, this sometimes leaves one of the VMs
unable to get metadata.

Closes bug: #1947547

Change-Id: Ifd686d7ff452abd1226fbbc97f499e05102e4596


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1947547

Title:
  Sporadic metadata issues when creating OVN migration workload

Status in neutron:
  Fix Released

Bug description:
  When starting the OVN migration workload, some of the workload VMs (usually only one of them) are not accessible via ssh. As can be seen in the console logs, the VM is not able to retrieve metadata.
  This is basically due to bug https://bugs.launchpad.net/neutron/+bug/1813787 - nova
  starts the vm faster than the local dvr router is ready, so during the vm's boot
  process, when the vm asks for metadata, there is (yet) no haproxy ready to process
  those requests.
  If I create the router and attach the subnet before starting the VMs, the issue does not happen. So it seems like the order of the workload creation should be changed.
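The race described above can be illustrated with a toy model (purely illustrative names, not neutron or the migration tooling): the haproxy metadata proxy only exists once the subnet is attached to the router, and a VM asks for metadata exactly once, at boot.

```python
class ToyCloud:
    """Toy model of the race: the metadata proxy for a DVR router
    only comes up after the subnet is attached to that router."""

    def __init__(self):
        self.metadata_ready = False

    def attach_subnet_to_router(self):
        self.metadata_ready = True  # haproxy now serving metadata

    def boot_vm(self):
        # A VM fetches metadata once, during boot.
        return "got metadata" if self.metadata_ready else "no metadata"

# Pre-fix order: boot first, attach later -> the VM misses metadata.
cloud = ToyCloud()
result_before_fix = cloud.boot_vm()
cloud.attach_subnet_to_router()

# Fixed order (per the commit above): attach first, then boot.
cloud = ToyCloud()
cloud.attach_subnet_to_router()
result_after_fix = cloud.boot_vm()

print(result_before_fix)  # no metadata
print(result_after_fix)   # got metadata
```

Because the VM's metadata request happens once at boot, no amount of waiting afterwards fixes the first ordering; the only robust fix is reordering the creation steps, which is what the merged change does.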

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1947547/+subscriptions




[Yahoo-eng-team] [Bug 1947870] [NEW] Keystone Kerberos auth broken when delegate to HTTP

2021-10-20 Thread Sacha Pateyron
Public bug reported:

Keystone Kerberos works well when your openstack client
can talk to your KDC.

However, when the KDC is hidden it is not directly accessible
by our users, so we need to delegate the Kerberos auth
to HTTP to get a Keystone token; that's why we use the curl command.

From the Openstack client cli we get "Negotiate"
as auth_type -> it works. However, with curl we get "Basic"
as auth_type -> an error is raised.

That's why we proposed adding "Basic" as an authorized method for Kerberos.


https://review.opendev.org/c/openstack/keystone/+/814770

Patchset: 1efc0c5c6730c9066f47edf953bf805aec0fd3c0

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: http kerberos keystone negotiate train

** Tags added: kerberos keystone train

** Tags added: http negotiate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1947870

Title:
  Keystone Kerberos auth broken when delegate to HTTP

Status in OpenStack Identity (keystone):
  New

Bug description:
  Keystone Kerberos works well when your openstack client
  can talk to your KDC.

  However, when the KDC is hidden it is not directly accessible
  by our users, so we need to delegate the Kerberos auth
  to HTTP to get a Keystone token; that's why we use the curl command.

  From the Openstack client cli we get "Negotiate"
  as auth_type -> it works. However, with curl we get "Basic"
  as auth_type -> an error is raised.

  That's why we proposed adding "Basic" as an authorized method for Kerberos.

  
  https://review.opendev.org/c/openstack/keystone/+/814770

  Patchset: 1efc0c5c6730c9066f47edf953bf805aec0fd3c0

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1947870/+subscriptions




[Yahoo-eng-team] [Bug 1947847] [NEW] Horizon selenium-headless/integration-tests jobs start failing after updating selenium version to 4.0.0

2021-10-20 Thread Vishal Manchanda
Public bug reported:

After updating the selenium version from 3.141.0 to 4.0.0, horizon
selenium-headless and integration tests start failing. More error logs
can be found here: https://paste.opendev.org/show/810092/

** Affects: horizon
 Importance: High
 Status: New

** Changed in: horizon
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1947847

Title:
  Horizon selenium-headless/integration-tests jobs start failing after
  updating selenium version to 4.0.0

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After updating the selenium version from 3.141.0 to 4.0.0, horizon
  selenium-headless and integration tests start failing. More error logs
  can be found here: https://paste.opendev.org/show/810092/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1947847/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1936972] Re: MAAS deploys fail if host has NIC w/ random MAC

2021-10-20 Thread Björn Tillenius
I don't think this is a feature request. Ignoring the NIC in MAAS might
be reasonable, although it's odd that the NIC doesn't have a MAC of its
own. Is that a hardware feature, or is it the driver that doesn't
surface the physical MAC?

Also, could you please provide the current output from the
machine-resources resources binary for that machine?

** Changed in: maas
   Status: Invalid => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1936972

Title:
  MAAS deploys fail if host has NIC w/ random MAC

Status in cloud-init:
  Incomplete
Status in curtin:
  New
Status in MAAS:
  Incomplete

Bug description:
  The Nvidia DGX A100 server includes a USB Redfish Host Interface NIC.
  This NIC apparently provides no MAC address of its own, so the driver
  generates a random MAC for it:

  ./drivers/net/usb/cdc_ether.c:

  static int usbnet_cdc_zte_bind(struct usbnet *dev, struct usb_interface *intf)
  {
          int status = usbnet_cdc_bind(dev, intf);

          if (!status && (dev->net->dev_addr[0] & 0x02))
                  eth_hw_addr_random(dev->net);

          return status;
  }

  This causes a problem with MAAS because, during deployment, MAAS sees
  this as a normal NIC and records the MAC. The post-install reboot then
  fails:

  [   43.652573] cloud-init[3761]: init.apply_network_config(bring_up=not args.local)
  [   43.700516] cloud-init[3761]:   File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 735, in apply_network_config
  [   43.724496] cloud-init[3761]: self.distro.networking.wait_for_physdevs(netcfg)
  [   43.740509] cloud-init[3761]:   File "/usr/lib/python3/dist-packages/cloudinit/distros/networking.py", line 177, in wait_for_physdevs
  [   43.764523] cloud-init[3761]: raise RuntimeError(msg)
  [   43.780511] cloud-init[3761]: RuntimeError: Not all expected physical devices present: {'fe:b8:63:69:9f:71'}

  I'm not sure what the best answer for MAAS is here, but here are some
  thoughts:

  1) Ignore all Redfish system interfaces. These are a connection between the host and the BMC, so they don't really have a use-case in the MAAS model AFAICT. These devices can be identified using the SMBIOS as described in the Redfish Host Interface Specification, section 8:
  https://www.dmtf.org/sites/default/files/standards/documents/DSP0270_1.3.0.pdf
  which can be read from within Linux using dmidecode.

  2) Ignore (or specially handle) all NICs with randomly generated MAC
  addresses. While this is the only time I've seen the random MAC with
  production server hardware, it is something I've seen on e.g. ARM
  development boards. Problem is, I don't know how to detect a generated
  MAC. I'd hoped the permanent MAC (ethtool -P) MAC would be NULL, but
  it seems to also be set to the generated MAC :(

  fyi, 2 workarounds for this that seem to work:
   1) Delete the NIC from the MAAS model in the MAAS UI after every commissioning.
   2) Use a tag's kernel_opts field to modprobe.blacklist the driver used for the Redfish NIC.
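  The kernel check quoted above keys off the locally-administered bit (0x02 in the first octet of the MAC), which eth_hw_addr_random() sets. That bit is easy to test from userspace, though as noted in thought 2 it is only a heuristic: it proves the MAC is locally administered, not that it was randomly generated.

  ```python
  def is_locally_administered(mac: str) -> bool:
      """True if the MAC's locally-administered bit (0x02 in the first
      octet) is set -- the same bit eth_hw_addr_random() sets on
      generated addresses. Heuristic only: intentionally assigned
      local MACs (e.g. OpenStack's fa:16:3e OUI) also have it set."""
      return bool(int(mac.split(":")[0], 16) & 0x02)

  print(is_locally_administered("fe:b8:63:69:9f:71"))  # True  (the NIC above)
  print(is_locally_administered("00:1a:2b:3c:4d:5e"))  # False (globally assigned)
  ```

  That false-positive risk (virtualized and intentionally local MACs would match too) is why the SMBIOS-based Redfish identification in thought 1 is probably the safer signal for MAAS.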

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1936972/+subscriptions

