[Yahoo-eng-team] [Bug 1999762] Re: Login failure after installing cloud-init and reboot on an ARM machine

2023-03-28 Thread Launchpad Bug Tracker
[Expired for cloud-init because there has been no activity for 60 days.]

** Changed in: cloud-init
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1999762

Title:
  Login failure after installing cloud-init and reboot on an ARM machine

Status in cloud-init:
  Expired

Bug description:
  As mentioned in the title, I installed cloud-init on an ARM machine with
  a freshly installed OS and then rebooted. After the reboot the account
  could no longer log in; I was using the root account.

  I also tested it on an x86 machine and the results were normal.

  This problem seems to be related to the following commit:
  
https://github.com/canonical/cloud-init/commit/7f85a3a5b4586ac7f21309aac4edc39e6ffea9ef

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1999762/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2013123] [NEW] nova blocks discard with virtio-blk

2023-03-28 Thread sean mooney
Public bug reported:

nova added support for discard (aka trim) in Mitaka when a cinder backend
reports that it is supported. At the time QEMU did not support trim/discard
when using virtio-blk. That gap was addressed in QEMU 4.0.0, but nova was
never adapted to allow trim when using virtio-blk. nova raised its minimum
QEMU version to 4.0.0 in Ussuri, so this should be supported in all nova
deployments from Ussuri onward, yet it is still blocked on master today.

** Affects: nova
 Importance: Undecided
 Assignee: sean mooney (sean-k-mooney)
 Status: New


** Tags: cinder libvirt volumes

** Changed in: nova
 Assignee: (unassigned) => sean mooney (sean-k-mooney)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2013123

Title:
  nova blocks discard with virtio-blk

Status in OpenStack Compute (nova):
  New

Bug description:
  nova added support for discard (aka trim) in Mitaka when a cinder backend
  reports that it is supported. At the time QEMU did not support
  trim/discard when using virtio-blk. That gap was addressed in QEMU 4.0.0,
  but nova was never adapted to allow trim when using virtio-blk. nova
  raised its minimum QEMU version to 4.0.0 in Ussuri, so this should be
  supported in all nova deployments from Ussuri onward, yet it is still
  blocked on master today.
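
  For illustration, a minimal sketch (hypothetical helper names such as
  discard_allowed, not nova's actual libvirt driver code) of the bus-based
  check that would need relaxing so virtio-blk disks can expose discard
  once QEMU >= 4.0.0 is guaranteed:

  MIN_QEMU_FOR_VIRTIO_BLK_DISCARD = (4, 0, 0)

  def discard_allowed(bus, qemu_version):
      """Return True if discard/unmap may be enabled for this disk bus."""
      if bus == "scsi":
          # virtio-scsi has supported discard for a long time.
          return True
      if bus == "virtio":
          # virtio-blk gained discard support in QEMU 4.0.0.
          return qemu_version >= MIN_QEMU_FOR_VIRTIO_BLK_DISCARD
      return False

  # Example: a Ussuri-or-newer deployment (QEMU >= 4.0.0) using virtio-blk.
  assert discard_allowed("virtio", (4, 2, 0)) is True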

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2013123/+subscriptions




[Yahoo-eng-team] [Bug 1755205] Re: ValueError: Field value 21 is invalid

2023-03-28 Thread Michal Nasiadka
** Changed in: kolla-ansible
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1755205

Title:
   ValueError: Field value 21 is invalid

Status in kolla-ansible:
  Invalid
Status in neutron:
  Invalid

Bug description:
  We just upgraded from Ocata to Pike and a new error now appears in the
  log files. We have not made any configuration changes, just upgraded the
  containers.

  We are running kolla-ansible.

  neutron-server.log:

  2018-03-12 16:13:09.298 53 DEBUG neutron_lib.callbacks.manager 
[req-8351b200-f441-425d-87a9-a29dbe01a729 - - - - -] Notify callbacks 
['neutron.services.segments.plugin.NovaSegmentNotifier._notify_host_addition_to_aggregate-16251827']
 for segment_host_mapping, after_create _notify_loop 
/var/lib/kolla/venv/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:167
  2018-03-12 16:13:09.335 53 DEBUG oslo_concurrency.lockutils [-] Lock 
"notifier-59bf1c54-b85b-4380-b08c-061c0cb242a2" acquired by 
"neutron.notifiers.batch_notifier.synced_send" :: waited 0.000s inner 
/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:270
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server 
[req-cf93a4c0-9462-41e5-9922-b9b55ef6d1e2 - - - - -] Exception during message 
handling: ValueError: Field value 21 is invalid
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", 
line 160, in _process_incoming
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 213, in dispatch
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 183, in _do_dispatch
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", 
line 232, in inner
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server return 
func(*args, **kwargs)
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/api/rpc/handlers/resources_rpc.py",
 line 143, in bulk_pull
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server 
**filter_kwargs)]
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/objects/base.py", line 
468, in get_objects
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server return 
[cls._load_object(context, db_obj) for db_obj in db_objs]
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/objects/base.py", line 
403, in _load_object
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server 
obj.from_db_object(db_obj)
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/objects/base.py", line 
346, in from_db_object
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server setattr(self, 
field, fields[field])
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
 line 72, in setter
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server field_value = 
field.coerce(self, name, value)
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_versionedobjects/fields.py",
 line 195, in coerce
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server return 
self._type.coerce(obj, attr, value)
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_versionedobjects/fields.py",
 line 317, in coerce
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server raise 
ValueError(msg)
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server ValueError: Field 
value 21 is invalid
  2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server
  2018-03-12 16:13:11.336 53 DEBUG oslo_concurrency.lockutils [-] Lock 
"notifier-59bf1c54-b85b-4380-b08c-061c0cb242a2" released by 
"neutron.notifiers.batch_notifier.synced_send" :: held 2.002s inner 
/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_concurre
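
  For context, here is a minimal illustrative sketch (EnumLikeField is a
  made-up mimic, not the oslo.versionedobjects implementation) of the kind
  of enum-style field coercion that produces this error when a database row
  carries a value outside the field's declared set of valid values:

  class EnumLikeField(object):
      def __init__(self, valid_values):
          self.valid_values = set(valid_values)

      def coerce(self, obj, attr, value):
          # Mirrors the failure above: "21" is not among the field's valid
          # values, so coercion rejects it with ValueError.
          if value not in self.valid_values:
              raise ValueError("Field value %s is invalid" % value)
          return value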

[Yahoo-eng-team] [Bug 1792493] Re: DVR and floating IPs broken in latest 7.0.0.0rc1?

2023-03-28 Thread Michal Nasiadka
** Changed in: kolla-ansible
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1792493

Title:
  DVR and floating IPs broken in latest 7.0.0.0rc1?

Status in kolla-ansible:
  Invalid
Status in neutron:
  Fix Released

Bug description:
  Kolla-Ansible 7.0.0.0rc1 with binary image build (since the source
  option is failing to build ceilometer images currently) on CentOS 7.5
  (latest updates)

  What worked previously does not appear to work anymore. I'm not sure
  whether this is due to an update in CentOS 7.5, OVS, or something else
  at this stage, but compute nodes are no longer replying to ARP requests
  asking who has the floating IP.

  For testing, I looked for the IP assigned to the FIP namespace's fg
  interface (in my case, fg-ba492724-bd).  This appears to be an IP on
  the ext-net network, but is not the floating IP assigned to a VM.
  Let's call this A.A.A.A and the floating IP B.B.B.B.

  I can tcpdump traffic on the physical port of the compute node and see
  the ARP requests for both A.A.A.A and B.B.B.B with respective pings
  from the Internet, but no ARP replies.

  I have attached a diagram showing, what I believe to be, the correct
  path for the packets.

  There appears to be something broken between my two arrows.

  Since tcpdump is not installed in the openvswitch_vswitchd container,
  nor is ovs-tcpdump, I can't figure out how to mirror and sniff ports
  on the br-ex and br-int bridges, at least in a containerized instance
  of OVS.  If anyone knows a way to do this, I would really appreciate
  the help.

  I haven't found any issues in the OVS configuration (ovs-vsctl show) -
  which matches the attached diagram.

  Has anyone else had issues?

  OVS returns this version info:
  ovs-vsctl (Open vSwitch) 2.9.0
  DB Schema 7.15.1

  in case it helps.

  Eric

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla-ansible/+bug/1792493/+subscriptions




[Yahoo-eng-team] [Bug 1951296] Re: OVN db sync script fails with OVN schema that has label column in ACL table

2023-03-28 Thread Edward Hope-Morley
** Changed in: cloud-archive/xena
   Status: New => Fix Released

** Changed in: cloud-archive/wallaby
   Status: New => Fix Released

** Changed in: cloud-archive/victoria
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1951296

Title:
  OVN db sync script fails with OVN schema that has label column in ACL
  table

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in Ubuntu Cloud Archive victoria series:
  Fix Released
Status in Ubuntu Cloud Archive wallaby series:
  Fix Released
Status in Ubuntu Cloud Archive xena series:
  Fix Released
Status in Ubuntu Cloud Archive yoga series:
  Fix Released
Status in Ubuntu Cloud Archive zed series:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  OVN introduced a new column, label, in the ACL table. When running the
  db-sync script, we compare the ACLs generated by the OVN mech driver
  from the Neutron DB with the actual ACLs in the OVN DB. Because of the
  new label column, every ACL looks like a new one, since that column
  differs from what Neutron generated, so the script attempts to create
  ACLs that already exist.

  b'Traceback (most recent call last):'
  b'  File "/usr/local/lib/python3.6/site-packages/neutron/tests/base.py", 
line 181, in func'
  b'return f(self, *args, **kwargs)'
  b'  File "/usr/local/lib/python3.6/site-packages/neutron/tests/base.py", 
line 181, in func'
  b'return f(self, *args, **kwargs)'
  b'  File 
"/home/cloud-user/networking-ovn/networking_ovn/tests/functional/test_ovn_db_sync.py",
 line 1547, in test_ovn_nb_sync_repair'
  b"self._test_ovn_nb_sync_helper('repair')"
  b'  File 
"/home/cloud-user/networking-ovn/networking_ovn/tests/functional/test_ovn_db_sync.py",
 line 1543, in _test_ovn_nb_sync_helper'
  b'self._sync_resources(mode)'
  b'  File 
"/home/cloud-user/networking-ovn/networking_ovn/tests/functional/test_ovn_db_sync.py",
 line 1523, in _sync_resources'
  b'nb_synchronizer.do_sync()'
  b'  File "/home/cloud-user/networking-ovn/networking_ovn/ovn_db_sync.py", 
line 104, in do_sync'
  b'self.sync_acls(ctx)'
  b'  File "/home/cloud-user/networking-ovn/networking_ovn/ovn_db_sync.py", 
line 288, in sync_acls'
  b'txn.add(self.ovn_api.pg_acl_add(**acla))'
  b'  File "/usr/lib64/python3.6/contextlib.py", line 88, in __exit__'
  b'next(self.gen)'
  b'  File 
"/home/cloud-user/networking-ovn/networking_ovn/ovsdb/impl_idl_ovn.py", line 
230, in transaction'
  b'yield t'
  b'  File "/usr/lib64/python3.6/contextlib.py", line 88, in __exit__'
  b'next(self.gen)'
  b'  File "/usr/local/lib/python3.6/site-packages/ovsdbapp/api.py", line 
110, in transaction'
  b'del self._nested_txns_map[cur_thread_id]'
  b'  File "/usr/local/lib/python3.6/site-packages/ovsdbapp/api.py", line 
61, in __exit__'
  b'self.result = self.commit()'
  b'  File 
"/usr/local/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py",
 line 65, in commit'
  b'raise result.ex'
  b'  File 
"/usr/local/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py",
 line 131, in run'
  b'txn.results.put(txn.do_commit())'
  b'  File 
"/usr/local/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py",
 line 93, in do_commit'
  b'command.run_idl(txn)'
  b'  File 
"/usr/local/lib/python3.6/site-packages/ovsdbapp/schema/ovn_northbound/commands.py",
 line 124, in run_idl'
  b'self.direction, self.priority, self.match))'
  b'RuntimeError: ACL (from-lport, 1001, inport == @neutron_pg_drop && ip) 
already exists'
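
  To illustrate the failure mode (a sketch with made-up names such as
  acl_key and NEUTRON_ACL_COLUMNS, not the actual sync code): if the
  comparison key includes every OVN column, an ACL whose only difference
  is the new label column looks missing and gets re-created, which
  triggers the "already exists" error above. One way to avoid that is to
  compare only the columns Neutron actually generates:

  NEUTRON_ACL_COLUMNS = ("direction", "priority", "match", "action")

  def acl_key(acl):
      # Compare ACLs only on the columns the mech driver fills in,
      # ignoring columns Neutron does not know about (such as "label").
      return tuple(acl.get(col) for col in NEUTRON_ACL_COLUMNS)

  def acls_missing_in_ovn(neutron_acls, ovn_acls):
      existing = {acl_key(acl) for acl in ovn_acls}
      return [acl for acl in neutron_acls if acl_key(acl) not in existing]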

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1951296/+subscriptions




[Yahoo-eng-team] [Bug 1889779] Re: Functional tests neutron.tests.functional.agent.linux.test_linuxbridge_arp_protect failing on Ubuntu 20.04

2023-03-28 Thread Michal Nasiadka
** Changed in: kolla-ansible
   Status: New => Invalid

** Changed in: kolla-ansible/victoria
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1889779

Title:
  Functional tests
  neutron.tests.functional.agent.linux.test_linuxbridge_arp_protect
  failing on Ubuntu 20.04

Status in kolla-ansible:
  Invalid
Status in kolla-ansible victoria series:
  Invalid
Status in neutron:
  Fix Released

Bug description:
  We are going to switch our testing to Ubuntu 20.04, and some functional
  tests from the module
  neutron.tests.functional.agent.linux.test_linuxbridge_arp_protect are
  failing there.

  Errors are like: http://paste.openstack.org/show/796490/

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla-ansible/+bug/1889779/+subscriptions




[Yahoo-eng-team] [Bug 1951296] Re: OVN db sync script fails with OVN schema that has label column in ACL table

2023-03-28 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/yoga
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/zed
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/wallaby
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/xena
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/zed
   Status: New => Fix Released

** Changed in: cloud-archive/yoga
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1951296

Title:
  OVN db sync script fails with OVN schema that has label column in ACL
  table

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in Ubuntu Cloud Archive victoria series:
  New
Status in Ubuntu Cloud Archive wallaby series:
  New
Status in Ubuntu Cloud Archive xena series:
  New
Status in Ubuntu Cloud Archive yoga series:
  Fix Released
Status in Ubuntu Cloud Archive zed series:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  OVN introduced a new column, label, in the ACL table. When running the
  db-sync script, we compare the ACLs generated by the OVN mech driver
  from the Neutron DB with the actual ACLs in the OVN DB. Because of the
  new label column, every ACL looks like a new one, since that column
  differs from what Neutron generated, so the script attempts to create
  ACLs that already exist.

  b'Traceback (most recent call last):'
  b'  File "/usr/local/lib/python3.6/site-packages/neutron/tests/base.py", 
line 181, in func'
  b'return f(self, *args, **kwargs)'
  b'  File "/usr/local/lib/python3.6/site-packages/neutron/tests/base.py", 
line 181, in func'
  b'return f(self, *args, **kwargs)'
  b'  File 
"/home/cloud-user/networking-ovn/networking_ovn/tests/functional/test_ovn_db_sync.py",
 line 1547, in test_ovn_nb_sync_repair'
  b"self._test_ovn_nb_sync_helper('repair')"
  b'  File 
"/home/cloud-user/networking-ovn/networking_ovn/tests/functional/test_ovn_db_sync.py",
 line 1543, in _test_ovn_nb_sync_helper'
  b'self._sync_resources(mode)'
  b'  File 
"/home/cloud-user/networking-ovn/networking_ovn/tests/functional/test_ovn_db_sync.py",
 line 1523, in _sync_resources'
  b'nb_synchronizer.do_sync()'
  b'  File "/home/cloud-user/networking-ovn/networking_ovn/ovn_db_sync.py", 
line 104, in do_sync'
  b'self.sync_acls(ctx)'
  b'  File "/home/cloud-user/networking-ovn/networking_ovn/ovn_db_sync.py", 
line 288, in sync_acls'
  b'txn.add(self.ovn_api.pg_acl_add(**acla))'
  b'  File "/usr/lib64/python3.6/contextlib.py", line 88, in __exit__'
  b'next(self.gen)'
  b'  File 
"/home/cloud-user/networking-ovn/networking_ovn/ovsdb/impl_idl_ovn.py", line 
230, in transaction'
  b'yield t'
  b'  File "/usr/lib64/python3.6/contextlib.py", line 88, in __exit__'
  b'next(self.gen)'
  b'  File "/usr/local/lib/python3.6/site-packages/ovsdbapp/api.py", line 
110, in transaction'
  b'del self._nested_txns_map[cur_thread_id]'
  b'  File "/usr/local/lib/python3.6/site-packages/ovsdbapp/api.py", line 
61, in __exit__'
  b'self.result = self.commit()'
  b'  File 
"/usr/local/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py",
 line 65, in commit'
  b'raise result.ex'
  b'  File 
"/usr/local/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py",
 line 131, in run'
  b'txn.results.put(txn.do_commit())'
  b'  File 
"/usr/local/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py",
 line 93, in do_commit'
  b'command.run_idl(txn)'
  b'  File 
"/usr/local/lib/python3.6/site-packages/ovsdbapp/schema/ovn_northbound/commands.py",
 line 124, in run_idl'
  b'self.direction, self.priority, self.match))'
  b'RuntimeError: ACL (from-lport, 1001, inport == @neutron_pg_drop && ip) 
already exists'

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1951296/+subscriptions




[Yahoo-eng-team] [Bug 2011800] Re: ovn qos extension: update router does not remove no longer present qos rules

2023-03-28 Thread Rodolfo Alonso
Hello Frode:

Please check [1]. There was a small mistake in the test. I've tested
manually with this code and it is working fine. I've also manually tested
adding, updating and removing QoS policies and rules on a router, and I
can see that the QoS records related to the GW port are correctly
updated.

Regards.

[1]https://review.opendev.org/c/openstack/neutron/+/877603/7..8/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/ovsdb/extensions/test_qos.py

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2011800

Title:
  ovn qos extension: update router does not remove no longer present qos
  rules

Status in neutron:
  Invalid

Bug description:
  Let's say you set up both ingress and egress QoS rules for a project
  and then create a router. If you subsequently remove one or both of the
  rules and update the router, these rules will not be removed from the
  OVN database.

  If you update QoS rules for a project and update the router, those
  rules are also updated, so that part works as expected.

  The expected outcome is that if one or both of the rules are removed, a
  call to update the router should remove those rules from the OVN
  database.
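
  A minimal sketch of the expected behaviour (sync_router_qos and the
  ovn_api.qos_add/qos_del calls are illustrative names only, not neutron's
  actual OVN QoS extension code): on router update, QoS entries whose
  rules no longer exist in the policy should be deleted from the OVN DB,
  not merely added or updated:

  def sync_router_qos(desired_rules, current_ovn_rules, ovn_api):
      desired = {rule["id"]: rule for rule in desired_rules}
      # Remove stale entries whose rules were deleted from the QoS policy.
      for rule_id in list(current_ovn_rules):
          if rule_id not in desired:
              ovn_api.qos_del(rule_id)
      # Add or update everything that should still be present.
      for rule in desired.values():
          ovn_api.qos_add(rule)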

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2011800/+subscriptions




[Yahoo-eng-team] [Bug 2013045] [NEW] CI: MacvtapAgentTestCase

2023-03-28 Thread Sahid Orentino
Public bug reported:

ft1.1: 
neutron.tests.functional.plugins.ml2.drivers.macvtap.agent.test_macvtap_neutron_agent.MacvtapAgentTestCase.test_get_all_devicestesttools.testresult.real._StringException:
 Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/macvtap/agent/test_macvtap_neutron_agent.py",
 line 47, in test_get_all_devices
self.assertEqual(set([macvtap.link.address]),
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py",
 line 394, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py",
 line 481, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: {'3a:83:7e:60:34:b6'} != 
{'66:81:56:14:7d:0d'}

https://zuul.opendev.org/t/openstack/build/235c115c538f4f84b839f15b628339b6/logs

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2013045

Title:
  CI: MacvtapAgentTestCase

Status in neutron:
  New

Bug description:
  ft1.1: 
neutron.tests.functional.plugins.ml2.drivers.macvtap.agent.test_macvtap_neutron_agent.MacvtapAgentTestCase.test_get_all_devicestesttools.testresult.real._StringException:
 Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/macvtap/agent/test_macvtap_neutron_agent.py",
 line 47, in test_get_all_devices
  self.assertEqual(set([macvtap.link.address]),
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py",
 line 394, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py",
 line 481, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: {'3a:83:7e:60:34:b6'} != 
{'66:81:56:14:7d:0d'}

  
https://zuul.opendev.org/t/openstack/build/235c115c538f4f84b839f15b628339b6/logs

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2013045/+subscriptions

