[Yahoo-eng-team] [Bug 1928164] [NEW] [OVN] Ovn-controller does not update the flow table when localport tap device is rebuilt

2021-05-11 Thread hh
Public bug reported:

After all VMs using network A on a hypervisor are deleted, the tap device for 
network A is also deleted.
When a VM is re-created on this hypervisor using network A, the tap device is 
rebuilt.
At this point the flow table has not been updated, so the VM's traffic cannot 
reach the localport. Restarting ovn-controller restores connectivity.

OVN version:
# ovn-controller --version
ovn-controller 21.03.0
Open vSwitch Library 2.15.90
OpenFlow versions 0x6:0x6
SB DB Schema 20.16.1

A trace with ovn-trace looks normal:

()[root@ovn-ovsdb-sb-1 /]# ovn-trace --summary 
5f79485f-682c-434a-8202-f6658fa30076 'inport == 
"643e3bc7-0b44-4929-8c4d-ec63f19097f8" && eth.src == fa:16:3e:55:4a:8f && 
ip4.src == 192.168.222.168 &&  eth.dst == fa:16:3e:4a:d6:bc &&  ip4.dst == 
169.254.169.254 && ip.ttl == 32'
# 
ip,reg14=0x16,vlan_tci=0x,dl_src=fa:16:3e:55:4a:8f,dl_dst=fa:16:3e:4a:d6:bc,nw_src=192.168.222.168,nw_dst=169.254.169.254,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=32
ingress(dp="jufeng", inport="instance-h0NdYw_jufeng_3fe45e35") {
next;
next;
reg0[0] = 1;
next;
ct_next;
ct_next(ct_state=est|trk /* default (use --ct to customize) */) {
reg0[8] = 1;
reg0[10] = 1;
next;
next;
outport = "8b01e3";
output;
egress(dp="jufeng", inport="instance-h0NdYw_jufeng_3fe45e35", 
outport="8b01e3") {
reg0[0] = 1;
next;
ct_next;
ct_next(ct_state=est|trk /* default (use --ct to customize) */) {
reg0[8] = 1;
reg0[10] = 1;
next;
output;
/* output to "8b01e3", type "localport" */;
};
};
};
};


A trace through the installed flows is not normal:

# ovs-appctl ofproto/trace br-int 
in_port=33,tcp,dl_src=fa:16:3e:55:4a:8f,dl_dst=fa:16:3e:4a:d6:bc,nw_src=192.168.222.168,nw_dst=169.254.169.254,tp_dst=80
Flow: 
tcp,in_port=33,vlan_tci=0x,dl_src=fa:16:3e:55:4a:8f,dl_dst=fa:16:3e:4a:d6:bc,nw_src=192.168.222.168,nw_dst=169.254.169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0

bridge("br-int")

 0. in_port=33, priority 100, cookie 0xebbdd9f7
set_field:0x19->reg13
set_field:0x5->reg11
set_field:0x3->reg12
set_field:0x3->metadata
set_field:0x16->reg14
resubmit(,8)
 8. reg14=0x16,metadata=0x3,dl_src=fa:16:3e:55:4a:8f, priority 50, cookie 
0x1a73fa10
resubmit(,9)
 9. ip,reg14=0x16,metadata=0x3,dl_src=fa:16:3e:55:4a:8f,nw_src=192.168.222.168, 
priority 90, cookie 0x2690070f
resubmit(,10)
10. metadata=0x3, priority 0, cookie 0x4f77990b
resubmit(,11)
11. metadata=0x3, priority 0, cookie 0xd7e42894
resubmit(,12)
12. metadata=0x3, priority 0, cookie 0xa5400341
resubmit(,13)
13. ip,metadata=0x3, priority 100, cookie 0x510177c2
set_field:0x1/0x1->xxreg0
resubmit(,14)
14. metadata=0x3, priority 0, cookie 0x5505c270
resubmit(,15)
15. ip,reg0=0x1/0x1,metadata=0x3, priority 100, cookie 0xf2eaa3a5
ct(table=16,zone=NXM_NX_REG13[0..15])
drop
 -> A clone of the packet is forked to recirculate. The forked pipeline 
will be resumed at table 16.
 -> Sets the packet to an untracked state, and clears all the conntrack 
fields.

Final flow: 
tcp,reg0=0x1,reg11=0x5,reg12=0x3,reg13=0x19,reg14=0x16,metadata=0x3,in_port=33,vlan_tci=0x,dl_src=fa:16:3e:55:4a:8f,dl_dst=fa:16:3e:4a:d6:bc,nw_src=192.168.222.168,nw_dst=169.254.169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0
Megaflow: 
recirc_id=0,eth,tcp,in_port=33,vlan_tci=0x/0x1fff,dl_src=fa:16:3e:55:4a:8f,dl_dst=fa:16:3e:4a:d6:bc,nw_src=192.168.222.168,nw_dst=128.0.0.0/2,nw_frag=no
Datapath actions: ct(zone=25),recirc(0xe8)

===
recirc(0xe8) - resume conntrack with default ct_state=trk|new (use --ct-next to 
customize)
===

Flow:
recirc_id=0xe8,ct_state=new|trk,ct_zone=25,eth,tcp,reg0=0x1,reg11=0x5,reg12=0x3,reg13=0x19,reg14=0x16,metadata=0x3,in_port=33,vlan_tci=0x,dl_src=fa:16:3e:55:4a:8f,dl_dst=fa:16:3e:4a:d6:bc,nw_src=192.168.222.168,nw_dst=169.254.169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0

bridge("br-int")

thaw
Resuming from table 16
16. ct_state=+new-est+trk,metadata=0x3, priority 7, cookie 0x6d37a2c
set_field:0x80/0x80->xxreg0

set_field:0x200/0x200->xxreg0
resubmit(,17)
17. ip,reg0=0x80/0x80,reg14=0x16,metadata=0x3, priority 2002, cookie 0x6cdd739a
set_field:0x2/0x2->xxreg0
resubmit(,18)
18. metadata=0x3, priority 0, cookie 0x20565915
resubmit(,19)
19. metadata=0x3, priority 0, cookie 0x5f4ccace
resubmit(,20)
20. metadata=0x3, priority 0, cookie 0x1172b12a
res

[Yahoo-eng-team] [Bug 1928007] Re: rbd_utils unit tests not running in gate

2021-05-11 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/790511
Committed: 
https://opendev.org/openstack/nova/commit/8b647f1b3f56879be221b3925570790a1e0e77f8
Submitter: "Zuul (22348)"
Branch: master

commit 8b647f1b3f56879be221b3925570790a1e0e77f8
Author: melanie witt 
Date:   Mon May 10 17:31:25 2021 +

rbd: Get rbd_utils unit tests running again

Awhile back, change I25baf5edd25d9e551686b7ed317a63fd778be533 moved
rbd_utils out from the libvirt driver and into a central location under
nova/storage. This move missed adding a __init__.py file under
nova/tests/unit/storage, so unit test discovery wasn't picking up the
rbd_utils tests and couldn't run them.

This adds a __init__.py file under nova/tests/unit/storage to get the
tests running again.

This also fixes a small bug introduced by change
I3032bbe6bd2d6acc9ba0f0cac4d00ed4b4464ceb in RbdTestCase.setUp() that
passed nonexistent self.images_rbd_pool to self.flags. It should be
self.rbd_pool.

Closes-Bug: #1928007

Change-Id: Ic03a5336abdced883f62f395690c0feac12075c8


** Changed in: nova
   Status: In Progress => Fix Released

** Changed in: nova/wallaby
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1928007

Title:
  rbd_utils unit tests not running in gate

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) victoria series:
  New
Status in OpenStack Compute (nova) wallaby series:
  In Progress

Bug description:
  Awhile back, change:

  https://review.opendev.org/c/openstack/nova/+/746904

  moved rbd_utils out from the libvirt driver and into a central
  location under nova/storage. This move missed adding a __init__.py
  file under nova/tests/unit/storage, so unit test discovery wasn't
  picking up the rbd_utils tests and couldn't run them.

  There was also a small bug introduced by change:

  https://review.opendev.org/c/openstack/nova/+/574301

  in RbdTestCase.setUp() that passed nonexistent self.images_rbd_pool to
  self.flags. It should be self.rbd_pool.
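The discovery failure described above is easy to reproduce standalone: `unittest`
discovery only recurses into directories that contain an `__init__.py`, so a test
module in a bare directory is silently skipped. A minimal sketch (illustrative
paths, not nova's actual tree):

```python
import os
import tempfile
import textwrap
import unittest


def count_discovered_tests(with_init):
    """Build storage/test_rbd.py under a temp dir and run discovery."""
    root = tempfile.mkdtemp()
    pkg = os.path.join(root, "storage")
    os.makedirs(pkg)
    if with_init:
        # Without this file, discovery never descends into storage/.
        open(os.path.join(pkg, "__init__.py"), "w").close()
    with open(os.path.join(pkg, "test_rbd.py"), "w") as f:
        f.write(textwrap.dedent("""\
            import unittest

            class RbdTestCase(unittest.TestCase):
                def test_something(self):
                    self.assertTrue(True)
        """))
    suite = unittest.TestLoader().discover(start_dir=root)
    return suite.countTestCases()
```

With `with_init=False` discovery reports zero tests and no error, which matches
how the rbd_utils tests dropped out of the gate unnoticed.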

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1928007/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1927020] Re: cloudconfig not writing maas data source

2021-05-11 Thread Lee Trager
As per the discourse thread, it looks like the bug is in curtin:
/etc/cloud/cloud.cfg.d/90_dpkg_maas.cfg is written and then deleted by
curtin.

Applying debconf selections
Running command ['unshare', '--fork', '--pid', '--', 'chroot', 
'/tmp/tmpihv32ogj/target', 'debconf-set-selections'] with allowed return codes 
[0] (capture=True)
Running command ['unshare', '--fork', '--pid', '--', 'chroot', 
'/tmp/tmpihv32ogj/target', 'dpkg-query', '--list'] with allowed return codes 
[0] (capture=True)
unconfiguring cloud-init
cleaning cloud-init config from: 
['/tmp/tmpihv32ogj/target/etc/cloud/cloud.cfg.d/90_dpkg_local_cloud_config.cfg',
 '/tmp/tmpihv32ogj/target/etc/cloud/cloud.cfg.d/90_dpkg_maas.cfg', 
'/tmp/tmpihv32ogj/target/etc/cloud/cloud.cfg.d/90_dpkg.cfg']

** Also affects: curtin
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1927020

Title:
  cloudconfig not writing maas data source

Status in cloud-init:
  Incomplete
Status in curtin:
  New

Bug description:
  further background https://discourse.maas.io/t/debian10-fails-on-
  final-reboot/4486

  I'm deploying a Debian 10 (buster) image using MAAS. No errors are
  reported by MAAS or the curtin install, but on the final boot cloud-init
  reports `Failed to load metadata and userdata`. Checking the config for
  the machine in MAAS shows all the cloudconfig needed to connect to MAAS,
  but the files with OAuth credentials etc. don't seem to have been copied
  to the target.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1927020/+subscriptions



[Yahoo-eng-team] [Bug 1674846] Re: using glance v2 api does not remove temporary files

2021-05-11 Thread Jeremy Stanley
The indicated fix merged during the Ussuri development cycle, so in
theory this bug should be valid only for stable/train and older
branches. Given stable/train is scheduled to enter extended maintenance
phase tomorrow, there is no opportunity to backport the fix to it and
issue a point release at this stage. The fix could still be backported
under extended maintenance if someone is interested in working on that,
but there would be no point in issuing a security advisory for it
because it will never appear in a point release for that series. As
such, I'm marking our security advisory task won't fix to reflect this.

** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1674846

Title:
  using glance v2 api does not remove temporary files

Status in OpenStack Dashboard (Horizon):
  Incomplete
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Currently if you are using Glance v2 with TemporaryUploadedFile
  (legacy mode?) the temporary file created on disk is never removed.
  This will eventually cause the machine to run out of tmp disk space.

  The issue is that if Glance v2 is used, the code never calls image_update 
which is responsible for deleting the temporary file.
  
https://github.com/openstack/horizon/blob/446e5aefb4354c9092d1cbc5ff258ee74558e769/openstack_dashboard/api/glance.py#L439
  
https://github.com/openstack/horizon/blob/446e5aefb4354c9092d1cbc5ff258ee74558e769/openstack_dashboard/api/glance.py#L349

  Either the function image_update should always be called, or if data
  is a TemporaryUploadedFile object, the call should always try to
  delete the temporary file once done.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1674846/+subscriptions



[Yahoo-eng-team] [Bug 1799588] Re: default paste_deploy.flavor is none, but config file text implies it is 'keystone' (was: non-admin users can see all tenants' images even when image is private)

2021-05-11 Thread Jeremy Stanley
Yes, since this bug is only valid for branches which are no longer in a
maintained state, there is little point in issuing an advisory.

** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1799588

Title:
  default paste_deploy.flavor is none, but config file text implies it
  is 'keystone' (was: non-admin users can see all tenants' images even
  when image is private)

Status in Glance:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  [root@vm013 glance]# cat /etc/redhat-release
  CentOS Linux release 7.5.1804 (Core)
  [root@vm013 glance]# rpm -qa |grep glance |sort
  openstack-glance-16.0.1-1.el7.noarch
  openstack-glance-doc-16.0.1-1.el7.noarch
  python2-glanceclient-2.10.0-1.el7.noarch
  python2-glance-store-0.23.0-1.el7.noarch
  python-glance-16.0.1-1.el7.noarch
  python-glanceclient-doc-2.10.0-1.el7.noarch
  [root@vm013 glance]# md5sum /etc/glance/policy.json
  a4f29d0f75bbc04f1d83a1abdf0fda6f  /etc/glance/policy.json

  I am running only Glance v2 API.

  In this demo, as an un-privileged user, I will list all glance images,
  from all tenants, and they are all marked 'private'.

  (as admin):
  [root@vm013 ~]# openstack role assignment list --effective --names |grep 
jonathan
  | user| jonathan@Default|   | ozoneaq@ndc| | 
False |

  (as jonathan):
  [root@vm013 ~]# . keystonerc_jonathan
  [root@vm013 ~]# printenv |grep OS_ |sort
  OS_AUTH_URL=https://keystone.gpcprod:5000/v3
  OS_CACERT=/etc/openldap/cacerts/gpcprod_root_ca.pem
  OS_IDENTITY_API_VERSION=3
  OS_PASSWORD=XX
  OS_PROJECT_DOMAIN_NAME=NDC
  OS_PROJECT_NAME=ozoneaq
  OS_USER_DOMAIN_NAME=Default
  OS_USERNAME=jonathan
  OS_VOLUME_API_VERSION=3

  [root@vm013 ~]# openstack image list
  
+--+---++
  | ID   | Name  | 
Status |
  
+--+---++
  | 0099a343-1376-49f4-85f9-795624fb2ce8 | CentOS-7-x86_64-GenericCloud-1808 | 
active |
  | 53d7c007-318b-4dad-b7cb-38b1dd31f884 | Ubuntu1604-180919 | 
active |
  | 482f52ca-e56c-4555-a0e3-93eb491db389 | Ubuntu1604-20181016   | 
active |
  | 212aaf3c-18f6-4327-8a11-c726c2e21780 | Ubuntu1804-20181016   | 
active |
  | 051d2fff-6b90-4321-9c64-c613f0ddf3da | Windows2016Std-20181003r4 | 
active |
  | ac6baa7c-fd2f-48e2-84e0-37a86f623e38 | Windows2016std-20181003r2 | 
active |
  | 2264c6b9-40e7-492d-a5bc-dd11a7b4ee10 | Windows2016std-20181004   | 
active |
  | 6d865748-ae7a-4c43-9d01-bc35c9002fd9 | Windows2016std-20181004r2 | 
active |
  | 26ba1766-aa67-4b1b-81cd-90dda8d41384 | WindowsServer2016-20180926| 
active |
  | 3fc3c155-c7a2-4556-a5d0-de7eff208d7d | WindowsStd2016-20181010   | 
active |
  | b6d161ca-e03b-46c5-95a0-5fe31723c5c7 | centos7-201810100 | 
active |
  | 8bdc33be-1eb5-429b-b0ca-682b24df45f0 | centos7-gi-build-test1| 
active |
  | 34a915b8-cca6-45c3-9348-5e15dace444f | cirros| 
active |
  | 84102d5c-1641-47bb-b727-a59e707e871c | keyshotslave-1604-snap2   | 
active |
  | cedf9ae7-6adc-44d4-b7cb-d5664ea3fef0 | keyshotslave1604-snap1| 
active |
  | be4dbd67-d56f-41dd-8378-8aa6ca064f55 | mm-cirros-test| 
active |
  | be67cf99-b545-4a91-a3d8-fe9f26a8854d | mm-cirros-test2   | 
active |
  | a8dfd028-5911-4178-a77d-bb3da8996372 | mm-test-image4| 
active |
  | b6d9d44d-2e3c-48a9-9bf5-b6fca20979f9 | testt2-snap   | 
active |
  | 1c401eea-0e6e-475b-9a46-ffbfb388ca35 | ubuntu1804-180919 | 
active |
  
+--+---++
  [root@vm013 ~]# openstack image show cirros
  
+--+-+
  | Field| Value



   |
  
+--+---

[Yahoo-eng-team] [Bug 1927926] Re: HA port is not cleared when cleaning up the L3 router namespace

2021-05-11 Thread fanxiujian
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1927926

Title:
  HA port is not cleared when cleaning up the L3 router namespace

Status in neutron:
  Invalid

Bug description:
  Before deleting a RouterNamespace, the related devices are deleted.
  But only devices with the prefixes 'rfp-', 'qr-' and 'qg-' are removed;
  'ha-' devices are not.

  Because the HA device is not deleted, the file descriptor count keeps
  growing; given enough time, OVS will fail due to the fd limit.

  
  class RouterNamespace(Namespace):

  def __init__(self, router_id, agent_conf, driver, use_ipv6):
  self.router_id = router_id
  name = self._get_ns_name(router_id)
  super(RouterNamespace, self).__init__(
  name, agent_conf, driver, use_ipv6)

  @classmethod
  def _get_ns_name(cls, router_id):
  return build_ns_name(NS_PREFIX, router_id)

  @check_ns_existence
  def delete(self):
  ns_ip = ip_lib.IPWrapper(namespace=self.name)
  for d in ns_ip.get_devices():
  if d.name.startswith(INTERNAL_DEV_PREFIX):
  # device is on default bridge
  self.driver.unplug(d.name, namespace=self.name,
 prefix=INTERNAL_DEV_PREFIX)
  elif d.name.startswith(ROUTER_2_FIP_DEV_PREFIX):
  ns_ip.del_veth(d.name)
  elif d.name.startswith(EXTERNAL_DEV_PREFIX):
  self.driver.unplug(
  d.name,
  namespace=self.name,
  prefix=EXTERNAL_DEV_PREFIX)

  super(RouterNamespace, self).delete()
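  The gap described above can be seen by reducing the delete() logic to its
  prefix filter (standalone illustration, not neutron's real objects): any
  device whose name matches none of the handled prefixes is simply left behind.

```python
# Prefix constants mirroring the ones used in delete() above.
INTERNAL_DEV_PREFIX = "qr-"
ROUTER_2_FIP_DEV_PREFIX = "rfp-"
EXTERNAL_DEV_PREFIX = "qg-"
HA_DEV_PREFIX = "ha-"  # the prefix the report says is never handled


def leftover_devices(device_names):
    """Return the devices the delete() above would leave in the namespace."""
    handled = (INTERNAL_DEV_PREFIX, ROUTER_2_FIP_DEV_PREFIX,
               EXTERNAL_DEV_PREFIX)
    return [name for name in device_names
            if not name.startswith(handled)]
```

  For a namespace holding qr-, rfp-, qg- and ha- devices, only the ha- device
  survives the cleanup, which is the leak the reporter describes.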

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1927926/+subscriptions



[Yahoo-eng-team] [Bug 1674846] Re: using glance v2 api does not remove temporary files

2021-05-11 Thread Vishal Manchanda
Hi, I tried to reproduce this bug on the master branch but did not succeed.
I think it is already fixed by [1].
When you create an image the Django (legacy) way, a temporary file is
created and then deleted once the upload is completed [2].


[1] https://review.opendev.org/c/openstack/horizon/+/703632
[2] 
https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/glance.py#L517

If you still face the same issue, please add more steps to reproduce it.

** Changed in: horizon
   Status: New => Incomplete

** No longer affects: horizon

** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: horizon
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1674846

Title:
  using glance v2 api does not remove temporary files

Status in OpenStack Dashboard (Horizon):
  Incomplete
Status in OpenStack Security Advisory:
  Incomplete

Bug description:
  Currently if you are using Glance v2 with TemporaryUploadedFile
  (legacy mode?) the temporary file created on disk is never removed.
  This will eventually cause the machine to run out of tmp disk space.

  The issue is that if Glance v2 is used, the code never calls image_update 
which is responsible for deleting the temporary file.
  
https://github.com/openstack/horizon/blob/446e5aefb4354c9092d1cbc5ff258ee74558e769/openstack_dashboard/api/glance.py#L439
  
https://github.com/openstack/horizon/blob/446e5aefb4354c9092d1cbc5ff258ee74558e769/openstack_dashboard/api/glance.py#L349

  Either the function image_update should always be called, or if data
  is a TemporaryUploadedFile object, the call should always try to
  delete the temporary file once done.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1674846/+subscriptions



[Yahoo-eng-team] [Bug 1799588] Re: default paste_deploy.flavor is none, but config file text implies it is 'keystone' (was: non-admin users can see all tenants' images even when image is private)

2021-05-11 Thread Erno Kuvaja
The fix for this was released back in 2018 it seems. Closing the bug.

** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1799588

Title:
  default paste_deploy.flavor is none, but config file text implies it
  is 'keystone' (was: non-admin users can see all tenants' images even
  when image is private)

Status in Glance:
  Fix Released
Status in OpenStack Security Advisory:
  Incomplete

Bug description:
  [root@vm013 glance]# cat /etc/redhat-release
  CentOS Linux release 7.5.1804 (Core)
  [root@vm013 glance]# rpm -qa |grep glance |sort
  openstack-glance-16.0.1-1.el7.noarch
  openstack-glance-doc-16.0.1-1.el7.noarch
  python2-glanceclient-2.10.0-1.el7.noarch
  python2-glance-store-0.23.0-1.el7.noarch
  python-glance-16.0.1-1.el7.noarch
  python-glanceclient-doc-2.10.0-1.el7.noarch
  [root@vm013 glance]# md5sum /etc/glance/policy.json
  a4f29d0f75bbc04f1d83a1abdf0fda6f  /etc/glance/policy.json

  I am running only Glance v2 API.

  In this demo, as an un-privileged user, I will list all glance images,
  from all tenants, and they are all marked 'private'.

  (as admin):
  [root@vm013 ~]# openstack role assignment list --effective --names |grep 
jonathan
  | user| jonathan@Default|   | ozoneaq@ndc| | 
False |

  (as jonathan):
  [root@vm013 ~]# . keystonerc_jonathan
  [root@vm013 ~]# printenv |grep OS_ |sort
  OS_AUTH_URL=https://keystone.gpcprod:5000/v3
  OS_CACERT=/etc/openldap/cacerts/gpcprod_root_ca.pem
  OS_IDENTITY_API_VERSION=3
  OS_PASSWORD=XX
  OS_PROJECT_DOMAIN_NAME=NDC
  OS_PROJECT_NAME=ozoneaq
  OS_USER_DOMAIN_NAME=Default
  OS_USERNAME=jonathan
  OS_VOLUME_API_VERSION=3

  [root@vm013 ~]# openstack image list
  
+--+---++
  | ID   | Name  | 
Status |
  
+--+---++
  | 0099a343-1376-49f4-85f9-795624fb2ce8 | CentOS-7-x86_64-GenericCloud-1808 | 
active |
  | 53d7c007-318b-4dad-b7cb-38b1dd31f884 | Ubuntu1604-180919 | 
active |
  | 482f52ca-e56c-4555-a0e3-93eb491db389 | Ubuntu1604-20181016   | 
active |
  | 212aaf3c-18f6-4327-8a11-c726c2e21780 | Ubuntu1804-20181016   | 
active |
  | 051d2fff-6b90-4321-9c64-c613f0ddf3da | Windows2016Std-20181003r4 | 
active |
  | ac6baa7c-fd2f-48e2-84e0-37a86f623e38 | Windows2016std-20181003r2 | 
active |
  | 2264c6b9-40e7-492d-a5bc-dd11a7b4ee10 | Windows2016std-20181004   | 
active |
  | 6d865748-ae7a-4c43-9d01-bc35c9002fd9 | Windows2016std-20181004r2 | 
active |
  | 26ba1766-aa67-4b1b-81cd-90dda8d41384 | WindowsServer2016-20180926| 
active |
  | 3fc3c155-c7a2-4556-a5d0-de7eff208d7d | WindowsStd2016-20181010   | 
active |
  | b6d161ca-e03b-46c5-95a0-5fe31723c5c7 | centos7-201810100 | 
active |
  | 8bdc33be-1eb5-429b-b0ca-682b24df45f0 | centos7-gi-build-test1| 
active |
  | 34a915b8-cca6-45c3-9348-5e15dace444f | cirros| 
active |
  | 84102d5c-1641-47bb-b727-a59e707e871c | keyshotslave-1604-snap2   | 
active |
  | cedf9ae7-6adc-44d4-b7cb-d5664ea3fef0 | keyshotslave1604-snap1| 
active |
  | be4dbd67-d56f-41dd-8378-8aa6ca064f55 | mm-cirros-test| 
active |
  | be67cf99-b545-4a91-a3d8-fe9f26a8854d | mm-cirros-test2   | 
active |
  | a8dfd028-5911-4178-a77d-bb3da8996372 | mm-test-image4| 
active |
  | b6d9d44d-2e3c-48a9-9bf5-b6fca20979f9 | testt2-snap   | 
active |
  | 1c401eea-0e6e-475b-9a46-ffbfb388ca35 | ubuntu1804-180919 | 
active |
  
+--+---++
  [root@vm013 ~]# openstack image show cirros
  
+--+-+
  | Field| Value



   |
  
+--+--

[Yahoo-eng-team] [Bug 1928063] [NEW] SEV enabled instance unable to hard reboot

2021-05-11 Thread Lee Yarwood
Public bug reported:

Description
===

Hard rebooting a SEV enabled instance fails with a NotImplementedError
raised as the image_meta stashed in the system_metadata of the instance
doesn't contain the image name or id:

2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 3739, in 
_reboot_instance
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server 
bad_volumes_callback=bad_volumes_callback)
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 3292, in 
reboot
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server block_device_info)
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 3386, in 
_hard_reboot
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server mdevs=mdevs)
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 6331, in 
_get_guest_xml
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server context, mdevs)
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 5949, in 
_get_guest_config
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server flavor, 
image_meta)
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 5504, in 
_get_guest_memory_backing_config
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server if 
self._sev_enabled(flavor, image_meta):
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 6117, in 
_sev_enabled
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server mach_type)
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/nova/virt/hardware.py", line 1271, in 
get_mem_encryption_constraint
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server image_meta.name)
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/oslo_versionedobjects/base.py", line 67, in 
getter
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server 
self.obj_load_attr(name)
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/oslo_versionedobjects/base.py", line 603, in 
obj_load_attr
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server _("Cannot load 
'%s' in the base class") % attrname)
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server NotImplementedError: 
Cannot load 'name' in the base class
2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server
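A stripped-down illustration of the failure (hypothetical classes, not nova's
real ImageMeta): accessing an unset field on a versioned-object-style class
raises NotImplementedError, so code touching image_meta.name needs to check
whether the field is set first.

```python
class FakeImageMeta:
    """Mimics oslo.versionedobjects' behaviour for unset fields."""

    def __init__(self, **fields):
        self._fields = fields

    def obj_attr_is_set(self, name):
        return name in self._fields

    def __getattr__(self, name):
        # Only called for attributes not found normally; unset fields
        # reproduce the "Cannot load ... in the base class" error above.
        fields = self.__dict__.get("_fields", {})
        if name in fields:
            return fields[name]
        raise NotImplementedError(
            "Cannot load '%s' in the base class" % name)


def image_name_or_placeholder(image_meta):
    """Defensive access: fall back when 'name' was never stashed."""
    if image_meta.obj_attr_is_set("name"):
        return image_meta.name
    return "<unset>"
```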

Steps to reproduce
==

Hard reboot a SEV enabled instance.


Expected result
===

Instance hard reboots as expected.

Actual result
=

Instance fails to reboot with a NotImplementedError exception raised.


Environment
===
1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/

   stable/train but likely the same on master.

2. Which hypervisor did you use?
   (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
   What's the version of that?

   libvirt + KVM
  
3. Which storage type did you use?
   (For example: Ceph, LVM, GPFS, ...)
   What's the version of that?

   N/A

4. Which networking type did you use?
   (For example: nova-network, Neutron with OpenVSwitch, ...)

   N/A

Logs & Configs
==

** Affects: nova
 Importance: Undecided
 Assignee: Lee Yarwood (lyarwood)
 Status: New


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1928063

Title:
  SEV enabled instance unable to hard reboot

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  Hard rebooting a SEV enabled instance fails with a NotImplementedError
  raised as the image_meta stashed in the system_metadata of the
  instance doesn't contain the image name or id:

  2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 3739, in 
_reboot_instance
  2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server 
bad_volumes_callback=bad_volumes_callback)
  2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 3292, in 
reboot
  2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server 
block_device_info)
  2021-05-10 16:50:24.847 7 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/nova

[Yahoo-eng-team] [Bug 1927494] Re: [designate] admin_* options are based on v2 identity instead of v3

2021-05-11 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/790080
Committed: 
https://opendev.org/openstack/neutron/commit/adfd853267ca529816f4c17a145d9e70e8abfac5
Submitter: "Zuul (22348)"
Branch: master

commit adfd853267ca529816f4c17a145d9e70e8abfac5
Author: Takashi Kajinami 
Date:   Thu May 6 23:41:04 2021 +0900

Deprecate [designate] admin_* parameters

The admin_* parameters implement the same functionality as the
keystoneauth parameters, although they don't provide all parameters for
Keystone v3 identity and are still based on Keystone v2 identity.
This change deprecates these parameters so that we can remove
such redundant and outdated definitions in a future release.

Closes-Bug: #1927494
Change-Id: I6294098008fbebb2e64922b3aaa085c1361d48a2


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1927494

Title:
  [designate] admin_* options are based on v2 identity instead of v3

Status in neutron:
  Fix Released

Bug description:
  Currently neutron accepts the following parameters to set up the identity
  used for connecting to designate in an admin context, but the list doesn't
  include domain parameters, and these parameters appear to be based on
  Keystone v2 identity instead of Keystone v3 identity.

  - admin_username
  - admin_password
  - admin_tenant_id
  - admin_tenant_name
  - admin_auth_url

  Also, these parameters are a kind of duplicates of keystoneauth
  parameters and there is no clear reason why we need to define and use
  these parameters in neutron layer.
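  For comparison, the keystoneauth-style configuration the commit points to
  would look roughly like this (option names assumed from the standard
  keystoneauth password plugin; check your release's configuration reference
  before copying):

```ini
[designate]
# v3-capable keystoneauth options replacing the deprecated admin_* ones.
auth_type = password
auth_url = http://controller:5000/v3
username = neutron
password = REDACTED
user_domain_name = Default
project_name = service
project_domain_name = Default
```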

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1927494/+subscriptions



[Yahoo-eng-team] [Bug 1849098] Re: ovs agent is stuck with OVSFWTagNotFound when dealing with unbound port

2021-05-11 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1849098

Title:
  ovs agent is stuck with OVSFWTagNotFound when dealing with unbound
  port

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive queens series:
  New
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Bionic:
  Fix Committed

Bug description:
  [Impact]

  If a port somehow becomes unbound, neutron-openvswitch-agent raises
  OVSFWTagNotFound, and creating new instances then fails.

  [Test Plan]
  1. deploy bionic openstack env
  2. launch one instance
  3. modify neutron-openvswitch-agent code inside nova-compute
  - https://pastebin.ubuntu.com/p/nBRKkXmjx8/
  4. restart neutron-openvswitch-agent
  5. check if there are a lot of 'Cannot get tag for port ..' messages
  6. launch another instance.
  7. It fails after vif_plugging_timeout, with "virtual interface creation 
failed"
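  The error seen in step 5 boils down to the firewall deriving the VLAN tag
  from a port's other_config; for an unbound port that dict is empty and the
  lookup raises. A standalone sketch (illustrative names, not neutron's
  actual classes):

```python
class OVSFWTagNotFound(Exception):
    """Raised when a port's other_config carries no VLAN tag."""


def get_tag_from_other_config(port_name, other_config):
    """Look up the tag the firewall needs; an empty dict means unbound."""
    try:
        return int(other_config["tag"])
    except (KeyError, TypeError, ValueError):
        raise OVSFWTagNotFound(
            "Cannot get tag for port %s from its other_config: %s"
            % (port_name, other_config))
```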

  [Where problems could occur]
  While no regressions are expected, if they do occur it would be when
  getting or creating a VIF port.

  [Others]

  Original description.

  neutron-openvswitch-agent meets unbound port:

  2019-10-17 11:32:21.868 135 WARNING
  neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-
  aae68b42-a99f-4bb3-bcf6-a6d3c4ca9e31 - - - - -] Device
  ef34215f-e099-4fd0-935f-c9a42951d166 not defined on plugin or binding
  failed

  Later when applying firewall rules:

  2019-10-17 11:32:21.901 135 INFO neutron.agent.securitygroups_rpc 
[req-aae68b42-a99f-4bb3-bcf6-a6d3c4ca9e31 - - - - -] Preparing filters for 
devices {'ef34215f-e099-4fd0-935f-c9a42951d166', 
'e9c97cf0-1a5e-4d77-b57b-0ba474d12e29', 'fff1bb24-6423-4486-87c4-1fe17c552cca', 
'2e20f9ee-bcb5-445c-b31f-d70d276d45c9', '03a60047-cb07-42a4-8b49-619d5982a9bd', 
'a452cea2-deaf-4411-bbae-ce83870cbad4', '79b03e5c-9be0-4808-9784-cb4878c3dbd5', 
'9b971e75-3c1b-463d-88cf-3f298105fa6e'}
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-aae68b42-a99f-4bb3-bcf6-a6d3c4ca9e31 - - - - -] Error while processing VIF 
ports: neutron.agent.linux.openvswitch_firewall.exceptions.OVSFWTagNotFound: 
Cannot get tag for port o-hm0 from its other_config: {}
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/neutron/agent/linux/openvswitch_firewall/firewall.py",
 line 530, in get_or_create_ofport
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent of_port = 
self.sg_port_map.ports[port_id]
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent KeyError: 
'ef34215f-e099-4fd0-935f-c9a42951d166'
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent During handling 
of the above exception, another exception occurred:
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/neutron/agent/linux/openvswitch_firewall/firewall.py",
 line 81, in get_tag_from_other_config
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent return 
int(other_config['tag'])
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent KeyError: 'tag'
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent During handling 
of the above exception, another exception occurred:
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2280, in rpc_loop
  2019-10-17 11:32:21.906 135 ERROR 
neut
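The traceback above boils down to a small failure chain. The sketch below is a simplified, hypothetical model of it, not the actual neutron code: the names (get_or_create_ofport, get_tag_from_other_config, OVSFWTagNotFound) mirror the code paths in the log, but the bodies are illustrative only.

```python
class OVSFWTagNotFound(Exception):
    pass


def get_tag_from_other_config(other_config, port_name):
    """Return the VLAN tag stored in the OVS port's other_config column."""
    try:
        return int(other_config['tag'])
    except (KeyError, TypeError, ValueError):
        # An unbound port has no 'tag' key, so the lookup fails --
        # this matches the "Cannot get tag for port o-hm0 from its
        # other_config: {}" error in the log above.
        raise OVSFWTagNotFound(
            'Cannot get tag for port %s from its other_config: %s'
            % (port_name, other_config))


def get_or_create_ofport(sg_port_map, port_id, other_config, port_name):
    """Look up a cached port tag; on a cache miss, fall back to other_config."""
    try:
        # First KeyError in the traceback: the port is not in the map.
        return sg_port_map[port_id]
    except KeyError:
        # Fallback to the OVSDB other_config column, which is empty ({})
        # for an unbound port, so OVSFWTagNotFound propagates up and the
        # whole "Preparing filters for devices" batch fails.
        tag = get_tag_from_other_config(other_config, port_name)
        sg_port_map[port_id] = tag
        return tag
```

Because the exception escapes the per-port loop, one unbound port (o-hm0 here) blocks filter preparation for every other device in the batch.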

[Yahoo-eng-team] [Bug 1927249] Re: Neutron_Tempest_Plugin: Create Fake Network for Negative Neutron Test cases

2021-05-11 Thread Martin Kopec
Moving to neutron as it's related to the neutron-tempest-plugin.

** Project changed: tempest => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1927249

Title:
  Neutron_Tempest_Plugin: Create Fake Network for Negative Neutron Test
  cases

Status in neutron:
  New

Bug description:
  There are some test cases which are modifying the test network UUID:

  https://github.com/openstack/neutron-tempest-plugin/blob/5ad4e821006b9ae2bbd5aee18f47b8764a2e2f9c/neutron_tempest_plugin/api/test_ports_negative.py#L56

  Such updates cause network leaks: the cleanup deletes the incorrect
  network ID, and the actual network is left behind (never deleted).

  Update the logic to deep-copy the network object, or to restore the
  network ID after the test case has run.
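A minimal sketch of the proposed fix. The helper name and the test-case shape below are hypothetical, not actual neutron-tempest-plugin code; the idea is simply to mutate only a deep copy of the network, so the shared object keeps its real ID and cleanup deletes the right network.

```python
import copy
import uuid


def run_negative_port_test(network, create_port):
    """Run a negative test against a fake network ID without mutating
    the shared network object that cleanup will use later."""
    fake_network = copy.deepcopy(network)   # deep-copy, as suggested above
    fake_network['id'] = str(uuid.uuid4())  # nonexistent UUID for the test
    try:
        create_port(fake_network)           # expected to fail server-side
    except Exception:
        pass                                # the failure is the point
    # The original dict is untouched, so cleanup sees the real network ID.
    return network['id']
```

Equivalently, the test could save the original ID and restore it in an addCleanup/finally block; the deep-copy variant avoids the restore step entirely.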

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1927249/+subscriptions



[Yahoo-eng-team] [Bug 1927249] [NEW] Neutron_Tempest_Plugin: Create Fake Network for Negative Neutron Test cases

2021-05-11 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

There are some test cases which are modifying the test network UUID:

https://github.com/openstack/neutron-tempest-plugin/blob/5ad4e821006b9ae2bbd5aee18f47b8764a2e2f9c/neutron_tempest_plugin/api/test_ports_negative.py#L56

Such updates cause network leaks: the cleanup deletes the incorrect network ID, and the actual network is left behind (never deleted).

Update the logic to deep-copy the network object, or to restore the network ID after the test case has run.

** Affects: neutron
 Importance: Undecided
 Assignee: Sam Kumar (sp810x)
 Status: New

-- 
Neutron_Tempest_Plugin: Create Fake Network for Negative Neutron Test cases
https://bugs.launchpad.net/bugs/1927249
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.
