[Yahoo-eng-team] [Bug 1921399] Re: check_instance_shared_storage RPC call is broken

2021-03-25 Thread Balazs Gibizer
Fix merged to master, and will be part of Wallaby RC1.

** Changed in: nova
   Status: In Progress => Fix Released

** Changed in: nova
Milestone: xena-rc1 => wallaby-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1921399

Title:
  check_instance_shared_storage RPC call is broken

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  We broke check_instance_shared_storage() in this change:

  
https://review.opendev.org/c/openstack/nova/+/761452/13..15/nova/compute/rpcapi.py

  Where we re-ordered the rpcapi client signature without adjusting the
  caller. This leads to this failure:

  Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006
  nova-compute[8570]: ERROR nova.compute.manager [instance: 20d48d76
  -f93c-4b3c-90a8-cd7f654b28ef] Traceback (most recent call last):

  Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006
  nova-compute[8570]: ERROR nova.compute.manager [instance: 20d48d76
  -f93c-4b3c-90a8-cd7f654b28ef]   File
  "/opt/stack/new/nova/nova/compute/manager.py", line 797, in
  _is_instance_storage_shared

  Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006
  nova-compute[8570]: ERROR nova.compute.manager [instance: 20d48d76
  -f93c-4b3c-90a8-cd7f654b28ef] instance, data, host=host))

  Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006
  nova-compute[8570]: ERROR nova.compute.manager [instance: 20d48d76
  -f93c-4b3c-90a8-cd7f654b28ef]   File
  "/opt/stack/new/nova/nova/compute/rpcapi.py", line 618, in
  check_instance_shared_storage

  Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006
  nova-compute[8570]: ERROR nova.compute.manager [instance: 20d48d76
  -f93c-4b3c-90a8-cd7f654b28ef] return cctxt.call(ctxt,
  'check_instance_shared_storage', **msg_args)

  Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006
  nova-compute[8570]: ERROR nova.compute.manager [instance: 20d48d76
  -f93c-4b3c-90a8-cd7f654b28ef]   File "/usr/local/lib/python3.6/dist-
  packages/oslo_messaging/rpc/client.py", line 179, in call

  Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006
  nova-compute[8570]: ERROR nova.compute.manager [instance: 20d48d76
  -f93c-4b3c-90a8-cd7f654b28ef]
  transport_options=self.transport_options)

  Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006
  nova-compute[8570]: ERROR nova.compute.manager [instance: 20d48d76
  -f93c-4b3c-90a8-cd7f654b28ef]   File "/usr/local/lib/python3.6/dist-
  packages/oslo_messaging/transport.py", line 128, in _send

  Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006
  nova-compute[8570]: ERROR nova.compute.manager [instance: 20d48d76
  -f93c-4b3c-90a8-cd7f654b28ef] transport_options=transport_options)

  Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006
  nova-compute[8570]: ERROR nova.compute.manager [instance: 20d48d76
  -f93c-4b3c-90a8-cd7f654b28ef]   File "/usr/local/lib/python3.6/dist-
  packages/oslo_messaging/_drivers/amqpdriver.py", line 682, in send

  Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006
  nova-compute[8570]: ERROR nova.compute.manager [instance: 20d48d76
  -f93c-4b3c-90a8-cd7f654b28ef] transport_options=transport_options)

  Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006
  nova-compute[8570]: ERROR nova.compute.manager [instance: 20d48d76
  -f93c-4b3c-90a8-cd7f654b28ef]   File "/usr/local/lib/python3.6/dist-
  packages/oslo_messaging/_drivers/amqpdriver.py", line 672, in _send

  Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006
  nova-compute[8570]: ERROR nova.compute.manager [instance: 20d48d76
  -f93c-4b3c-90a8-cd7f654b28ef] raise result

  Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006
  nova-compute[8570]: ERROR nova.compute.manager [instance: 20d48d76
  -f93c-4b3c-90a8-cd7f654b28ef] AttributeError: 'Instance' object has no
  attribute 'filename'

  Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006
  nova-compute[8570]: ERROR nova.compute.manager [instance: 20d48d76
  -f93c-4b3c-90a8-cd7f654b28ef] Traceback (most recent call last):

  Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006
  nova-compute[8570]: ERROR nova.compute.manager [instance: 20d48d76
  -f93c-4b3c-90a8-cd7f654b28ef]

  Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006
  nova-compute[8570]: ERROR nova.compute.manager [instance: 20d48d76
  -f93c-4b3c-90a8-cd7f654b28ef]   File "/usr/local/lib/python3.6/dist-
  packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming

  Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006
  nova-compute[8570]: ERROR nova.compute.manager [instance: 20d48d76
  -f93c-4b3c-90a8-cd7f654b28ef] res =
  self.dispatcher.dispatch(message)

  Mar 25 13:46:28.041587 ubuntu-bio
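
A minimal, hypothetical sketch (not the actual nova code) of the failure mode
described above: when the rpcapi client method's positional parameters are
re-ordered but the caller keeps the old argument order, the Instance object
lands in the slot the server treats as shared-storage "data", and accessing
data.filename raises the AttributeError seen in the traceback.

    class Instance:
        pass

    class SharedStorageData:
        filename = '/var/lib/nova/instances/check'

    def check_shared_storage_old(ctxt, instance, data, host=None):
        # old parameter order: (instance, data)
        return data.filename

    def check_shared_storage_new(ctxt, data, instance=None, host=None):
        # re-ordered parameter list: "data" now comes first
        return data.filename

    ctxt, instance, data = object(), Instance(), SharedStorageData()
    print(check_shared_storage_old(ctxt, instance, data))   # works
    try:
        # caller unchanged, still passing (instance, data) positionally
        check_shared_storage_new(ctxt, instance, data)
    except AttributeError as err:
        print(err)   # 'Instance' object has no attribute 'filename'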

[Yahoo-eng-team] [Bug 1921448] [NEW] Release Note containing possible typo

2021-03-25 Thread Yuko Katabami
Public bug reported:

While translating, I've come across the following sentence which I could
not comprehend:

``Users`` tab displaying all users which have roles on the project (and
their roles on it), including users which have roles on the project
throw their membership to a group.

Found the source here:
https://github.com/openstack/horizon/blob/73f6dcebd5d81133f2604fa224f572e46da443df/releasenotes/notes/bug-1785263-46edf7313d833b4c.yaml

Is "throw" possibly a typo for "through"?
If so, my interpretation is as below. Could you please confirm if I understand 
it correctly?

``Users`` tab now displays all users who have roles in the project, as
well as their roles in the project. This includes users who have roles
in the project through their membership to a group.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: documentation

** Description changed:

- While translating, I've came across the following sentence which I could
+ While translating, I've come across the following sentence which I could
  not comprehend:
  
  ``Users`` tab displaying all users which have roles on the project (and
  their roles on it), including users which have roles on the project
  throw their membership to a group.
  
  Found the source here:
  
https://github.com/openstack/horizon/blob/73f6dcebd5d81133f2604fa224f572e46da443df/releasenotes/notes/bug-1785263-46edf7313d833b4c.yaml
  
  Is "throw" possibly a typo for "through"?
  If so, my interpretation is as below. Could you please confirm if I 
understand it correctly?
  
  ``Users`` tab now displays all users who have roles in the project, as
  well as their roles in the project. This includes users who have roles
  in the project through their membership to a group.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1921448

Title:
  Release Note containing possible typo

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  While translating, I've come across the following sentence which I
  could not comprehend:

  ``Users`` tab displaying all users which have roles on the project
  (and their roles on it), including users which have roles on the
  project throw their membership to a group.

  Found the source here:
  
https://github.com/openstack/horizon/blob/73f6dcebd5d81133f2604fa224f572e46da443df/releasenotes/notes/bug-1785263-46edf7313d833b4c.yaml

  Is "throw" possibly a typo for "through"?
  If so, my interpretation is as below. Could you please confirm if I 
understand it correctly?

  ``Users`` tab now displays all users who have roles in the project, as
  well as their roles in the project. This includes users who have roles
  in the project through their membership to a group.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1921448/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1921429] [NEW] Ubuntu Pro 18.04 on Azure does not configure azure.archive in sources.list

2021-03-25 Thread Aaron Whitehouse
Public bug reported:

On "Ubuntu Server 18.04 LTS - Gen 1" or "Ubuntu Server 18.04 LTS - Gen
2" on Azure, the sources.list is correctly modified to use the Azure
archive mirrors:


## Note, this file is written by cloud-init on first boot of an instance
[...] 
deb http://azure.archive.ubuntu.com/ubuntu/ bionic main restricted
[...]
deb http://azure.archive.ubuntu.com/ubuntu/ bionic-updates main restricted
[...]

On "Ubuntu Pro 18.04 LTS - Gen 2" (Publisher canonical Offer 
0001-com-ubuntu-pro-bionic Plan
pro-18_04-lts-gen2 VM generation V2), there is no Note at the top sources.list 
saying that it has been written by cloud-init and it has not been correctly 
updated:

deb http://archive.ubuntu.com/ubuntu/ bionic main restricted
[...]
deb http://archive.ubuntu.com/ubuntu/ bionic-updates main restricted
[...]


East US (Zone 1) in both cases.

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Attachment added: "Ubuntu Server 18.04 LTS - Gen 1"
   
https://bugs.launchpad.net/bugs/1921429/+attachment/5481010/+files/18-04-azure-sources.txt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1921429

Title:
  Ubuntu Pro 18.04 on Azure does not configure azure.archive in
  sources.list

Status in cloud-init:
  New

Bug description:
  On "Ubuntu Server 18.04 LTS - Gen 1" or "Ubuntu Server 18.04 LTS - Gen
  2" on Azure, the sources.list is correctly modified to use the Azure
  archive mirrors:

  
  ## Note, this file is written by cloud-init on first boot of an instance
  [...] 
  deb http://azure.archive.ubuntu.com/ubuntu/ bionic main restricted
  [...]
  deb http://azure.archive.ubuntu.com/ubuntu/ bionic-updates main restricted
  [...]

  On "Ubuntu Pro 18.04 LTS - Gen 2" (Publisher canonical Offer 
0001-com-ubuntu-pro-bionic Plan
  pro-18_04-lts-gen2 VM generation V2), there is no Note at the top 
sources.list saying that it has been written by cloud-init and it has not been 
correctly updated:

  deb http://archive.ubuntu.com/ubuntu/ bionic main restricted
  [...]
  deb http://archive.ubuntu.com/ubuntu/ bionic-updates main restricted
  [...]

  
  East US (Zone 1) in both cases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1921429/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1921414] [NEW] Designate PTR record creation results in in-addr.arpa. zone owned by invalid project ID

2021-03-25 Thread Drew Freiberger
Public bug reported:

When Neutron creates PTR records during Floating IP attachment on Stein, we
have witnessed that the resultant new X.Y.Z.in-addr.arpa. zone is owned by
project ID ----.

This creates issues for record updates for future FIP attachments from
Neutron, resulting in API errors.

The workaround is to change the project ID to the services project_id in the
services_domain.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1921414

Title:
  Designate PTR record creation results in in-addr.arpa. zone owned by
  invalid project ID

Status in neutron:
  New

Bug description:
  When Neutron creates PTR records during Floating IP attachment on Stein,
  we have witnessed that the resultant new X.Y.Z.in-addr.arpa. zone is owned
  by project ID ----.

  This creates issues for record updates for future FIP attachments from
  Neutron, resulting in API errors.

  The workaround is to change the project ID to the services project_id in
  the services_domain.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1921414/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1921399] [NEW] check_instance_shared_storage RPC call is broken

2021-03-25 Thread Dan Smith
Public bug reported:

We broke check_instance_shared_storage() in this change:

https://review.opendev.org/c/openstack/nova/+/761452/13..15/nova/compute/rpcapi.py

Where we re-ordered the rpcapi client signature without adjusting the
caller. This leads to this failure:

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef] Traceback (most recent call last):

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef]   File "/opt/stack/new/nova/nova/compute/manager.py",
line 797, in _is_instance_storage_shared

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef] instance, data, host=host))

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef]   File "/opt/stack/new/nova/nova/compute/rpcapi.py",
line 618, in check_instance_shared_storage

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef] return cctxt.call(ctxt,
'check_instance_shared_storage', **msg_args)

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef]   File "/usr/local/lib/python3.6/dist-
packages/oslo_messaging/rpc/client.py", line 179, in call

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef] transport_options=self.transport_options)

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef]   File "/usr/local/lib/python3.6/dist-
packages/oslo_messaging/transport.py", line 128, in _send

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef] transport_options=transport_options)

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef]   File "/usr/local/lib/python3.6/dist-
packages/oslo_messaging/_drivers/amqpdriver.py", line 682, in send

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef] transport_options=transport_options)

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef]   File "/usr/local/lib/python3.6/dist-
packages/oslo_messaging/_drivers/amqpdriver.py", line 672, in _send

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef] raise result

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef] AttributeError: 'Instance' object has no attribute
'filename'

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef] Traceback (most recent call last):

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef]

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef]   File "/usr/local/lib/python3.6/dist-
packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef] res = self.dispatcher.dispatch(message)

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef]

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef]   File "/usr/local/lib/python3.6/dist-
packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch

Mar 25 13:46:28.041587 ubuntu-bionic-vexxhost-ca-ymq-1-0023683006 nova-
compute[8570]: ERROR nova.compute.manager [instance: 20d48d76-f93c-4b3c-
90a8-cd7f654b28ef] return self._do_dispa

[Yahoo-eng-team] [Bug 1108979] Re: v1 headers are not decoded

2021-03-25 Thread Abhishek Kekane
Glance does not support v1, and it has been removed since Ussuri.

** Changed in: glance
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1108979

Title:
  v1 headers are not decoded

Status in Glance:
  Won't Fix

Bug description:
  In the v1.1 API, metadata headers are assumed to be ASCII-only (which
  is the case in python-glanceclient currently - see bug 1108969).
  However, in theory they could be encoded in a number of ways. If bug
  1008969 is to be fixed, glance needs to support decoding the headers
  using the MIME header encoding rules from RFC 2047
  (http://www.ietf.org/rfc/rfc2047.txt ).

  For reference the format of the header field contents is defined in
  section 4.2 of RFC 2616:

     field-content  = <the OCTETs making up the field-value
                      and consisting of either *TEXT or combinations
                      of token, separators, and quoted-string>

  http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2

  ...which must be further interpreted using section 2.2:

The TEXT rule is only used for descriptive field contents and values
that are not intended to be interpreted by the message parser. Words
of *TEXT MAY contain characters from character sets other than
ISO-8859-1 only when encoded according to the rules of RFC 2047.

     TEXT   = <any OCTET except CTLs, but including LWS>

  http://www.w3.org/Protocols/rfc2616/rfc2616-sec2.html#sec2.2
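
  As a hedged illustration of the RFC 2047 decoding the report asks for (this
  is not Glance's implementation, just standard-library behaviour), Python can
  decode such MIME "encoded words" in header values like this:

      from email.header import decode_header, make_header

      def decode_rfc2047(value):
          # Turn RFC 2047 encoded words (e.g. =?utf-8?q?...?=) into a
          # unicode string; plain ASCII values pass through unchanged.
          return str(make_header(decode_header(value)))

      print(decode_rfc2047('=?utf-8?q?sn=C3=A5pshot?='))  # -> 'snåpshot'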

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1108979/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450041] Re: Glance v1 api returns 500 on NotAuthenticated in registry

2021-03-25 Thread Abhishek Kekane
Glance does not support v1, and it has been removed since Ussuri.

** Changed in: glance
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1450041

Title:
  Glance v1 api returns 500 on NotAuthenticated in registry

Status in Glance:
  Won't Fix

Bug description:
  If some operation with an image (v1 create/show/delete) fails due to
  keystone token expiration during a glance-registry operation, glance-api
  returns 500 InternalServerError, as the NotAuthenticated exception is not
  expected in any of the API methods.
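
  A minimal sketch of the kind of handling the report implies (not Glance's
  actual code; the NotAuthenticated class here is a stand-in for the registry
  client's token-expiry exception): catch the error and translate it into a
  401 instead of letting it escape as a 500.

      import webob.exc

      class NotAuthenticated(Exception):
          """Stand-in for the registry client's token-expiry error."""

      def show_image(registry_call, image_id):
          try:
              return registry_call(image_id)
          except NotAuthenticated as err:
              # Surface an explicit 401 rather than an unhandled 500.
              raise webob.exc.HTTPUnauthorized(explanation=str(err))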

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1450041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471215] Re: Need to add url encode before give response to image show for v1

2021-03-25 Thread Abhishek Kekane
Glance does not support v1, and it has been removed since Ussuri.

** Changed in: glance
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1471215

Title:
  Need to add url encode before give response to image show for v1

Status in Glance:
  Won't Fix

Bug description:
  If we use an HTTP proxy IP as the glance endpoint and add metadata for an
  image as in bug https://bugs.launchpad.net/horizon/+bug/1449260,
  then the HTTP response has the wrong HTTP type.

  reproduced on branch: stable/juno

  Steps to reproduce:
  Precondition steps:
  httpproxy glance endpoint: http://192.168.0.2:9292
  host IP with running glance-api: 192.168.0.6

  Step 1. Create glance image, e.g.:
  glance image-create --name test --id <image-id> --disk-format qcow2
  --container-format bare --file <image-file>
  response: http://paste.openstack.org/show/338516/

  Step 2. Show the glance image using the v1 glance API and curl:
  glance --debug --os-image-api-version 1 image-show <image-id>
  response: http://paste.openstack.org/show/338515/

  curl -v -i -X HEAD -H 'X-Auth-Token: <token>' http://192.168.0.2:9292/v1/images/<image-id>
  response: http://paste.openstack.org/show/338528/

  Step 3. Add metadata like in bug: 
https://bugs.launchpad.net/horizon/+bug/1449260 using horizon or python v2 
glance client. e.g.:
  cat glance_add_meta.py: http://paste.openstack.org/show/338529/)
  python glance_add_meta.py

  Step 4. Repeat step 2.

  Expected result: GET 200 response

  Actual result: 502 Bad Gateway responses:

  curl -v -i -X HEAD -H 'X-Auth-Token: <token>' http://192.168.0.2:9292/v1/images/<image-id>
  response: http://paste.openstack.org/show/338531/

  glance --debug --os-image-api-version 1 image-show <image-id>
  response: http://paste.openstack.org/show/338533/

  --

  If we use the host IP for the curl request (bypassing the httpproxy glance
  endpoint) we get 200 OK.
  curl -v -i -X HEAD -H 'X-Auth-Token: <token>' http://192.168.0.6:9292/v1/images/<image-id>
  response: http://paste.openstack.org/show/338535/

  If we use the host IP as os-image-url for the CLI request (bypassing the
  httpproxy glance endpoint) we get 200 OK.
  http://paste.openstack.org/show/338561/
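
  As a hedged sketch of the URL encoding the title asks for (a hypothetical
  helper, not Glance's actual code), the idea is to percent-encode metadata
  values before they are emitted as x-image-meta-property-* response headers,
  so unusual characters cannot corrupt the HTTP response seen through a proxy:

      from urllib.parse import quote

      def encode_image_meta_headers(properties):
          # Percent-encode every property value so the header stays plain ASCII.
          return {
              'x-image-meta-property-%s' % key: quote(str(value), safe='')
              for key, value in properties.items()
          }

      print(encode_image_meta_headers({'description': 'büild #1'}))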

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1471215/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1556023] Re: Direct v1 registry access can bypass Glance's policies

2021-03-25 Thread Abhishek Kekane
Glance does not support v1, and it has been removed since Ussuri.

** Changed in: glance
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1556023

Title:
  Direct v1 registry access can bypass Glance's policies

Status in Glance:
  Won't Fix
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  If a non-admin user can access the registry directly, then they can
  bypass Glance's policies.

  Here, for example, is a registry request which bypasses both the
  policy to mark an image as public, and to set the image location
  directly:

   PUT /images/37d89430-8bf2-433a-843e-909c752866df HTTP/1.1.
   Host: 127.0.0.1:9191.
   Content-Length: 606.
   Accept-Encoding: gzip, deflate.
   Accept: application/json.
   x-auth-token: dc9e09e4954d4b42983784b3c4642bd9.
   Connection: keep-alive.
   User-Agent: restfuzz-0.1.0.
   Content-Type: application/json.
   .

   {"image": {"status": "active", "deleted": false, "name":
  "testpublic", "container_format": "bare", "min_ram": 2147483647,
  "disk_format": "qcow2", "id": "37d89430-8bf2-433a-843e-909c752866df",
  "owner": "48c21395db63405d94aee1f965615d1c", "min_disk": 2147483647,
  "is_public": true, "properties": {"image_type": "snapshot",
  "instance_uuid": "7df74ad1-1caf-44ac-8f4b-4313f5fda5ed", "user_id":
  "76b4ded518594216832e06c261523074' or 1=1--", "base_image_ref":
  "1c8c3ba8-3a2f-4d06-b1ba-ac1791b599d8"}, "size": 6599958588555,
  "virtual_size": 6599958588551, "min_disk": 2147483647,
  "location":"http://google.com"}}

  Note that deployments should firewall the registry off; typical users should 
only have access to the Glance API endpoint.
  However, users such as a Swift administrator who does not have Glance admin 
powers but is able to access the 'private' network can bypass Glance's policies.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1556023/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1888395] Re: live migration of a vm using the single port binding work flow is broken in train as a result of the introduction of sriov live migration

2021-03-25 Thread Robie Basak
I see that this code change already exists in Ubuntu Hirsute, so I'm
setting that task to Fix Released.

> rewriting the bug description to follow the downstream bug template
could cause confusion in some cases so that might be better to keep in a
comment

FWIW, from the Ubuntu SRU team perspective I think it'd be absolutely
fine for the information to be in a comment rather than the bug
description if upstream would prefer that.

** Changed in: nova (Ubuntu)
   Status: New => Fix Released

** Changed in: nova (Ubuntu Focal)
   Status: Triaged => Fix Committed

** Tags added: verification-needed verification-needed-focal

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1888395

Title:
  live migration of a vm using the single port binding work flow is
  broken in train as a result of the introduction of sriov live
  migration

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive train series:
  New
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in Ubuntu Cloud Archive victoria series:
  Fix Released
Status in networking-opencontrail:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) train series:
  In Progress
Status in OpenStack Compute (nova) ussuri series:
  Fix Committed
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Focal:
  Fix Committed
Status in nova source package in Groovy:
  Fix Released

Bug description:
  [Impact]

  Live migration of instances in an environment that uses neutron
  backends that do not support multiple port bindings will fail with
  error 'NotImplemented', effectively rendering live-migration
  inoperable in these environments.

  This is fixed by first checking to ensure the backend supports the
  multiple port bindings before providing the port bindings.
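
  A minimal sketch of that guard (hypothetical names, not the actual nova
  patch): only attach the per-port binding details to the live-migration data
  when the network backend advertises multiple-port-binding support, and fall
  back to the single-binding workflow otherwise.

      def add_vif_migrate_data(migrate_data, vif_details,
                               supports_multiple_port_bindings):
          # Backends without the extension keep the legacy workflow, so the
          # destination host never tries to load fields it cannot provide.
          if supports_multiple_port_bindings:
              migrate_data['vifs'] = vif_details
          return migrate_data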

  [Test Plan]

  1. deploy a Train/Ussuri OpenStack cloud w/ at least 2 compute nodes
  using an SDN that does not support multiple port bindings (e.g.
  opencontrail).

  2. Attempt to perform a live migration of an instance.

  3. Observe that the live migration will fail without this fix due to
  the trace below (NotImplementedError: Cannot load 'vif_type' in the
  base class), and should succeed with this fix.

  
  [Where problems could occur]

  This affects the live migration code, so likely problems would arise
  in this area. Specifically, the check introduced is guarding
  information provided for instances using SR-IOV indirect migration.

  Regressions would likely occur in the form of live migration errors
  around features that rely on the multiple port bindings (e.g. the SR-
  IOV) and not the more generic/common use case. Errors may be seen in
  standard network providers that are included with distro packaging,
  but may also be seen in scenarios where proprietary SDNs are used.

  
  [Original Description]
  It was working in Queens but fails in Train. nova-compute at the target
  aborts with this exception:

  Traceback (most recent call last):
    File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 
165, in _process_incoming
  res = self.dispatcher.dispatch(message)
    File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", 
line 274, in dispatch
  return self._do_dispatch(endpoint, method, ctxt, args)
    File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", 
line 194, in _do_dispatch
  result = func(ctxt, **new_args)
    File "/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 79, 
in wrapped
  function_name, call_dict, binary, tb)
    File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, 
in __exit__
  self.force_reraise()
    File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, 
in force_reraise
  six.reraise(self.type_, self.value, self.tb)
    File "/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 69, 
in wrapped
  return f(self, context, *args, **kw)
    File "/usr/lib/python2.7/site-packages/nova/compute/utils.py", line 1372, 
in decorated_function
  return function(self, context, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 219, 
in decorated_function
  kwargs['instance'], e, sys.exc_info())
    File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, 
in __exit__self.force_reraise()
    File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, 
in force_reraise
  six.reraise(self.type_, self.value, self.tb)
    File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 207,
in decorated_function
  return function(self, context, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7007, 
in pre_live_migration
  bdm.save()
    File "/usr/lib/python2.7/

[Yahoo-eng-team] [Bug 1921388] Re: novaclient logs are logged as keystoneauth.session

2021-03-25 Thread Radomir Dopieralski
** Also affects: python-novaclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1921388

Title:
  novaclient logs are logged as keystoneauth.session

Status in OpenStack Dashboard (Horizon):
  New
Status in python-novaclient:
  New

Bug description:
  This is possibly as old as Train, but I only just noticed this when I
  tried to debug something.

  All logs that should be logged as novaclient instead get logged as
  keystoneauth.session; this means we can't easily configure the log
  level for novaclient, and it makes it hard to search the logs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1921388/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1921388] [NEW] novaclient logs are logged as keystoneauth.session

2021-03-25 Thread Radomir Dopieralski
Public bug reported:

This is possibly as old as Train, but I only just noticed this when I
tried to debug something.

All logs that should be logged as novaclient instead get logged as
keystoneauth.session; this means we can't easily configure the log
level for novaclient, and it makes it hard to search the logs.
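
A small illustration of the practical consequence (plain standard-library
logging, not a fix): because the client traffic is emitted under the
keystoneauth.session logger rather than a novaclient logger, only the former
actually controls what gets shown.

    import logging

    logging.basicConfig(level=logging.INFO)
    # Adjusting the novaclient logger has little effect on the HTTP traffic...
    logging.getLogger('novaclient').setLevel(logging.WARNING)
    # ...because the requests are logged under keystoneauth.session instead.
    logging.getLogger('keystoneauth.session').setLevel(logging.DEBUG)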

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1921388

Title:
  novaclient logs are logged as keystoneauth.session

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  This is possibly as old as Train, but I only just noticed this when I
  tried to debug something.

  All logs that should be logged as novaclient instead get logged as
  keystoneauth.session; this means we can't easily configure the log
  level for novaclient, and it makes it hard to search the logs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1921388/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1892851] Re: Staged boot, to fix integration of systemd generators

2021-03-25 Thread Lukas Märdian
** Changed in: netplan
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1892851

Title:
  Staged boot, to fix integration of systemd generators

Status in cloud-init:
  Invalid
Status in netplan:
  Fix Released
Status in netplan.io package in Ubuntu:
  Fix Released

Bug description:
  [Intro]
  Cloud-init makes use of the "netplan" systemd generator, but calls "netplan 
generate" manually at runtime, while currently executing the initial systemd 
boot transaction, instead of running it as intended via "systemctl 
daemon-reload" at systemd generator stage, due to restrictions it has regarding 
fetching of its data source (e.g. netplan YAML config).

  [Problem]
  This leads to problems at first boot, as the systemd unit dependencies are 
calculated after the generator stage, but ahead of the boot transaction (e.g. 
via systemctl daemon-reload), therefore the new service units and its 
dependencies, which are generated by manually calling systemd generators are 
ignored during the first-boot transaction. In subsequent boots (where the 
cloud-init data source, netplan YAML config and unit files are already in 
place), everything works as expected.

  It is a tricky situation, as cloud-init
   1/ does not have the full config to run the systemd generators (e.g. netplan 
YAML) yet before the systemd boot transaction. It first needs to fetch it via a 
DataSource, possibly via a network connection.
   2/ cannot execute the generators manually (e.g. "netplan generate") during 
the systemd boot transaction, because this way the newly generated service 
units and corresponding dependencies will be ignored.
   3/ cannot re-execute the systemd generators after the initial boot 
transaction, as it is already too late at this point and applications expect to 
have a readily configured network setup after cloud-final.target has been 
reached.

  [References]
  Such problems have been reported and discussed for WiFi on RaspberryPi (LP: 
#1870346) or Open vSwitch setups in MAAS 
(https://github.com/CanonicalLtd/netplan/pull/157), where some of the generated 
service units/dependencies (netplan-ovs-*.service or netplan-wpa-*.service, 
possibly SR-IOV units as well...) are not properly executed on first boot.

  [Suggestion]
  A possible solution I discussed with @xnox would be to re-engineer how 
cloud-init targets work a bit, by splitting up the cloud-init boot sequence 
into multiple stages, e.g.:

  * Start "Stage 0" systemd transaction: systemctl isolate cloud-stage0.target
- execute the init local modules
- setup basic networking (DHCP on eth0/ens3)
- fetch data source & place netplan YAML in /etc/netplan/
  * Finish "Stage 0" transaction
  * Call systemctl daemon-reload
- This will trigger all systemd generators (incl. netplan generate) and 
re-calculate all dependencies
  * Start "Stage 1" systemd transaction: systemctl isolate default.target
- execute all the normal cloud-init modules and start all the normal 
services, e.g. via cloud-final.target
  * Finish "Stage 1" transaction
  * System is now fully booted

  The idea here is to split up the boot sequence into two (or more?)
  systemd transactions, so we can call "systemctl daemon-reload" in
  between (but not within a running systemd transaction) to re-run all
  the generators and re-calculate all the dependencies. This way all
  generators would be used in their intended way and should work as
  expected, even on first boot.

  Doing that would also allow users to do interesting things with
  systemd via cloud-config, like changing the default.target from
  multi-user.target to emergency.target, adding / masking / removing
  units used in early boot, or "just writing fstab" and allowing systemd-
  fstab-generator to process it, mount things, etc...

  
  ### Config used to reproduce the problem in a LXD container:
  "systemctl status netplan-ovs-ovs0.service" will show that this unit has not 
be executed on first boot.

  config:
user.network-config: |
  # cloud-config
  version: 2
  bridges:
ovs0:
  addresses: [10.10.10.20/24]
  interfaces: [eth0.21]
  parameters:
stp: false
  openvswitch: {}
  ethernets:
eth0:
  addresses: [10.10.10.30/24]
  vlans:
eth0.21:
  id: 21
  link: eth0
  description: My OVS debugging profile
  devices:
eth0:
  name: eth0
  network: lxdbr0
  type: nic
root:
  path: /
  pool: default
  type: disk
  name: myovs

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1892851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/

[Yahoo-eng-team] [Bug 1916022] Re: L3HA Race condition during startup of the agent may cause inconsistent router's states

2021-03-25 Thread Edward Hope-Morley
This is already Fix Released all the way back to stable/train -
https://review.opendev.org/q/I2cc58c30cf844ee0ecf0611ecdec430086464790 -
so I will update LP to reflect that.

** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1916022

Title:
  L3HA Race condition during startup of the agent may cause inconsistent
  router's states

Status in neutron:
  Fix Released

Bug description:
  I observed that issue in Tobiko jobs, like e.g.
  
https://5f31a0f7dc56e4b42a89-207bd119fd0c3b58e9c78074b243256d.ssl.cf2.rackcdn.com/776284/2/check
  /devstack-tobiko-gate-
  multinode/257fd87/tobiko_results_05_verify_resources_scenario.html

  The problem is with HA routers. What happens is that when neutron-l3-agent
  and then keepalived are killed on the node which is master, a new node
  becomes master but the VIP address isn't removed from the qrouter namespace.
  Then some other node becomes the new master, as keepalived on the running
  nodes did its job.
  When the stopped agent is started, it first calls update_initial_state()
  https://github.com/openstack/neutron/blob/90309cf6e2f3ed5ae6d5f4cca3c5351c2ac67a13/neutron/agent/l3/ha_router.py#L159
  which will enqueue a state change event, possibly with the "primary" state
  (the old state from before the agent and keepalived went down).
  Immediately after that, it will also spawn the state change monitor, and
  that monitor will also enqueue a state change event. This one may already
  carry the correct "backup" state, but as a "primary" state change was
  already scheduled to be processed, the new one will be dropped.
  Due to that, it will end up with 2 nodes in the "primary" state.

  I think that calling update_initial_state() isn't really needed, as the
  state change monitor always handles notification of the initial state
  just after the process starts.
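
  A toy sketch of the race described above (hypothetical code, not neutron's
  actual event queue): once the stale "primary" event is pending, the later
  "backup" event coming from the state-change monitor is discarded, so the
  stale state wins.

      pending = {}

      def enqueue_state_change(router_id, state):
          if router_id in pending:
              # A state change for this router is already queued; drop the new one.
              return
          pending[router_id] = state

      enqueue_state_change('router-1', 'primary')  # stale state read at agent start
      enqueue_state_change('router-1', 'backup')   # real state from the monitor, dropped
      print(pending['router-1'])                   # -> 'primary'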

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1916022/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1865889] Re: [RFE] Routed provider networks support in OVN

2021-03-25 Thread Bernard Cafarelli
Doc is also merged, updating status on this one

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1865889

Title:
  [RFE] Routed provider networks support in OVN

Status in neutron:
  Fix Released

Bug description:
  The routed provider networks feature doesn't work properly with the OVN
  backend. While the API doesn't return any errors, all the ports are
  allocated to the same OVN Logical Switch and, besides providing no
  Layer 2 isolation whatsoever, it won't work when multiple segments
  using different physnets are added to such a network.

  The reason for the latter is that, currently, in core OVN, only one
  localnet port is supported per Logical Switch, so only one physical net
  can be associated with it. I can think of two different approaches:

  1) Change the OVN mech driver to logically separate Neutron segments:

  a) Create an OVN Logical Switch *per Neutron segment*. This has some
  challenges from a consistency point of view as right now there's a 1:1
  mapping between a Neutron Network and an OVN Logical Switch. Revision
  numbers, maintenance task, OVN DB Sync script, etcetera.

  b) Each of those Logical Switches will have a localnet port associated
  to the physnet of the Neutron segment.

  c) The port still belongs to the parent network, so all the CRUD operations
  over a port will require figuring out which underlying OVN LS applies
  (depending on which segment the port lives in).
  The same goes for other objects (e.g. OVN Load Balancers, gw ports - if
  attaching a multisegment network to a Neutron router as a gateway is a valid
  use case at all).

  e) Deferred allocation. A port can be created in a multisegment
  Neutron network but the IP allocation is deferred to the time where a
  compute node is assigned to an instance. In this case the OVN mech
  driver might need to move around the Logical Switch Port from the
  Logical Switch of the parent to that of the segment where it falls
  (can be prone to race conditions :?).

  
  2) Core OVN changes:

  The current limitation is that right now only one localnet port is
  allowed per Logical Switch so we can't map different physnets to it.
  If we add support for multiple localnet ports in core OVN, we can have
  all the segments living in the same OVN Logical Switch.

  My idea here would be:

  a) Per each Neutron segment, we create a localnet port in the single
  OVN Logical Switch with its physnet and vlan id (if any). Eg.

  name: provnet-f7038db6-7376-4b83-b57b-3f456bea2b80
  options : {network_name=segment1}
  parent_name : []
  port_security   : []
  tag : 2016
  tag_request : []
  type: localnet

  
  name: provnet-84487aa7-5ac7-4f07-877e-1840d325e3de
  options : {network_name=segment2}
  parent_name : []
  port_security   : []
  tag : 2017
  tag_request : []
  type: localnet

  And both ports would belong to the LS corresponding to the
  multisegment Neutron network.

  b) In this case, when ovn-controller sees that a port in that network
  has been bound to it, all it needs to create is the patch port to the
  provider bridge that the bridge mappings configuration dictates.

  E.g

  compute1:bridge-mappings = segment1:br-provider1
  compute2:bridge-mappings = segment2:br-provider2

  When a port in the multisegment network gets bound to compute1, ovn-
  controller will create a patch-port between br-int and br-provider1.
  The restriction here is that on a given hypervisor, only ports
  belonging to the same segment will be present. ie. we can't mix VMs on
  different segments on the same hypervisor.

  
  c) Minor changes are required in the Neutron side (just creating the localnet 
port upon segment creation).

  
  We need to discuss if the restriction mentioned earlier makes sense. If not, 
perhaps we need to drop this approach completely or look for core OVN 
alternatives.

  
  I'd lean toward approach number 2, as it seems the least invasive in terms
  of code changes, but there's the catch described above that may make it a
  no-go, or we may need to explore other ways to eliminate that restriction
  somehow in core OVN.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1865889/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp