[Yahoo-eng-team] [Bug 1837339] Re: CIDRs of the form 12.34.56.78/0 should be an error

2020-04-01 Thread Jeremy Stanley
Per Tristan's suggestion, the VMT will treat this as a security
hardening opportunity, no advisory needed.

** Changed in: ossa
   Status: Incomplete => Won't Fix

** Information type changed from Public Security to Public

** Tags added: security

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1837339

Title:
  CIDRs of the form 12.34.56.78/0 should be an error

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in neutron:
  New
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  New

Bug description:
  The problem is that some users do not understand how CIDRs work, and
  incorrectly use /0 when they are trying to specify a single IP or a
  subnet in an Access Rule.  Unfortunately 12.34.56.78/0 means the same
  thing as 0.0.0.0/0.

  The proposed fix is to insist that /0 only be used with 0.0.0.0/0 and
  the IPv6 equivalent ::/0 when entering or updating Access Rule CIDRs
  via the dashboard.

  I am labeling this as a security vulnerability since it leads to naive
  users creating instances with ports open to the world when they didn't
  intend to do that.
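
  A minimal sketch of the proposed check, using Python's standard
  ipaddress module (the helper name and the exact Horizon validation
  hook are hypothetical):

    import ipaddress

    def validate_access_rule_cidr(cidr):
        # Keep the host part: ip_network(..., strict=False) would mask
        # 12.34.56.78/0 down to 0.0.0.0/0 and hide the mistake.
        iface = ipaddress.ip_interface(cidr)
        if iface.network.prefixlen == 0 and int(iface.ip) != 0:
            raise ValueError(
                '%s means the same as %s; use /32 (IPv4) or /128 (IPv6) '
                'for a single address' % (cidr, iface.network))
        return cidr

    # validate_access_rule_cidr('12.34.56.78/0')  raises ValueError
    # validate_access_rule_cidr('0.0.0.0/0')      passes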

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1837339/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1870110] Re: neutron-rally-task fails in rally_openstack.task.scenarios.neutron.trunk.CreateAndListTrunks

2020-04-01 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/716562
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=a90654ae5a8f4cc8588f0f595af091f8cd986441
Submitter: Zuul
Branch: master

commit a90654ae5a8f4cc8588f0f595af091f8cd986441
Author: Bence Romsics 
Date:   Wed Apr 1 13:30:10 2020 +0200

Revert "Subcribe trunk & subport events to set subport id"

This reverts commit 8ebc635a18fd23fd6595551a703961a4d4392948.

The reverted commit does a mass update on all subports of a trunk.
This is not in line with the original design, since it causes huge
API-side performance effects.

I think that's the reason why we started seeing gate failures of
rally_openstack.task.scenarios.neutron.trunk.CreateAndListTrunks
in neutron-rally-task.

Change-Id: I6f0fd91c62985207af8dbf29aae463b2b478d5d2
Closes-Bug: #1870110
Related-Bug: #1700428


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1870110

Title:
  neutron-rally-task fails in
  rally_openstack.task.scenarios.neutron.trunk.CreateAndListTrunks

Status in neutron:
  Fix Released

Bug description:
  It seems we have a gate failure in neutron-rally-task. It fails in
  rally_openstack.task.scenarios.neutron.trunk.CreateAndListTrunks. For
  example:

  
https://zuul.opendev.org/t/openstack/build/9c9970da456d4145a174f73c90529dd2/log/job-output.txt#41274
  
https://zuul.opendev.org/t/openstack/build/8319cc946cc9407a90467f68757c11e8/log/job-output.txt#41269

  
http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%229696%2Fv2.0%2Ftrunks%20timed%20out%5C%22%20AND%20voting:1&from=864000s

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1870110/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1837339] Re: CIDRs of the form 12.34.56.78/0 should be an error

2020-04-01 Thread Sam Morrison
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1837339

Title:
  CIDRs of the form 12.34.56.78/0 should be an error

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in neutron:
  New
Status in OpenStack Security Advisory:
  Incomplete
Status in OpenStack Security Notes:
  New

Bug description:
  The problem is that some users do not understand how CIDRs work, and
  incorrectly use /0 when they are trying to specify a single IP or a
  subnet in an Access Rule.  Unfortunately 12.34.56.78/0 means the same
  thing as 0.0.0.0/0.

  The proposed fix is to insist that /0 only be used with 0.0.0.0/0 and
  the IPv6 equivalent ::/0 when entering or updating Access Rule CIDRs
  via the dashboard.

  I am labeling this as a security vulnerability since it leads to naive
  users creating instances with ports open to the world when they didn't
  intend to do that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1837339/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1870228] [NEW] cloud-init metadata fallback broken

2020-04-01 Thread James Denton
Public bug reported:

I came across an issue today for a user who was experiencing issues
connecting to metadata at 169.254.169.254. For a long time, cloud-init
has had a fallback mechanism that allowed it to contact the metadata
service at http://<dhcp-server>/latest/meta-data if
http://169.254.169.254/latest/meta-data was unavailable, like so:

[  157.574921] cloud-init[1313]: 2020-03-31 09:53:24,158 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: 
request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries 
exceeded with url: /2009-04-04/meta-data/instance-id (Caused by 
ConnectTimeoutError(, 'Connection to 169.254.169.254 timed out. (connect 
timeout=50.0)'))]
[  208.629083] cloud-init[1313]: 2020-03-31 09:54:15,214 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: 
request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries 
exceeded with url: /2009-04-04/meta-data/instance-id (Caused by 
ConnectTimeoutError(, 'Connection to 169.254.169.254 timed out. (connect 
timeout=50.0)'))]
[  226.639267] cloud-init[1313]: 2020-03-31 09:54:33,224 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: 
request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries 
exceeded with url: /2009-04-04/meta-data/instance-id (Caused by 
ConnectTimeoutError(, 'Connection to 169.254.169.254 timed out. (connect 
timeout=17.0)'))]
[  227.640812] cloud-init[1313]: 2020-03-31 09:54:34,225 - 
DataSourceEc2.py[CRITICAL]: Giving up on md from 
['http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 120 seconds
[  227.651134] cloud-init[1313]: 2020-03-31 09:54:34,236 - 
url_helper.py[WARNING]: Calling 
'http://10.19.48.2/latest/meta-data/instance-id' failed [0/120s]: request error 
[('Connection aborted.', error(111, 'Connection refused'))]
[  228.655226] cloud-init[1313]: 2020-03-31 09:54:35,240 - 
url_helper.py[WARNING]: Calling 
'http://10.19.48.2/latest/meta-data/instance-id' failed [1/120s]: request error 
[('Connection aborted.', error(111, 'Connection refused'))]

In this Stein environment, isolated metadata is enabled, and the qdhcp
namespace has a listener at 169.254.169.254:80. Previous versions of
Neutron had the listener on 0.0.0.0:80, which helped facilitate the
fallback mechanism described above. The bug/patch where this was changed
is here:

[1] https://bugs.launchpad.net/neutron/+bug/1745618

Having this functionality back would be nice. Thoughts?
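
For reference, a simplified sketch of the fallback order involved
(candidate URLs and the helper are illustrative; cloud-init's real
logic lives in url_helper.py / DataSourceEc2.py):

    import requests

    CANDIDATE_MD_URLS = [
        'http://169.254.169.254/latest/meta-data/instance-id',
        # fallback: the DHCP server address, e.g. 10.19.48.2 above
        'http://10.19.48.2/latest/meta-data/instance-id',
    ]

    def first_reachable(urls, timeout=5):
        # Return the first metadata URL that answers, or None.
        for url in urls:
            try:
                requests.get(url, timeout=timeout).raise_for_status()
                return url
            except requests.RequestException:
                continue
        return None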

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1870228

Title:
  cloud-init metadata fallback broken

Status in neutron:
  New

Bug description:
  I came across an issue today for a user who was experiencing issues
  connecting to metadata at 169.254.169.254. For a long time, cloud-init
  has had a fallback mechanism that allowed it to contact the
  metadata service at http://<dhcp-server>/latest/meta-data if
  http://169.254.169.254/latest/meta-data was unavailable, like so:

  [  157.574921] cloud-init[1313]: 2020-03-31 09:53:24,158 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: 
request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries 
exceeded with url: /2009-04-04/meta-data/instance-id (Caused by 
ConnectTimeoutError(, 'Connection to 169.254.169.254 timed out. (connect 
timeout=50.0)'))]
  [  208.629083] cloud-init[1313]: 2020-03-31 09:54:15,214 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: 
request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries 
exceeded with url: /2009-04-04/meta-data/instance-id (Caused by 
ConnectTimeoutError(, 'Connection to 169.254.169.254 timed out. (connect 
timeout=50.0)'))]
  [  226.639267] cloud-init[1313]: 2020-03-31 09:54:33,224 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: 
request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries 
exceeded with url: /2009-04-04/meta-data/instance-id (Caused by 
ConnectTimeoutError(, 'Connection to 169.254.169.254 timed out. (connect 
timeout=17.0)'))]
  [  227.640812] cloud-init[1313]: 2020-03-31 09:54:34,225 - 
DataSourceEc2.py[CRITICAL]: Giving up on md from 
['http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 120 seconds
  [  227.651134] cloud-init[1313]: 2020-03-31 09:54:34,236 - 
url_helper.py[WARNING]: Calling 
'http://10.19.48.2/latest/meta-data/instance-id' failed [0/120s]: request error 
[('Connection aborted.', error(111, 'Connection refused'))]
  [  228.655226] cloud-init[1313]: 2020-03-31 09:54:35,240 - 
ur

[Yahoo-eng-team] [Bug 1870226] [NEW] os-security-groups API policy is allowed for everyone even policy defaults is admin_or_owner

2020-04-01 Thread Ghanshyam Mann
Public bug reported:

The os-security-groups API policy defaults to admin_or_owner[1], but the
API is allowed for everyone.

We can see in the test that another project's context can access the API
- https://review.opendev.org/#/c/716779/

This is because the API does not pass the server's project_id in the policy
target
- 
https://github.com/openstack/nova/blob/7b51647f17c88c7c1ae321c59ab8a98c586d4b67/nova/api/openstack/compute/security_groups.py#L427

and if no target is passed, policy.py adds the default target, which is
nothing but context.project_id (allowing everyone who tries to access)
- 
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policy.py#L191

[1]
- 
https://github.com/openstack/nova/blob/7b51647f17c88c7c1ae321c59ab8a98c586d4b67/nova/policies/security_groups.py#L27
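
For reference, a minimal sketch of the usual fix pattern for this class
of bug (the rule name and variable names are illustrative; the actual
change is in the review above): pass the server's project_id as the
policy target instead of letting policy.py default it to the caller's
own project_id.

    # inside the API controller, after loading the instance:
    context.can(
        'os_compute_api:os-security-groups',   # illustrative rule name
        target={'project_id': instance.project_id})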

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1870226

Title:
  os-security-groups API policy is allowed for everyone even policy
  defaults is admin_or_owner

Status in OpenStack Compute (nova):
  New

Bug description:
  The os-security-groups API policy defaults to admin_or_owner[1], but
  the API is allowed for everyone.

  We can see in the test that another project's context can access the API
  - https://review.opendev.org/#/c/716779/

  This is because the API does not pass the server's project_id in the
  policy target
  - 
https://github.com/openstack/nova/blob/7b51647f17c88c7c1ae321c59ab8a98c586d4b67/nova/api/openstack/compute/security_groups.py#L427

  and if no target is passed, policy.py adds the default target, which is
  nothing but context.project_id (allowing everyone who tries to access)
  - 
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policy.py#L191

  [1]
  - 
https://github.com/openstack/nova/blob/7b51647f17c88c7c1ae321c59ab8a98c586d4b67/nova/policies/security_groups.py#L27

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1870226/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1870225] [NEW] ssh-key or random password

2020-04-01 Thread Dimitri John Ledkov
Public bug reported:

ssh-key or random password


I'd like to set up my user with an ssh key if there is one in the
metadata, and with no password auth / a locked account (cannot ssh with
a password, cannot log in on a tty with a password).

If no ssh key is available, I want the user to have a random password,
with tty login available and password-based ssh available.
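
A rough cloud-config sketch of the two modes being requested (the user
name and key are placeholders; picking between the modes automatically
based on the metadata is exactly what this report asks for):

    #cloud-config
    # mode 1: an ssh key exists in metadata -> key-only, locked password
    ssh_pwauth: false
    users:
      - name: ubuntu
        lock_passwd: true
        ssh_authorized_keys:
          - ssh-ed25519 AAAA...   # key taken from metadata

    #cloud-config
    # mode 2: no ssh key -> random password, tty and password ssh work
    ssh_pwauth: true
    chpasswd:
      list:
        - ubuntu:RANDOM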

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1870225

Title:
  ssh-key or random password

Status in cloud-init:
  New

Bug description:
  ssh-key or random password

  
  I'd like to set up my user with an ssh key if there is one in the
  metadata, and with no password auth / a locked account (cannot ssh
  with a password, cannot log in on a tty with a password).

  If no ssh key is available, I want the user to have a random password,
  with tty login available and password-based ssh available.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1870225/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1869862] Re: neutron-tempest-plugin-designate-scenario fails frequently with image service doesn't have supported version

2020-04-01 Thread Brian Haley
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1869862

Title:
  neutron-tempest-plugin-designate-scenario fails frequently with
  image service doesn't have supported version

Status in neutron:
  Invalid

Bug description:
  neutron-tempest-plugin-designate-scenario job fails frequently with the 
following error:
  ...
  2020-03-30 18:49:44.170062 | controller | + 
/opt/stack/neutron-tempest-plugin/devstack/functions.sh:overridden_upload_image:256
 :   openstack --os-cloud=devstack-admin --os-region-name=RegionOne image 
create ubuntu-16.04-server-cloudimg-amd64-disk1 --property hw_rng_model=virtio 
--public --container-format=bare --disk-format qcow2
  2020-03-30 18:49:46.242923 | controller | Failed to contact the endpoint at 
http://10.209.38.120/image for discovery. Fallback to using that endpoint as 
the base url.
  2020-03-30 18:49:46.247351 | controller | Failed to contact the endpoint at 
http://10.209.38.120/image for discovery. Fallback to using that endpoint as 
the base url.
  2020-03-30 18:49:46.247894 | controller | The image service for 
devstack-admin:RegionOne exists but does not have any supported versions.
  2020-03-30 18:49:46.384047 | controller | + 
/opt/stack/neutron-tempest-plugin/devstack/customize_image.sh:upload_image:1 :  
 exit_trap
  ...

  Example is here:
  
https://94d5d118ec3db75721c2-a00e37315b6784119b950c4b112ef30c.ssl.cf2.rackcdn.com/711610/13/check/neutron-tempest-plugin-designate-scenario/b23bb46/job-output.txt

  Logstash query:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22The%20image%20service%20for%20devstack-admin%3ARegionOne%20exists%20but%20does%20not%20have%20any%20supported%20versions.%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1869862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1869543] Re: GET limits API policy is allowed for everyone but policy defaults is admin_or_owner

2020-04-01 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/715672
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=4d37ffc111ae8bb43bd33fe995bc3686b065131b
Submitter: Zuul
Branch: master

commit 4d37ffc111ae8bb43bd33fe995bc3686b065131b
Author: Ghanshyam Mann 
Date:   Sat Mar 28 21:35:59 2020 -0500

Correct limits policy check_str

The limits API policy defaults to admin_or_owner[1], but the API is
allowed (which is expected) for everyone.

This is because the API does not pass the project_id in the policy
target that would let oslo.policy decide ownership[2]. If no target
is passed, policy.py adds the default target, which is nothing but
context.project_id (allowing everyone who tries to access)
- 
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policy.py#L191

There is no ownership concept in limits, and every project can get
its own limits. We need to make the default RULE_ANY, which means
allowed for everyone.

[1] 
https://github.com/openstack/nova/blob/403fc671a6877889d6fb70360e002d9b22b98fc9/nova/policies/limits.py#L27
Closes-bug: #1869543

Change-Id: I80617e57a6e062e6038e1b3447e116a5f9e23d24


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1869543

Title:
  GET limits API policy is allowed for everyone but policy defaults is
  admin_or_owner

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The limits API policy is allowed for everyone, but the policy defaults
  to admin_or_owner[1].

  This is because the API does not pass the project_id in the policy
  target that would let oslo.policy decide ownership.
  
https://github.com/openstack/nova/blob/403fc671a6877889d6fb70360e002d9b22b98fc9/nova/api/openstack/compute/limits.py#L77

  and if no target is passed, policy.py adds the default target, which is
  nothing but context.project_id (allowing everyone who tries to access)
  - 
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policy.py#L191

  There is no ownership concept in limits, and every project can get its
  own limits. We need to make the default RULE_ANY, which means allowed
  for everyone.

  [1]
  - 
https://github.com/openstack/nova/blob/403fc671a6877889d6fb70360e002d9b22b98fc9/nova/policies/limits.py#L27
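
  For reference, a sketch of what the merged fix amounts to (simplified
  from nova/policies/limits.py; '@' is oslo.policy's "always allow"
  literal, which nova exposes as RULE_ANY):

    from oslo_policy import policy

    limits_policies = [
        policy.DocumentedRuleDefault(
            name='os_compute_api:limits',
            check_str='@',   # RULE_ANY: any caller may read its own limits
            description='Show rate and absolute limits',
            operations=[{'method': 'GET', 'path': '/limits'}]),
    ]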

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1869543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1861460] Re: cloud-init should parse initramfs rendered netplan if present

2020-04-01 Thread Dimitri John Ledkov
maybe casper can hack this

** Also affects: casper (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1861460

Title:
  cloud-init should parse initramfs rendered netplan if present

Status in cloud-init:
  In Progress
Status in Ubuntu on IBM z Systems:
  In Progress
Status in casper package in Ubuntu:
  New

Bug description:
  initramfs-tools used to only execute klibc based networking with some
  resolvconf hooks.

  In recent releases, it has been greatly improved to use
  isc-dhcp-client instead of klibc, support vlan= key (like in
  dracut-network), bring up Z devices using chzdev, and generate netplan
  yaml from all of the above.

  Above improvements were driven in part by Oracle Cloud and in part by
  Subiquity netbooting on Z.

  Thus these days, instead of trying to reparse the klibc files in
  /run/net-*, cloud-init should simply import the /run/netplan/$device.yaml
  files as the networking information provided by ip=* on the command line.
  I do not currently see cloud-init doing that, e.g. in
  /cloudinit/net/cmdline.py.
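
  A rough sketch of the suggested import step (the file pattern is from
  the report; the function name and its placement in
  cloudinit/net/cmdline.py are hypothetical):

    import glob

    import yaml  # PyYAML, already a cloud-init dependency

    def read_initramfs_netplan():
        # Prefer initramfs-rendered netplan over reparsing the klibc
        # /run/net-* state files; return None to fall back to those.
        configs = []
        for path in sorted(glob.glob('/run/netplan/*.yaml')):
            with open(path) as f:
                configs.append(yaml.safe_load(f))
        return configs or None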

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1861460/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1864548] Re: There's no reason the ovn l3 plugin should create its own ovsdb connections

2020-04-01 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/708985
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=d92e71c297d84f767dfb9b2485e64853decce873
Submitter: Zuul
Branch: master

commit d92e71c297d84f767dfb9b2485e64853decce873
Author: Terry Wilson 
Date:   Thu Feb 20 22:26:44 2020 +

Use OVN mech driver OVSDB connections for l3 plugin

It is possible to re-use the mech driver ovsdb connections in the
ovn l3 plugin, saving the overhead of two db connections/in-memory
copies of the db per process.

Closes-Bug: #1864548
Change-Id: I022dea485f42cf76c4cec67ee43eed9a3770ec9c


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864548

Title:
  There's no reason the ovn l3 plugin should create its own ovsdb
  connections

Status in neutron:
  Fix Released

Bug description:
  The l3 ovn plugin creates its own NB/SB OVSDB connections when it
  could just use the ones already created by the mech driver. No reason
  to maintain two in-memory copies of each DB for each process.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864548/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1870205] [NEW] Missing PCRE libraries in binary dependency checking (bindep.txt) for Debian/Ubuntu platform

2020-04-01 Thread Khuong Luu
Public bug reported:

Background information
--

I was using `tox -e docs` to build the documentation locally on a fresh
Ubuntu 18.04 LTS.

$ uname -a
Linux kitchen 5.3.0-45-generic #37~18.04.1-Ubuntu SMP Fri Mar 27 15:58:10 UTC 
2020 x86_64 x86_64 x86_64 GNU/Linux

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:Ubuntu 18.04.4 LTS
Release:18.04
Codename:   bionic

Reproduce steps


$ tox -e bindep
succeeds

but

$ tox -e docs
fails, indicating "fatal error: pcre.h: No such file or directory"
(see the attached full logs below)

so I installed the needed package, which I found to be `libpcre3-dev`

$ apt install libpcre3-dev

then

$ tox -e bindep
succeeds

$ tox -e docs
succeeds

So apparently, `libpcre3-dev` is required, at least for Debian/Ubuntu
from 18.04.

If this bug is confirmed on this platform and on Ubuntu 16.04, Debian 8,
Debian 9, `libpcre3-dev` should be added to the `bindep.txt` file.
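
A sketch of the corresponding bindep.txt entry (platform:dpkg is
bindep's selector for Debian/Ubuntu-family packaging; whether a profile
such as [doc] should also scope it is left open):

    libpcre3-dev [platform:dpkg]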

** Affects: glance
 Importance: Undecided
 Assignee: Khuong Luu (organic-doge)
 Status: New

** Attachment added: "All commands and their detailed outputs, separated by 
multiple empty lines"
   
https://bugs.launchpad.net/bugs/1870205/+attachment/5344388/+files/full_commands_and_outputs.log

** Changed in: glance
 Assignee: (unassigned) => Khuong Luu (organic-doge)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1870205

Title:
  Missing PCRE libraries in binary dependency checking (bindep.txt) for
  Debian/Ubuntu platform

Status in Glance:
  New

Bug description:
  Background information
  --

  I was using `tox -e docs` to build the documentation locally on a
  fresh Ubuntu 18.04 LTS.

  $ uname -a
  Linux kitchen 5.3.0-45-generic #37~18.04.1-Ubuntu SMP Fri Mar 27 15:58:10 UTC 
2020 x86_64 x86_64 x86_64 GNU/Linux

  $ lsb_release -a
  No LSB modules are available.
  Distributor ID:   Ubuntu
  Description:  Ubuntu 18.04.4 LTS
  Release:  18.04
  Codename: bionic

  Reproduce steps
  

  $ tox -e bindep
  succeeds

  but

  $ tox -e docs
  fails, indicating "fatal error: pcre.h: No such file or directory"
  (see the attached full logs below)

  so I installed the needed package, which I found to be `libpcre3-dev`

  $ apt install libpcre3-dev

  then

  $ tox -e bindep
  succeeds

  $ tox -e docs
  succeeds

  So apparently, `libpcre3-dev` is required, at least for Debian/Ubuntu
  from 18.04.

  If this bug is confirmed on this platform and on Ubuntu 16.04, Debian
  8, Debian 9, `libpcre3-dev` should be added to the `bindep.txt` file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1870205/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1801111] Re: Incorrect link in Identity API v3 extensions (CURRENT) in Identity API Reference

2020-04-01 Thread Vishakha Agarwal
The example was removed in [1]. Thus marking this invalid.

[1] https://review.opendev.org/#/c/672979/

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1801111

Title:
  Incorrect link in Identity API v3 extensions (CURRENT) in Identity API
  Reference

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  In the API method https://developer.openstack.org/api-
  ref/identity/v3-ext/?expanded=request-an-unscoped-os-federation-token-
  detail#request-an-unscoped-os-federation-token, the link "Various
  OpenStack token responses" returns 404.

  ---
  Release: v3.11 on 'Thu Nov 1 02:39:46 2018, commit efd67a0'
  SHA: 
  Source: Can't derive source file URL
  URL: https://developer.openstack.org/api-ref/identity/v3-ext/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1801111/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1869841] Re: unpause server API policy is allowed for everyone even policy defaults is admin_or_owner

2020-04-01 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/716165
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=cd0b96176ac8e51a88fc6f388b31f3758089d87c
Submitter: Zuul
Branch: master

commit cd0b96176ac8e51a88fc6f388b31f3758089d87c
Author: Ghanshyam Mann 
Date:   Tue Mar 31 01:28:09 2020 -0500

Fix unpause server policy to be admin_or_owner

The unpause server API policy defaults to admin_or_owner[1], but the API
is allowed for everyone.

We can see in the test that another project's context can access the API
- https://review.opendev.org/#/c/716161/

This is because the API does not pass the server's project_id in the policy
target[2], and if no target is passed, policy.py adds the default target,
which is nothing but context.project_id (allowing everyone who tries to
access)[3].

This commit fixes this policy by passing the server's project_id in the
policy target.

Closes-bug: #1869841
Partial implement blueprint policy-defaults-refresh

[1]
- 
https://github.com/openstack/nova/blob/eb6bd04e4c27c70b5239dbe7c17607b37f4e87dd/nova/policies/pause_server.py#L38
[2]
- 
https://github.com/openstack/nova/blob/eb6bd04e4c27c70b5239dbe7c17607b37f4e87dd/nova/api/openstack/compute/pause_server.py#L58
[3]
- 
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policy.py#L191

Change-Id: Iacfaec63eb380863657b44c7f5ff14f6209e3857


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1869841

Title:
  unpause server API policy is allowed for everyone even policy defaults
  is admin_or_owner

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The unpause server API policy defaults to admin_or_owner[1], but the
  API is allowed for everyone.

  We can see in the test that another project's context can access the API
  - https://review.opendev.org/#/c/716161/

  This is because the API does not pass the server's project_id in the
  policy target
  - 
https://github.com/openstack/nova/blob/eb6bd04e4c27c70b5239dbe7c17607b37f4e87dd/nova/api/openstack/compute/pause_server.py#L58

  and if no target is passed, policy.py adds the default target, which is
  nothing but context.project_id (allowing everyone who tries to access)
  - 
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policy.py#L191

  [1]
  - 
https://github.com/openstack/nova/blob/eb6bd04e4c27c70b5239dbe7c17607b37f4e87dd/nova/policies/pause_server.py#L38

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1869841/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1850087] Re: keystone: token replaced at auth_context middleware

2020-04-01 Thread Colleen Murphy
** Changed in: keystone
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1850087

Title:
  keystone: token replaced at auth_context middleware

Status in OpenStack Identity (keystone):
  New

Bug description:
  Related bug:
  https://bugs.launchpad.net/keystone/+bug/1819036
  Related commit:
  
https://opendev.org/openstack/keystone/commit/a0e9efae720e4afb41c99f5b41933d62512825cd

  The fix for bug 1819036 does improve performance by reducing
  token validation to only once, but that fix caches the token in
  the AuthContextMiddleware, which can cause "race conditions" while
  handling a request:
    A new request arrives while the current request is being
    handled; the new token is then cached in the middleware and the
    current request's token is replaced.

  I think this is because the middleware is only instantiated once
  at keystone startup, and every request uses the same instance of
  that middleware class.
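
  A minimal sketch of the shared-instance problem described (class and
  attribute names are illustrative, not the keystone code):

    class AuthContextMiddlewareSketch(object):
        def __init__(self):
            self.token_auth = None   # one instance serves every request

        def fill_context(self, request):
            # Request B can overwrite this between request A's
            # validation and A's use of the cached token.
            self.token_auth = request.token
            return self.token_auth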

  Env:
  On stein:
  python-keystone-14.0.0
  openstack-keystone-14.0.0
  keystonemiddleware-5.3.0

  
  Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1850087/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1869791] Re: unlock server API policy is allowed for everyone even policy defaults is admin_or_owner

2020-04-01 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/716071
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=472a9d45038d0152fd8c179e078572b187183528
Submitter: Zuul
Branch: master

commit 472a9d45038d0152fd8c179e078572b187183528
Author: Ghanshyam Mann 
Date:   Mon Mar 30 10:04:52 2020 -0500

Fix unlock server policy to be admin_or_owner

The unlock server API policy defaults to admin_or_owner[1], but the API
is allowed for everyone.

We can see in the test that another project's context can access the API
- https://review.opendev.org/#/c/716057/

This is because the API does not pass the server's project_id in the policy
target[2], and if no target is passed, policy.py adds the default target,
which is nothing but context.project_id (allowing everyone who tries to
access)[3].

This commit fixes this policy by passing the server's project_id in the
policy target.

Closes-bug: #1869791
Partial implement blueprint policy-defaults-refresh

[1] 
https://github.com/openstack/nova/blob/7b51647f17c88c7c1ae321c59ab8a98c586d4b67/nova/policies/lock_server.py#L38
[2] 
https://github.com/openstack/nova/blob/a534ccc5a7dbe687277d9233883f500ec635fe04/nova/api/openstack/compute/lock_server.py#L46
[3] 
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policy.py#L191

Change-Id: I84dd0edd89d9c9d58f3136d90becc07d44c9b39a


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1869791

Title:
  unlock server API policy is allowed for everyone even policy defaults
  is admin_or_owner

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The unlock server API policy defaults to admin_or_owner[1], but the
  API is allowed for everyone.

  We can see in the test that another project's context can access the API
  - https://review.opendev.org/#/c/716057/

  This is because the API does not pass the server's project_id in the
  policy target
  - 
https://github.com/openstack/nova/blob/a534ccc5a7dbe687277d9233883f500ec635fe04/nova/api/openstack/compute/lock_server.py#L46

  and if no target is passed, policy.py adds the default target, which is
  nothing but context.project_id (allowing everyone who tries to access)
  - 
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policy.py#L191

  [1]
  - 
https://github.com/openstack/nova/blob/7b51647f17c88c7c1ae321c59ab8a98c586d4b67/nova/policies/lock_server.py#L38

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1869791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1867840] Re: os-flavor-access API policy should be admin only

2020-04-01 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/713697
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=51abb44ee7125f52f4c7be47473402107b1f7e05
Submitter: Zuul
Branch: master

commit 51abb44ee7125f52f4c7be47473402107b1f7e05
Author: Ghanshyam Mann 
Date:   Wed Mar 18 06:56:05 2020 -0500

Add new default roles in os-flavor-access policies

This adds new default roles in the os-flavor-access API policies.
This policy defaults to the SYSTEM_ADMIN role for add/remove
tenant access and SYSTEM_READER for listing the access information.

Also add tests to simulate the future where we drop the deprecation
fallback in the policy by overriding the rules with a version where
there are no deprecated rule options. Operators can do the same by
adding overrides in their policy files that match the default but
stop the rule deprecation fallback from happening.

Partial implement blueprint policy-defaults-refresh

Closes-Bug: #1867840

Change-Id: Ieeaafe923b78f03ddcbec18d8759aa1d76bcfcb1


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1867840

Title:
  os-flavor-access API policy should be admin only

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The os-flavor-access API policy defaults to admin_or_owner[1], but the
  API is allowed for everyone.

  This is because the API does not pass the server's project_id in the
  policy target
  - 
https://github.com/openstack/nova/blob/96f6622316993fb41f4c5f37852d4c879c9716a5/nova/api/openstack/compute/flavor_access.py#L45

  and if no target is passed, policy.py adds the default target, which is
  nothing but context.project_id (allowing everyone who tries to access)
  - 
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policy.py#L191

  I do not think there is an ownership concept for flavors, as multiple
  tenants can be granted access to a flavor. I think we should default
  this policy to admin only, and only admins should be able to list all
  the tenants who have access to a specific flavor.

  [1]
  - 
https://github.com/openstack/nova/blob/96f6622316993fb41f4c5f37852d4c879c9716a5/nova/policies/flavor_access.py#L49

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1867840/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1018253] Re: No error message prompt during attaching when mountpoint is occupied

2020-04-01 Thread Simon O'Donovan
CONFIRMED FOR: USSURI

** Changed in: nova
   Status: Expired => New

** Changed in: horizon
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1018253

Title:
  No error message prompt during attaching when mountpoint is occupied

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Correct me if I am wrong.
  When we attach a volume to an instance at the mountpoint /dev/vdb, I
  expect that there should be an error message prompt in horizon if
  /dev/vdb is already occupied by, for example, another instance.
  Currently there is no error message prompt.

  How to reproduce this bug:
  1. Launch one instance.
  2. Create a first volume and a second volume.
  3. Attach the first volume to the instance at the mountpoint /dev/vdb;
  this succeeds.
  4. Attach the second volume to the same instance at the same mountpoint
  /dev/vdb.

  Expected output:
  A message should tell the user that the mountpoint is occupied or not
  available.

  Actual output:
  No message shows. The second volume is still available.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1018253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1870096] Re: soft-affinity weight not normalized based on server group's maximum

2020-04-01 Thread Balazs Gibizer
** Also affects: nova/train
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Changed in: nova
   Importance: Undecided => Medium

** Tags added: scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1870096

Title:
  soft-affinity weight not normalized based on server group's maximum

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) pike series:
  New
Status in OpenStack Compute (nova) queens series:
  New
Status in OpenStack Compute (nova) rocky series:
  New
Status in OpenStack Compute (nova) stein series:
  New
Status in OpenStack Compute (nova) train series:
  New

Bug description:
  Description
  ===

  When using soft-affinity to schedule instances on the same host, the
  weight is unexpectedly low if a server was previously scheduled to any
  server-group with more members on a host. This low weight can then be
  easily outweighed by differences in resources (e.g. RAM/CPU).

  Steps to reproduce
  ==

  Do not restart nova-scheduler in the process or the bug doesn't
  appear. You need to change the ServerGroupSoftAffinityWeigher to
  actually log the weights it computes to see the problem.

  * Create a server-group with soft-affinity (let's call it A)
  * Create 6 servers in server-group A, one after the other so they end up on 
the same host.
  * Create another server-group with soft-affinity (B)
  * Create 1 server in server-group B
  * Create 1 server in server-group B and look at the scheduler's weights 
assigned to the hosts by the ServerGroupSoftAffinityWeigher.

  Expected result
  ===

  The weight assigned to the host by the ServerGroupSoftAffinityWeigher
  should be 1, as the maximum number of instances for server-group B is
  on that host (the one we created there before).

  Actual result
  =
  The weight assigned to the host by the ServerGroupSoftAffinityWeigher is 0.2, 
as the maximum number of instances ever encountered on a host is 5.

  Environment
  ===

  We noticed this on a queens version of nova a year ago. Can't give the
  exact commit anymore, but the code still looks broken in current
  master.

  I've opened a review-request for fixing this bug here:
  https://review.opendev.org/#/c/713863/
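
  A condensed sketch of the normalization problem (simplified from the
  weigher; the attribute names are illustrative):

    class SoftAffinityWeigherSketch(object):
        def __init__(self):
            self.max_seen = 0   # lives as long as the scheduler process

        def weigh(self, members_of_this_group_on_host):
            # Buggy: normalizes by the largest member count ever seen,
            # not by this server group's maximum for this request, so 1
            # member after a 5-member group weighs 1/5 = 0.2.
            self.max_seen = max(self.max_seen, members_of_this_group_on_host)
            return members_of_this_group_on_host / float(self.max_seen)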

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1870096/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1869929] Re: RuntimeError: maximum recursion depth exceeded while calling a Python object

2020-04-01 Thread Tobias Urdin
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1869929

Title:
  RuntimeError: maximum recursion depth exceeded while calling a Python
  object

Status in OpenStack Compute (nova):
  New
Status in oslo.config:
  New

Bug description:
  When testing an upgrade of the nova packages from Rocky to Train,
  the following issue occurs:

  versions:
  oslo.config 6.11.2
  oslo.concurrency 3.30.0
  oslo.versionedobjects 1.36.1
  oslo.db 5.0.2
  oslo.config 6.11.2
  oslo.cache 1.37.0

  It happens here:
  https://github.com/openstack/oslo.db/blob/5.0.2/oslo_db/api.py#L304
  where it calls register_opts for options.database_opts.

  This cmp operation:
  https://github.com/openstack/oslo.config/blob/6.11.2/oslo_config/cfg.py#L363

  If I edit the above cmp operation and add print statements before it,
  like this:

  if opt.dest in opts:
      print('left: %s' % str(opts[opt.dest]['opt'].name))
      print('right: %s' % str(opt.name))
      if opts[opt.dest]['opt'] != opt:
          raise DuplicateOptError(opt.name)

  It stops here:
  $ nova-compute --help
  left: sqlite_synchronous
  right: sqlite_synchronous
  Traceback (most recent call last):
  same exception
  RuntimeError: maximum recursion depth exceeded while calling a Python object
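
  A minimal reproduction of the recursion pattern shown in the traceback
  below (a sketch, not the oslo_db code): when a property getter raises
  AttributeError, Python falls back to __getattr__, which touches the
  same property again and never terminates.

    class LazyWrapper(object):
        @property
        def _api(self):
            # e.g. backend setup fails because of the option conflict
            raise AttributeError('backend failed to load')

        def __getattr__(self, key):
            return getattr(self._api, key)   # re-enters __getattr__

    # LazyWrapper().select_db_reader_mode
    # -> RuntimeError: maximum recursion depth exceeded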

  
  /usr/bin/nova-compute --help
  Traceback (most recent call last):
File "/usr/bin/nova-compute", line 6, in 
  from nova.cmd.compute import main
File "/usr/lib/python2.7/site-packages/nova/cmd/compute.py", line 29, in 

  from nova.compute import rpcapi as compute_rpcapi
File "/usr/lib/python2.7/site-packages/nova/compute/rpcapi.py", line 30, in 

  from nova.objects import service as service_obj
File "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 170, 
in 
  base.NovaObjectDictCompat):
File "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 351, 
in Service
  def _db_service_get_by_compute_host(context, host, use_slave=False):
File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 91, in 
select_db_reader_mode
  return IMPL.select_db_reader_mode(f)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getat

[Yahoo-eng-team] [Bug 1870114] [NEW] Trunk subports aren't treated as dvr serviced ports

2020-04-01 Thread Slawek Kaplonski
Public bug reported:

In the case of DVR, for DVR serviced ports, there are OpenFlow rules
installed in br-int to translate the port's MAC address to the gateway
MAC. It's in table 1 of br-int and looks like:

cookie=0xf8f0be9a44e579e7, duration=351.138s, table=1, n_packets=0,
n_bytes=0, idle_age=353, priority=20,dl_vlan=3,dl_dst=fa:16:3e:94:c3:a5
actions=mod_dl_src:fa:16:3e:48:5e:70,resubmit(,60)

But trunk subports aren't included in the list of DVR serviced device
owners, so traffic from those ports doesn't go through those rules and
never gets out of br-int.
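
For reference, a condensed sketch of the device-owner check involved
(modeled on neutron.common.utils.is_dvr_serviced; the exact constants
vary by release):

    from neutron_lib import constants

    DVR_SERVICED_DEVICE_OWNERS = (
        constants.DEVICE_OWNER_DHCP,
        constants.DEVICE_OWNER_LOADBALANCER,
    )

    def is_dvr_serviced(device_owner):
        # Trunk subports carry device_owner 'trunk:subport', which
        # matches neither branch, so they never get the table-1 MAC
        # translation flow shown above.
        return (device_owner.startswith(constants.DEVICE_OWNER_COMPUTE_PREFIX)
                or device_owner in DVR_SERVICED_DEVICE_OWNERS)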

** Affects: neutron
 Importance: Medium
 Assignee: Slawek Kaplonski (slaweq)
 Status: New


** Tags: l3-dvr-backlog trunk

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1870114

Title:
  Trunk subports aren't treated as dvr serviced ports

Status in neutron:
  New

Bug description:
  In the case of DVR, for DVR serviced ports, there are OpenFlow rules
  installed in br-int to translate the port's MAC address to the gateway
  MAC. It's in table 1 of br-int and looks like:

  cookie=0xf8f0be9a44e579e7, duration=351.138s, table=1, n_packets=0,
  n_bytes=0, idle_age=353,
  priority=20,dl_vlan=3,dl_dst=fa:16:3e:94:c3:a5
  actions=mod_dl_src:fa:16:3e:48:5e:70,resubmit(,60)

  But trunk subports aren't included in the list of DVR serviced device
  owners, so traffic from those ports doesn't go through those rules and
  never gets out of br-int.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1870114/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1870110] [NEW] neutron-rally-task fails in rally_openstack.task.scenarios.neutron.trunk.CreateAndListTrunks

2020-04-01 Thread Bence Romsics
Public bug reported:

It seems we have a gate failure in neutron-rally-task. It fails in
rally_openstack.task.scenarios.neutron.trunk.CreateAndListTrunks. For
example:

https://zuul.opendev.org/t/openstack/build/9c9970da456d4145a174f73c90529dd2/log/job-output.txt#41274
https://zuul.opendev.org/t/openstack/build/8319cc946cc9407a90467f68757c11e8/log/job-output.txt#41269

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1870110

Title:
  neutron-rally-task fails in
  rally_openstack.task.scenarios.neutron.trunk.CreateAndListTrunks

Status in neutron:
  New

Bug description:
  It seems we have a gate failure in neutron-rally-task. It fails in
  rally_openstack.task.scenarios.neutron.trunk.CreateAndListTrunks. For
  example:

  
https://zuul.opendev.org/t/openstack/build/9c9970da456d4145a174f73c90529dd2/log/job-output.txt#41274
  
https://zuul.opendev.org/t/openstack/build/8319cc946cc9407a90467f68757c11e8/log/job-output.txt#41269

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1870110/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1870096] [NEW] soft-affinity weight not normalized based on server group's maximum

2020-04-01 Thread Johannes Kulik
Public bug reported:

Description
===

When using soft-affinity to schedule instances on the same host, the
weight is unexpectedly low if a server was previously scheduled to any
server-group with more members on a host.

Steps to reproduce
==

Do not restart nova-scheduler in the process or the bug doesn't appear.

* Create a server-group with soft-affinity (let's call it A)
* Create 6 servers in server-group A, one after the other so they end up on the 
same host.
* Create another server-group with soft-affinity (B)
* Create 1 server in server-group B
* Create 1 server in server-group B and look at the scheduler's weights 
assigned to the hosts by the ServerGroupSoftAffinityWeigher.

Expected result
===

The weight assigned to the host by the ServerGroupSoftAffinityWeigher
should be 1, as the maximum number of instances for server-group B is on
that host (the one we created there before).

Actual result
=
The weight assigned to the host by the ServerGroupSoftAffinityWeigher is 0.2, 
as the maximum number of instances ever encountered on a host is 5.

Environment
===

We noticed this on a queens version of nova a year ago. Can't give the
exact commit anymore, but the code still looks broken in current master.

I've opened a review-request for fixing this bug here:
https://review.opendev.org/#/c/713863/

** Affects: nova
 Importance: Undecided
 Assignee: Johannes Kulik (jkulik)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1870096

Title:
  soft-affinity weight not normalized based on server group's maximum

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===

  When using soft-affinity to schedule instances on the same host, the
  weight is unexpectedly low if a server was previously scheduled to any
  server-group with more members on a host.

  Steps to reproduce
  ==

  Do not restart nova-scheduler in the process or the bug doesn't
  appear.

  * Create a server-group with soft-affinity (let's call it A)
  * Create 6 servers in server-group A, one after the other so they end up on 
the same host.
  * Create another server-group with soft-affinity (B)
  * Create 1 server in server-group B
  * Create 1 server in server-group B and look at the scheduler's weights 
assigned to the hosts by the ServerGroupSoftAffinityWeigher.

  Expected result
  ===

  The weight assigned to the host by the ServerGroupSoftAffinityWeigher
  should be 1, as the maximum number of instances for server-group B is
  on that host (the one we created there before).

  Actual result
  =
  The weight assigned to the host by the ServerGroupSoftAffinityWeigher is 0.2, 
as the maximum number of instances ever encountered on a host is 5.

  Environment
  ===

  We noticed this on a queens version of nova a year ago. Can't give the
  exact commit anymore, but the code still looks broken in current
  master.

  I've opened a review-request for fixing this bug here:
  https://review.opendev.org/#/c/713863/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1870096/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1869708] Re: monasca ui throws an ''AnonymousUser' object has no attribute 'project_id'" error when not logged in

2020-04-01 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/712794
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=e4fd69292c4a8340eba33f5c9d516796472e9269
Submitter: Zuul
Branch: master

commit e4fd69292c4a8340eba33f5c9d516796472e9269
Author: Jacek Tomasiak 
Date:   Thu Mar 12 21:50:49 2020 +0100

Authenticate before Authorization

When user is not logged in and given Dashboard has some `permissions`
defined, `require_perms` decorator was raising `NotAuthorized('You are
not authorized to access %s')` instead of `NotAuthenticated('Please log
in to continue.')`.
This was caused by the order of decorating the views. The decorator
which is applied last is called first in the chain as it wraps the
decorators which were applied before.
This means that to check for authentication before checking permissions
we need to apply the `require_auth` decorator after `require_perms`.

Closes-Bug: 1869708
Change-Id: I94d3fa5c1472bb72c9111cab14c6e05180f88589
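
A small standalone illustration of the ordering rule the commit relies
on (the decorator names mirror the ones discussed; the bodies are
simplified stand-ins for Horizon's real decorators):

    def require_auth(view):
        def wrapper(request):
            if not getattr(request, 'is_authenticated', False):
                raise RuntimeError('Please log in to continue.')
            return view(request)
        return wrapper

    def require_perms(view):
        def wrapper(request):
            if not getattr(request, 'has_perms', False):
                raise RuntimeError('You are not authorized.')
            return view(request)
        return wrapper

    @require_auth       # applied last -> outermost -> runs first
    @require_perms
    def monitoring_view(request):
        return 'ok'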


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1869708

Title:
  monasca ui throws an ''AnonymousUser' object has no attribute
  'project_id'" error when not logged in

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  # Copy from internal bug tracker
  When accessing the monitoring URL in the OpenStack dashboard while not
  logged in (or after waiting a day and trying to access the site again,
  having been logged out in the meantime), the site throws an error:


  [Fri Feb 01 11:01:57.218156 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796] Internal Server Error: /monitoring/
  [Fri Feb 01 11:01:57.218229 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796] Traceback (most recent call last):
  [Fri Feb 01 11:01:57.218239 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796]   File 
"/usr/lib/python2.7/site-packages/django/core/handlers/exception.py", line 41, 
in inner
  [Fri Feb 01 11:01:57.218257 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796] response = get_response(request)
  [Fri Feb 01 11:01:57.218265 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796]   File 
"/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 187, in 
_get_response
  [Fri Feb 01 11:01:57.218273 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796] response = self.process_exception_by_middleware(e, 
request)
  [Fri Feb 01 11:01:57.218281 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796]   File 
"/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 227, in 
process_exception_by_middleware
  [Fri Feb 01 11:01:57.218290 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796] response = middleware_method(request, exception)
  [Fri Feb 01 11:01:57.218306 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796]   File 
"/usr/lib/python2.7/site-packages/horizon/middleware/base.py", line 131, in 
process_exception
  [Fri Feb 01 11:01:57.218314 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796] status=403)
  [Fri Feb 01 11:01:57.218322 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796]   File 
"/usr/lib/python2.7/site-packages/django/shortcuts.py", line 30, in render
  [Fri Feb 01 11:01:57.218329 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796] content = loader.render_to_string(template_name, 
context, request, using=using)
  [Fri Feb 01 11:01:57.218345 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796]   File 
"/usr/lib/python2.7/site-packages/django/template/loader.py", line 68, in 
render_to_string
  [Fri Feb 01 11:01:57.218352 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796] return template.render(context, request)
  [Fri Feb 01 11:01:57.218370 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796]   File 
"/usr/lib/python2.7/site-packages/django/template/backends/django.py", line 66, 
in render
  [Fri Feb 01 11:01:57.218378 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796] return self.template.render(context)
  [Fri Feb 01 11:01:57.218385 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796]   File 
"/usr/lib/python2.7/site-packages/django/template/base.py", line 207, in render
  [Fri Feb 01 11:01:57.218392 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796] return self._render(context)
  [Fri Feb 01 11:01:57.218400 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796]   File 
"/usr/lib/python2.7/site-packages/django/template/base.py", line 199, in _render
  [Fri Feb 01 11:01:57.218407 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796] return self.nodelist.render(context)
  [Fri Feb 01 11:01:57.218414 2019] [wsgi:error] [pid 32737] [remote 
10.163.0.83:60796]   File 
"/usr/lib/python2.7/site-packages/django/template/base.py", line