[Yahoo-eng-team] [Bug 1750546] [NEW] neutron-specs sphinx warning treated as error, <1.6.1 req not taken into account

2018-02-20 Thread Thomas Morin
Public bug reported:

The neutron-specs docs build is currently failing in master, preventing
anything from being merged (and preventing the nice HTML previews in gerrit
changes for new specs):

2018-02-19 13:31:49.216472 | ubuntu-xenial | Warning, treated as error:
2018-02-19 13:31:49.216571 | ubuntu-xenial | 
/home/zuul/src/git.openstack.org/openstack/neutron-specs/doc/source/specs/kilo/ml2-ovs-portsecurity.rst:370:Citation
 [ovs_firewall_driver] is not referenced.

Sphinx 1.6.x warns about unused citation references.

For some reason, sphinx 1.6.5 ends up being used despite the
"sphinx>=1.5.1,<1.6.1" line in requirements.txt:

2018-02-19 13:31:35.564024 | 
2018-02-19 13:31:35.564265 | LOOP [sphinx : Run sphinx]
2018-02-19 13:31:36.535037 | ubuntu-xenial | Running Sphinx v1.6.5
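
A quick hedged check (illustration only, not part of the job) confirms that
1.6.5 does not satisfy the quoted pin, i.e. the constraint is simply not being
applied when the docs venv is built:

# Illustration only: parse the pin from requirements.txt with setuptools and
# test the Sphinx version reported by the job against it.
from pkg_resources import Requirement

req = Requirement.parse("sphinx>=1.5.1,<1.6.1")
print("1.6.5" in req)  # False -> the version the job used is outside the pin
print("1.6.0" in req)  # True  -> a version the pin would allow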

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750546

Title:
  neutron-specs sphinx warning treated as error, <1.6.1 req not taken
  into account

Status in neutron:
  New

Bug description:
  The neutron-specs docs build is currently failing in master, preventing
  anything from being merged (and preventing the nice HTML previews in gerrit
  changes for new specs):

  2018-02-19 13:31:49.216472 | ubuntu-xenial | Warning, treated as error:
  2018-02-19 13:31:49.216571 | ubuntu-xenial | 
/home/zuul/src/git.openstack.org/openstack/neutron-specs/doc/source/specs/kilo/ml2-ovs-portsecurity.rst:370:Citation
 [ovs_firewall_driver] is not referenced.

  Sphinx 1.6.x warns about unused citation references.

  For some reason, sphinx 1.6.5 ends up being used despite the
  "sphinx>=1.5.1,<1.6.1" line in requirements.txt:

  2018-02-19 13:31:35.564024 | 
  2018-02-19 13:31:35.564265 | LOOP [sphinx : Run sphinx]
  2018-02-19 13:31:36.535037 | ubuntu-xenial | Running Sphinx v1.6.5

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1750546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1747654] Re: VPNaaS: enable sha384/sha512 auth algorithms for *Swan drivers

2018-02-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/541250
Committed: 
https://git.openstack.org/cgit/openstack/neutron-vpnaas/commit/?id=03b6cc81876df2423c17532b8f2e0ef2bbb6a84b
Submitter: Zuul
Branch:master

commit 03b6cc81876df2423c17532b8f2e0ef2bbb6a84b
Author: Hunt Xu 
Date:   Tue Feb 6 18:21:21 2018 +0800

Enable sha384/sha512 auth algorithms for *Swan drivers

Closes-Bug: #1747654
Change-Id: I84d3ac6379bc0b6d483b557f38f3a462f0f1f1bf


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1747654

Title:
  VPNaaS: enable sha384/sha512 auth algorithms for  *Swan drivers

Status in neutron:
  Fix Released

Bug description:
  When adding sha384 and sha512 auth algorithms for vendor drivers (bug
  #1638152), the commit message said "Openswan, Strongswan, Libreswan
  and Cisco CSR driver doesn't support" sha384 and sha512 as auth
  algorithms. However, after some research, all the *Swan drivers do
  support these two algorithms, so it is better to enable sha384/sha512
  with the *Swan drivers to improve security.

  - For StrongSwan, wiki pages dating back to mid-2014: [1][2].
  - For LibreSwan, a wiki page dating back to May 2016: [3].
  - For OpenSwan, it is not well documented; however, the code (last changed in
Jan 2014) shows its awareness of these two algorithms: [4]

  [1]. 
https://wiki.strongswan.org/projects/strongswan/wiki/IKEv1CipherSuites/16#Integrity-Algorithms
  [2]. 
https://wiki.strongswan.org/projects/strongswan/wiki/IKEv2CipherSuites/35#Integrity-Algorithms
  [3]. 
https://libreswan.org/wiki/index.php?title=FAQ&oldid=20707#Which_ciphers_.2F_algorithms_does_libreswan_support.3F
  [4]. 
https://github.com/xelerance/Openswan/blob/master/lib/libopenswan/alg_info.c
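
  A minimal, purely illustrative sketch of the kind of change involved (the
names below are hypothetical, not the actual neutron-vpnaas validator code):

# Hypothetical whitelist; the real drivers keep their supported algorithms
# in driver/validator-specific structures.
SWAN_SUPPORTED_AUTH_ALGORITHMS = ['sha1', 'sha256']

# Since the *Swan backends are known to support the stronger digests,
# enabling them is essentially a matter of extending the whitelist.
SWAN_SUPPORTED_AUTH_ALGORITHMS += ['sha384', 'sha512']

def validate_auth_algorithm(algorithm):
    if algorithm not in SWAN_SUPPORTED_AUTH_ALGORITHMS:
        raise ValueError('unsupported auth algorithm: %s' % algorithm)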

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1747654/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1748658] Re: Restarting Neutron containers which make use of network namespaces doesn't work

2018-02-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/542858
Committed: 
https://git.openstack.org/cgit/openstack/tripleo-heat-templates/commit/?id=fb27465c6fb734d7106130765f918ed91bc329ad
Submitter: Zuul
Branch:master

commit fb27465c6fb734d7106130765f918ed91bc329ad
Author: Brent Eagles 
Date:   Fri Feb 9 11:11:00 2018 -0330

Mount netns as shared to persist namespaces

Dataplane is breaking when containers are killed. Keeping the namespaces
around allows the network objects (ports, bridges) etc. to remain
intact without the containers running.

Closes-Bug: #1748658
Change-Id: I092500e9ec0820347ba0f865f3c24f828980af3a


** Changed in: tripleo
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1748658

Title:
  Restarting Neutron containers which make use of network namespaces
  doesn't work

Status in neutron:
  New
Status in tripleo:
  Fix Released

Bug description:
  When DHCP, L3, Metadata or OVN-Metadata containers are restarted, they
  cannot set up the namespaces they used previously:

  
  [heat-admin@overcloud-novacompute-0 neutron]$ sudo docker restart 8559f5a7fa45
  8559f5a7fa45


  [heat-admin@overcloud-novacompute-0 neutron]$ tail -f 
/var/log/containers/neutron/networking-ovn-metadata-agent.log 
  2018-02-09 08:34:41.059 5 CRITICAL neutron [-] Unhandled error: 
ProcessExecutionError: Exit code: 2; Stdin: ; Stdout: ; Stderr: RTNETLINK 
answers: Invalid argument
  2018-02-09 08:34:41.059 5 ERROR neutron Traceback (most recent call last):
  2018-02-09 08:34:41.059 5 ERROR neutron   File 
"/usr/bin/networking-ovn-metadata-agent", line 10, in 
  2018-02-09 08:34:41.059 5 ERROR neutron sys.exit(main())
  2018-02-09 08:34:41.059 5 ERROR neutron   File 
"/usr/lib/python2.7/site-packages/networking_ovn/cmd/eventlet/agents/metadata.py",
 line 17, in main
  2018-02-09 08:34:41.059 5 ERROR neutron metadata_agent.main()
  2018-02-09 08:34:41.059 5 ERROR neutron   File 
"/usr/lib/python2.7/site-packages/networking_ovn/agent/metadata_agent.py", line 
38, in main
  2018-02-09 08:34:41.059 5 ERROR neutron agt.start()
  2018-02-09 08:34:41.059 5 ERROR neutron   File 
"/usr/lib/python2.7/site-packages/networking_ovn/agent/metadata/agent.py", line 
147, in start
  2018-02-09 08:34:41.059 5 ERROR neutron self.sync()
  2018-02-09 08:34:41.059 5 ERROR neutron   File 
"/usr/lib/python2.7/site-packages/networking_ovn/agent/metadata/agent.py", line 
56, in wrapped
  2018-02-09 08:34:41.059 5 ERROR neutron return f(*args, **kwargs)
  2018-02-09 08:34:41.059 5 ERROR neutron   File 
"/usr/lib/python2.7/site-packages/networking_ovn/agent/metadata/agent.py", line 
169, in sync
  2018-02-09 08:34:41.059 5 ERROR neutron metadata_namespaces = 
self.ensure_all_networks_provisioned()
  2018-02-09 08:34:41.059 5 ERROR neutron   File 
"/usr/lib/python2.7/site-packages/networking_ovn/agent/metadata/agent.py", line 
350, in ensure_all_networks_provisioned
  2018-02-09 08:34:41.059 5 ERROR neutron netns = 
self.provision_datapath(datapath)
  2018-02-09 08:34:41.059 5 ERROR neutron   File 
"/usr/lib/python2.7/site-packages/networking_ovn/agent/metadata/agent.py", line 
294, in provision_datapath
  2018-02-09 08:34:41.059 5 ERROR neutron veth_name[0], veth_name[1], 
namespace)
  2018-02-09 08:34:41.059 5 ERROR neutron   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 182, in 
add_veth
  2018-02-09 08:34:41.059 5 ERROR neutron self._as_root([], 'link', 
tuple(args))
  2018-02-09 08:34:41.059 5 ERROR neutron   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 94, in 
_as_root
  2018-02-09 08:34:41.059 5 ERROR neutron namespace=namespace)
  2018-02-09 08:34:41.059 5 ERROR neutron   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 102, in 
_execute
  2018-02-09 08:34:41.059 5 ERROR neutron 
log_fail_as_error=self.log_fail_as_error)
  2018-02-09 08:34:41.059 5 ERROR neutron   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 151, in 
execute
  2018-02-09 08:34:41.059 5 ERROR neutron raise ProcessExecutionError(msg, 
returncode=returncode)
  2018-02-09 08:34:41.059 5 ERROR neutron ProcessExecutionError: Exit code: 2; 
Stdin: ; Stdout: ; Stderr: RTNETLINK answers: Invalid argument
  2018-02-09 08:34:41.059 5 ERROR neutron 
  2018-02-09 08:34:41.059 5 ERROR neutron 
  2018-02-09 08:34:41.177 21 INFO oslo_service.service [-] Parent process has 
died unexpectedly, exiting
  2018-02-09 08:34:41.178 21 INFO eventlet.wsgi.server [-] (21) wsgi exited, 
is_accepting=True

  
  An easy way to reproduce the bug:

  [heat-admin@overcloud-novacompute-0 ~]$ sudo docker exec -u root -it
  5c5f254a9321bd74b5911f46acb9513574c2cd9a3c59805a85cffd960bcc864d
  /bin/bash

  [root@overcloud-novacompute-0 /]# ip netns a my_netns
  [root@overcloud-novacompute

[Yahoo-eng-team] [Bug 1750555] [NEW] Revisit database rolling upgrade documentation

2018-02-20 Thread Abhishek Kekane
Public bug reported:

Since db_sync now internally uses the E-M-C (expand, migrate, contract)
pattern, we need to revisit the entire database rolling-upgrade documentation.

** Affects: glance
 Importance: High
 Assignee: Abhishek Kekane (abhishek-kekane)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Abhishek Kekane (abhishek-kekane)

** Changed in: glance
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1750555

Title:
  Revisit database rolling upgrade documentation

Status in Glance:
  New

Bug description:
  Since db_sync now internally uses the E-M-C (expand, migrate, contract)
  pattern, we need to revisit the entire database rolling-upgrade documentation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1750555/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750563] [NEW] IP address isn't assigned to a subnet gateway interface in some cases.

2018-02-20 Thread Anton Kremenetsky
Public bug reported:

Hello everyone,

It seems I have caught a race condition bug in neutron-l3-agent.
We have automated tests. One of the tests performs the following scenario: it
creates various resources such as a network, a subnet and so on, then connects
the subnet to a router and performs other steps that are not related to this
bug. The test runs in a cycle with different parameters, but the Neutron
resources are always created with the same parameters: the subnet always has
the CIDR 192.168.0.0/24 and its gateway interface gets the IP address
192.168.0.1. The bug happens at the moment the subnet is attached to the
router. Note that it is not permanent; sometimes it happens and sometimes it
does not.
The symptom is that the instances (VMs) cannot be reached via their floating
IPs; it is not possible to ping them. After some debugging it turned out that
the subnet gateway interface sometimes does not get an IP address. For
example, when the bug happens the interface looks like this:

root@network-N6-rmfqne:/var/log/neutron# sudo /usr/bin/neutron-rootwrap 
/etc/neutron/rootwrap.conf ip netns exec 
qrouter-a35f384b-549e-41c6-8076-2283be384e1b ip a
...
389: qr-7dc17e0a-97@if786:  mtu 1450 qdisc 
noqueue state UP group default qlen 1000
link/ether fa:16:3e:1e:47:78 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::f816:3eff:fe1e:4778/64 scope link
   valid_lft forever preferred_lft forever


In a successful case it looks like this:

root@network-N6-rmfqne:/var/log/neutron# sudo /usr/bin/neutron-rootwrap 
/etc/neutron/rootwrap.conf ip netns exec 
qrouter-a35f384b-549e-41c6-8076-2283be384e1b ip a
...
393: qr-cccf794e-86@if794:  mtu 1450 qdisc 
noqueue state UP group default qlen 1000
link/ether fa:16:3e:54:0c:11 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.0.1/24 scope global qr-cccf794e-86
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe54:c11/64 scope link
   valid_lft forever preferred_lft forever

We are using Juju to deploy OpenStack. The neutron-gateway version is 9.4.1
and the charm version is 244.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750563

Title:
  IP address isn't assigned to a subnet gateway interface in some cases.

Status in neutron:
  New

Bug description:
  Hello everyone,

  It seems I have caught a race condition bug in neutron-l3-agent.
  We have automated tests. One of the tests performs the following scenario: it
creates various resources such as a network, a subnet and so on, then connects
the subnet to a router and performs other steps that are not related to this
bug. The test runs in a cycle with different parameters, but the Neutron
resources are always created with the same parameters: the subnet always has
the CIDR 192.168.0.0/24 and its gateway interface gets the IP address
192.168.0.1. The bug happens at the moment the subnet is attached to the
router. Note that it is not permanent; sometimes it happens and sometimes it
does not.
  The symptom is that the instances (VMs) cannot be reached via their floating
IPs; it is not possible to ping them. After some debugging it turned out that
the subnet gateway interface sometimes does not get an IP address. For
example, when the bug happens the interface looks like this:

  root@network-N6-rmfqne:/var/log/neutron# sudo /usr/bin/neutron-rootwrap 
/etc/neutron/rootwrap.conf ip netns exec 
qrouter-a35f384b-549e-41c6-8076-2283be384e1b ip a
  ...
  389: qr-7dc17e0a-97@if786:  mtu 1450 qdisc 
noqueue state UP group default qlen 1000
  link/ether fa:16:3e:1e:47:78 brd ff:ff:ff:ff:ff:ff link-netnsid 0
  inet6 fe80::f816:3eff:fe1e:4778/64 scope link
 valid_lft forever preferred_lft forever

  
  In a successful case it looks like this:

  root@network-N6-rmfqne:/var/log/neutron# sudo /usr/bin/neutron-rootwrap 
/etc/neutron/rootwrap.conf ip netns exec 
qrouter-a35f384b-549e-41c6-8076-2283be384e1b ip a
  ...
  393: qr-cccf794e-86@if794:  mtu 1450 qdisc 
noqueue state UP group default qlen 1000
  link/ether fa:16:3e:54:0c:11 brd ff:ff:ff:ff:ff:ff link-netnsid 0
  inet 192.168.0.1/24 scope global qr-cccf794e-86
 valid_lft forever preferred_lft forever
  inet6 fe80::f816:3eff:fe54:c11/64 scope link
 valid_lft forever preferred_lft forever

  We are using Juju to deploy OpenStack. The neutron-gateway version is 9.4.1
  and the charm version is 244.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1750563/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750562] [NEW] Dynamic routing: Docs for speaker scheduling are incorrect

2018-02-20 Thread Dr. Jens Harbott
Public bug reported:

According to https://docs.openstack.org/neutron-dynamic-
routing/latest/contributor/testing.html no automatic scheduling of BGP
speakers to dynamic routing agents happens. However, at least as of
Pike this is no longer true: the first speaker that is created is
automatically scheduled to the first available agent.

** Affects: neutron
 Importance: Undecided
 Assignee: Dr. Jens Harbott (j-harbott)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750562

Title:
  Dynamic routing: Docs for speaker scheduling are incorrect

Status in neutron:
  In Progress

Bug description:
  According to https://docs.openstack.org/neutron-dynamic-
  routing/latest/contributor/testing.html no automatic scheduling of BGP
  speakers to dynamic routing agents happens. However, at least as of
  Pike this is no longer true: the first speaker that is created
  is automatically scheduled to the first available agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1750562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750591] [NEW] Admin-deployed qos policy breaks tenant port creation

2018-02-20 Thread Dr. Jens Harbott
Public bug reported:

This is mainly following https://docs.openstack.org/neutron/pike/admin
/config-qos.html, steps to reproduce:

1. Admin creates qos policy "default" in admin project
2. User creates network "mynet" in user project
3. Admin applies qos policy to tenant network via "openstack network set 
--qos-policy default mynet"
4. User tries to create (an instance with) a port in "mynet".

Result: Neutron fails with "Internal server error". q-svc.log shows

2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
[req-f20ed290-5a24-44fe-9b2a-9cc2a6caacbc 8585b4c745184f538091963331dad1c7 
8b039227731847a0b62eddfde3ab17c0 - default default] POST failed.: 
CallbackFailure: Callback
 
neutron.services.qos.qos_plugin.QoSPlugin._validate_create_port_callback--9223372036854470149
 failed with "'NoneType' object has no attribute 'rules'"
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
Traceback (most recent call last):
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/pecan/core.py", line 678, in __call__
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
self.invoke_controller(controller, args, kwargs, state)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/pecan/core.py", line 569, in invoke_controller
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
result = controller(*args, **kwargs)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 93, in wrapped
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
setattr(e, '_RETRY_EXCEEDED', True)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
self.force_reraise()
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
six.reraise(self.type_, self.value, self.tb)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 89, in wrapped
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
return f(*args, **kwargs)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 150, in wrapper
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
ectxt.value = e.inner_exc
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
self.force_reraise()
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
six.reraise(self.type_, self.value, self.tb)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
return f(*args, **kwargs)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 128, in wrapped
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
LOG.debug("Retry wrapper got retriable exception: %s", e)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
self.force_reraise()
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
six.reraise(self.type_, self.value, self.tb)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 124, in wrapped
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
return f(*dup_args, **dup_kwargs)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/neutron/pecan_wsgi/controllers/utils.py", 
line 76, in wrapped
2018-02-20 13:45:49.572 
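
A hedged sketch of the failure mode visible in the traceback; this is not the
actual QoSPlugin code, and get_policy and the argument names are hypothetical:

# Hypothetical sketch: the port-create callback resolves the QoS policy
# attached to the network; if the requesting tenant's context cannot see the
# admin-owned policy, the lookup returns None and dereferencing .rules fails
# with "'NoneType' object has no attribute 'rules'".
def _validate_create_port_callback(context, network_policy_id, get_policy):
    policy = get_policy(context, network_policy_id)  # may be None for tenants
    if policy is None:
        # The unpatched code has no such guard, hence the 500 error above.
        return
    for rule in policy.rules:
        pass  # per-rule validation would go here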

[Yahoo-eng-team] [Bug 1750590] [NEW] space before keyword in /etc/neutron/neutron.conf makes service fail

2018-02-20 Thread daemonna
Public bug reported:

Putting a space before

  verbose = true

will cause

  systemctl status neutron-dhcp-agent.service

to fail.

Removing the space fixes it, but this should NOT happen, since this is a plain
text file (not YAML or similar) and a leading space therefore shouldn't make
any difference.


CentOS 7.3, OpenStack 14.1
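
A small illustration of what happens with the leading space, using Python's
stdlib configparser (a sketch of the INI convention, not the oslo.config code
path Neutron actually uses): a line that starts with whitespace is treated as
a continuation of the previous option's value rather than as a new option.

import configparser

good = "[DEFAULT]\ndebug = false\nverbose = true\n"
bad = "[DEFAULT]\ndebug = false\n  verbose = true\n"   # note leading spaces

p = configparser.ConfigParser()
p.read_string(good)
print(p.get("DEFAULT", "verbose"))         # -> true

p = configparser.ConfigParser()
p.read_string(bad)
print(p.has_option("DEFAULT", "verbose"))  # -> False
print(repr(p.get("DEFAULT", "debug")))     # -> 'false\nverbose = true'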

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750590

Title:
  space before keyword in /etc/neutron/neutron.conf makes service fail

Status in neutron:
  New

Bug description:
  Putting a space before

verbose = true

  will cause

systemctl status neutron-dhcp-agent.service

  to fail.

  Removing the space fixes it, but this should NOT happen, since this is a
  plain text file (not YAML or similar) and a leading space therefore
  shouldn't make any difference.

  
  CentOS 7.3, OpenStack 14.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1750590/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1685678] Re: swap volume operation may leak credentials into debug logs

2018-02-20 Thread Matt Riedemann
Actually cinderclient should be fixed:

https://review.openstack.org/#/c/395119/

** No longer affects: python-cinderclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1685678

Title:
  swap volume operation may leak credentials into debug logs

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  The swap volume code in the compute service logs old and new volume
  connection_info dicts to debug here:

  
https://github.com/openstack/nova/blob/1ad41c0b0c844b65424251dbc3399039789b064f/nova/compute/manager.py#L4930

  The new connection_info comes from Cinder:

  
https://github.com/openstack/nova/blob/1ad41c0b0c844b65424251dbc3399039789b064f/nova/compute/manager.py#L4901

  That's a plain dict from the response which may contain credentials.

  The old connection_info comes from the
  nova.objects.block_device.BlockDeviceMapping object, which uses
  SensitiveStringField to sanitize the field's value when logged:

  
https://github.com/openstack/nova/blob/1ad41c0b0c844b65424251dbc3399039789b064f/nova/compute/manager.py#L4904

  
https://github.com/openstack/oslo.versionedobjects/blob/1.23.0/oslo_versionedobjects/fields.py#L280

  The new connection_info could contain credentials though, so we should
  mask those when logging it, even at debug level.
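
  A hedged illustration of the kind of masking that would help, using
oslo.utils helpers (a sketch only, not the actual nova patch):

# Sketch: mask credential-like keys before handing the dict to LOG.debug.
from oslo_utils import strutils

new_cinfo = {
    'driver_volume_type': 'iscsi',
    'data': {'auth_username': 'cinder', 'auth_password': 'S3cr3t'},
}
print(strutils.mask_dict_password(new_cinfo))
# auth_password is replaced with '***'; the other keys are left untouched.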

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1685678/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1685678] Re: swap volume operation may leak credentials into debug logs

2018-02-20 Thread Matt Riedemann
This is also a problem in python-cinderclient which logs the response
body:

https://github.com/openstack/python-
cinderclient/blob/84346b5dba784bfeb3a53ae83d400ba264263cf6/cinderclient/apiclient/client.py#L141

** Also affects: python-cinderclient
   Importance: Undecided
   Status: New

** Changed in: python-cinderclient
   Status: New => Triaged

** Changed in: python-cinderclient
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1685678

Title:
  swap volume operation may leak credentials into debug logs

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  The swap volume code in the compute service logs old and new volume
  connection_info dicts to debug here:

  
https://github.com/openstack/nova/blob/1ad41c0b0c844b65424251dbc3399039789b064f/nova/compute/manager.py#L4930

  The new connection_info comes from Cinder:

  
https://github.com/openstack/nova/blob/1ad41c0b0c844b65424251dbc3399039789b064f/nova/compute/manager.py#L4901

  That's a plain dict from the response which may contain credentials.

  The old connection_info comes from the
  nova.objects.block_device.BlockDeviceMapping object, which uses
  SensitiveStringField to sanitize the field's value when logged:

  
https://github.com/openstack/nova/blob/1ad41c0b0c844b65424251dbc3399039789b064f/nova/compute/manager.py#L4904

  
https://github.com/openstack/oslo.versionedobjects/blob/1.23.0/oslo_versionedobjects/fields.py#L280

  The new connection_info could contain credentials though, so we should
  mask those when logging it, even at debug level.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1685678/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1740025] Re: LVM Sparse Volumes broken

2018-02-20 Thread Sylvain Bauza
Honestly, I wonder whether LVM sparse LVs ever worked. It looks like we don't
have any functional tests for them (just the unit tests), and we merged the
blueprint about 5 years ago.

If you also look at the comment in
https://github.com/openstack/nova/blob/c1eb6f0/nova/virt/libvirt/imagebackend.py#L688-L689
it looks like sparse LVs should be deprecated.

FWIW, that deprecation comment was also merged 5 years ago, which means we
don't really support this now.

Also, given that we now have Cinder volumes, it is possible to ask Cinder for
thin provisioning instead of using LVM sparse LVs, as described in
https://docs.openstack.org/cinder/latest/admin/blockstorage-
over-subscription.html

So, given the above, I am setting this bug to Won't Fix, and we will try to
actually deprecate sparse LVs by the next release.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1740025

Title:
  LVM Sparse Volumes broken

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Description
  ===
  Instance creation with LVM Sparse volumes as root disk is broken.

  Steps to reproduce
  ==
  Configure nova-compute to use LVM with sparse volumes:
  Add to /etc/nova/nova.conf and restart nova-compute:
  [libvirt]
  images_type=lvm
  images_volume_group=
  sparse_logical_volumes=true

  Schedule an instance with an image that is bigger than 64 MB.

  Expected result
  ===
  Instance is created successfully.

  Actual result
  =
  Error while writing the image to the logical volume because the image is
bigger than the preallocated 64 MB volume.

  Please see attached log file.

  Environment
  ===
  OpenStack Release: Pike
  Nova Version: 16.0.3 (CentOS 7 RPM)

  Hypervisor Type: Libvirt + KVM

  Storage Type: LVM (Sparse)

  Networking Type (not applicable): Neutron with LinuxBridge

  References
  ==

  This bug was already reported to Mirantis OpenStack:
  https://bugs.launchpad.net/mos/+bug/1591084

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1740025/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1730845] Re: [RFE] support a port-behind-port API

2018-02-20 Thread Miguel Lavalle
** Changed in: neutron
   Status: Expired => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1730845

Title:
  [RFE] support a port-behind-port API

Status in neutron:
  Incomplete

Bug description:
  This RFE requests a unified API for a port-behind-port behaviour. This
  behaviour has a few use-cases:

  * MACVLAN - Identify that a port is behind a port using Allowed
  Address Pairs, and identifying the behaviour based on MAC.

  * HA Proxy behind Amphora - Identify that a port is behind a port
  using Allowed Address Pairs and identifying the behaviour based on IP.

  * Trunk Port (VLAN aware VMs) - Identify that a port is behind a port
  using the Trunk Port API and identifying the behaviour based on VLAN
  tags.

  This RFE proposes to extend the Trunk Port API to support the first
  two use-cases. The rationale is that in an SDN environment, it makes
  more sense to explicitly state the intent, rather than have the
  implementation infer the intent by matching Allowed Address Pairs and
  other existing ports.

  This will allow implementations to handle these use cases in a
  simpler, flexible, and more robust manner than done today.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1730845/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1739735] Re: boot baremetal server which has multi-interface will randomly choose baremetal server interface

2018-02-20 Thread Sylvain Bauza
As stated above by the Ironic team, they provided a new feature to
deterministically provide the network ports.

Accordingly, setting this bug report to Won't Fix.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1739735

Title:
  boot baremetal server which has multi-interface will randomly choose
  baremetal server interface

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Description
  ===
  In my ironic environment, my baremetal node has two interfaces, and we
register two ports with two MACs in ironic. When a tenant user boots a
baremetal server with one network, nova randomly chooses one of the MACs, so
the baremetal server randomly uses eth0 or eth1 as its interface.


  Steps to reproduce
  ==
  1. Register a baremetal server with multiple interfaces.
  2. Boot an ironic server with one network.


  Expected result
  ===
  The server uses eth0 as its interface.

  
  Actual result
  =
  Some of the ironic nodes will use eth0, but some will choose eth1.

  
  Environment
  ===
  Newton, but this problem also existed in latest master.

  
  Analysis
  ===
  We think the problem is that nova uses a set for available_macs in
allocate_for_instance (nova/network/neutronv2/api.py). Typically, eth0 has a
smaller MAC than eth1.
  So maybe we can use a sorted list for available_macs and use the smallest
MAC first to create the port, as sketched below.
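
  A hedged sketch of that suggestion (illustrative values, not nova's actual
allocate_for_instance code):

# MACs registered for the node's two ports; a set gives no ordering
# guarantee, so either one may be picked first.
available_macs = {'52:54:00:aa:00:02', '52:54:00:aa:00:01'}

# Deterministic alternative: sort ascending and consume from the front, so
# the lowest MAC (typically eth0) is always used for the first port.
ordered_macs = sorted(available_macs)
first_mac = ordered_macs.pop(0)
print(first_mac)   # -> 52:54:00:aa:00:01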

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1739735/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1744983] Re: coverage job fails with timeout

2018-02-20 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: Confirmed => Fix Released

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1744983

Title:
  coverage job fails with timeout

Status in neutron:
  Fix Released

Bug description:
  It happened in Ocata: http://logs.openstack.org/41/532941/1/gate
  /openstack-tox-cover/5a26283/job-
  output.txt.gz#_2018-01-22_23_39_21_734729

  2018-01-22 23:02:09.046252 | ubuntu-xenial |   load_tests = 
test_base.optimize_db_test_loader(__file__)
  2018-01-22 23:39:21.734729 | RUN END RESULT_TIMED_OUT: [untrusted : 
git.openstack.org/openstack-infra/zuul-jobs/playbooks/tox/run.yaml@master]

  It took at least 37 minutes for the coverage report to run.

  In a good run from the same branch, we can see that the same tox run
  took ~18 minutes: http://logs.openstack.org/61/532461/1/gate
  /openstack-tox-cover/2ebf0d6/job-
  output.txt.gz#_2018-01-17_10_27_36_867873

  2018-01-17 10:27:36.867873 | ubuntu-xenial |   load_tests = 
test_base.optimize_db_test_loader(__file__)
  2018-01-17 10:45:02.053109 | ubuntu-xenial | 
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \

  There are no leads in the logs. I am also not sure whether the failure
  affects other branches.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1744983/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750615] [NEW] The v3 application credential API should account for different scopes

2018-02-20 Thread Lance Bragstad
Public bug reported:

Keystone implemented scope_types for oslo.policy RuleDefault objects in
the Queens release. In order to take full advantage of scope_types,
keystone is going to have to evolve policy enforcement checks in the
user API. This is documented in each patch with FIXMEs [0].

The following acceptance criteria describes how the v3 application
credentials API should behave with tokens from multiple scopes:

GET
/v3/users/{user_id}/application_credentials/{application_credential_id}

- Someone with a system role assignment that passes the check string should be 
able to get any application credential in the system (system-scoped)
- Someone with a domain role assignment that passes the check string should be 
able to get an application credential for a user and project within the domain 
they administer (domain-scoped)
- Someone with a project role assignment that passes the check string should be 
able to get an application credential for a user that has a role assignment on 
the project they administer, the response should be limited to only application 
credentials for the project they administer (project-scoped)
- Someone with a valid token should be able to call this API for an application 
credential associated to their user (project-scoped, user-scoped?)

GET /v3/users/{user_id}/application_credentials

- Someone with a system role assignment that passes the check string should be 
able to list all application credentials for any user in the system 
(system-scoped)
- Someone with a domain role assignment that passes the check string should be 
able to list application credentials for any user in their domain, the response 
should only contain application credentials for projects in the domain they 
administer (domain-scoped)
- Someone with a project role assignment that passes the check string should be 
able to list application credentials for users who have role assignment on the 
project they administer, the response should only return application 
credentials associated to the project they administer? (project-scoped)
- Someone with a valid token should be able to call this API and list all 
application credentials they've created

POST /v3/users/{user_id}/application_credentials

- Someone with a project role assignment should be able to create
application credentials for the project they have a role assignment on
(project-scoped)

DELETE
/v3/users/{user_id}/application_credentials/{application_credential_id}

- Someone with a system role assignment that passes the check string should be 
able to delete any application credential for any user in the system 
(system-scoped)
- Someone with a domain role assignment that passes the check string should be 
able to delete any application credential for a user and project within the 
domain they administer (domain-scoped)
- Someone with a project role assignment that passes the check string should be 
able to delete any application credential for a user that has a role assignment 
on their project and for application credentials associated to the project they 
administer (project-scoped)
- Someone with a valid token should be able to delete any application 
credential they've created (project-scoped, user-scoped?)

[0]
https://github.com/openstack/keystone/blob/68df7bf1f3b3d6ab3f691f59f1ce6de6b0b1deab/keystone/common/policies/application_credential.py#L24-L32
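
For context, a hedged illustration of what a scope_types-aware rule definition
looks like with oslo.policy (the check string and description below are
placeholders, not keystone's actual defaults):

from oslo_policy import policy

rule = policy.DocumentedRuleDefault(
    name='identity:get_application_credential',
    check_str='role:reader and system_scope:all',   # placeholder check string
    description='Show application credential details.',
    operations=[{'path': ('/v3/users/{user_id}/application_credentials/'
                          '{application_credential_id}'),
                 'method': 'GET'}],
    scope_types=['system', 'project'],
)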

** Affects: keystone
 Importance: High
 Status: Triaged


** Tags: policy

** Changed in: keystone
   Status: New => Triaged

** Changed in: keystone
   Importance: Undecided => High

** Tags added: policy

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1750615

Title:
  The v3 application credential API should account for different scopes

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  Keystone implemented scope_types for oslo.policy RuleDefault objects
  in the Queens release. In order to take full advantage of scope_types,
  keystone is going to have to evolve policy enforcement checks in the
  user API. This is documented in each patch with FIXMEs [0].

  The following acceptance criteria describes how the v3 application
  credentials API should behave with tokens from multiple scopes:

  GET
  /v3/users/{user_id}/application_credentials/{application_credential_id}

  - Someone with a system role assignment that passes the check string should 
be able to get any application credential in the system (system-scoped)
  - Someone with a domain role assignment that passes the check string should 
be able to get an application credential for a user and project within the 
domain they administer (domain-scoped)
  - Someone with a project role assignment that passes the check string should 
be able to get an application credential for a user that has a role assignment 
on the project they administer, the response should 

[Yahoo-eng-team] [Bug 1728948] Re: fullstack: test_connectivity fails due to dhclient crash

2018-02-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/545820
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=92959238a3e408e810ccd4f3d3d453a35afb5bba
Submitter: Zuul
Branch:master

commit 92959238a3e408e810ccd4f3d3d453a35afb5bba
Author: Sławek Kapłoński 
Date:   Mon Feb 19 13:28:57 2018 +0100

[Fullstack] Respawn dhclient process in case of error

When dhclient process is started as async process by fake machine resource,
it might happen that "None" will be returned as first line of output
from this process. This is treated as an error and dhclient is halted
immediately.
Because of that fake machine don't have configured IP address and
test fails.

This patch adds "respawn_timeout" value set to 5 seconds for dhclient
async process. When dhclient process is restarted it should works fine
and IP address should be then configured properly.

Change-Id: Ie056578abbe6e18c8415c6e61d755f2248a70541
Closes-Bug: #1728948


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1728948

Title:
  fullstack: test_connectivity fails due to dhclient crash

Status in neutron:
  Fix Released

Bug description:
  http://logs.openstack.org/66/515566/7/check/legacy-neutron-dsvm-
  fullstack/92fcb8e/logs/dsvm-fullstack-
  logs/TestOvsConnectivitySameNetwork.test_connectivity_GRE-l2pop-
  arp_responder,openflow-native_ovsdb-
  native_.txt.gz#_2017-10-31_06_52_37_750

  The test fails because the ping is not successful, probably because the
  instances don't get IPs:

  http://logs.openstack.org/66/515566/7/check/legacy-neutron-dsvm-
  fullstack/92fcb8e/logs/dsvm-fullstack-
  logs/TestOvsConnectivitySameNetwork.test_connectivity_GRE-l2pop-
  arp_responder,openflow-native_ovsdb-
  native_.txt.gz#_2017-10-31_06_52_18_204

  2017-10-31 06:52:18.204 25430 DEBUG neutron.agent.linux.async_process
  [-] Halting async process [ip netns exec test-42effae2-ba0c-45be-
  95e3-85e7a8b6315f dhclient -sf /opt/stack/new/neutron/.tox/dsvm-
  fullstack/bin/fullstack-dhclient-script --no-pid -d port6b08d0] in
  response to an error. _handle_process_error
  neutron/agent/linux/async_process.py:196

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1728948/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750618] [NEW] rebuild to same host with a different image results in erroneously doing a Claim

2018-02-20 Thread Chris Friesen
Public bug reported:

As of stable/pike if we do a rebuild-to-same-node with a new image, it
results in ComputeManager.rebuild_instance() being called with
"scheduled_node=" and "recreate=False".  This results in a new
Claim, which seems wrong since we're not changing the flavor and that
claim could fail if the compute node is already full.

The comments in ComputeManager.rebuild_instance() make it appear that it
expects both "recreate" and "scheduled_node" to be None for the rebuild-
to-same-host case otherwise it will do a Claim.  However, if we rebuild
to a different image it ends up going through the scheduler which means
that "scheduled_node" is not None.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: compute rebuild

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1750618

Title:
  rebuild to same host with a different image results in erroneously
  doing a Claim

Status in OpenStack Compute (nova):
  New

Bug description:
  As of stable/pike if we do a rebuild-to-same-node with a new image, it
  results in ComputeManager.rebuild_instance() being called with
  "scheduled_node=" and "recreate=False".  This results in a
  new Claim, which seems wrong since we're not changing the flavor and
  that claim could fail if the compute node is already full.

  The comments in ComputeManager.rebuild_instance() make it appear that
  it expects both "recreate" and "scheduled_node" to be None for the
  rebuild-to-same-host case otherwise it will do a Claim.  However, if
  we rebuild to a different image it ends up going through the scheduler
  which means that "scheduled_node" is not None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1750618/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750337] Re: Fullstack tests fail due to "block_until_boot" timeout

2018-02-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/546069
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=465ad6f3197b8591a401a9f0db2fabf6c70fdfce
Submitter: Zuul
Branch:master

commit 465ad6f3197b8591a401a9f0db2fabf6c70fdfce
Author: Sławek Kapłoński 
Date:   Tue Feb 20 09:24:38 2018 +0100

[Fullstack] Limit number of Neutron's api workers

Default number of api workers in Neutron is set to be equal to
number of CPU cores on host. That is fine on production environment
but on fullstack tests, where each test spawns own neutron-server
process it might cause host overload.

This patch limits number of api_workers to 2 which should be enough
for single test case and should make significantly lower load on host.

Change-Id: I1e970e35883d5240f0bd30eaea50313d93900580
Closes-Bug: #1750337


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750337

Title:
  Fullstack tests fail due to "block_until_boot" timeout

Status in neutron:
  Fix Released

Bug description:
  Sometimes in tests like 
"neutron.tests.fullstack.test_connectivity.TestOvsConnectivitySameNetworkOnOvsBridgeControllerStop.test_controller_timeout_does_not_break_connectivity_sigterm(VLANs,openflow-native)"
 there is a timeout error while waiting for all VMs to boot.
  Example of such error can be checked e.g. on 
http://logs.openstack.org/81/545681/1/check/neutron-fullstack/8285bf3/logs/testr_results.html.gz

  This example comes from a patch with some additional logging added to debug
the tests. What is strange is that the test environment makes the GET
/v2.0/ports/{port_id} call properly:
http://logs.openstack.org/81/545681/1/check/neutron-fullstack/8285bf3/logs/dsvm-fullstack-logs/TestOvsConnectivitySameNetworkOnOvsBridgeControllerStop.test_controller_timeout_does_not_break_connectivity_sigterm_VLANs,openflow-native_.txt.gz#_2018-02-18_20_34_47_950
  but this call is not logged in the neutron-server logs. The first GET call for
this port in the neutron-server logs is about 1m 30s later:
http://logs.openstack.org/81/545681/1/check/neutron-fullstack/8285bf3/logs/dsvm-fullstack-logs/TestOvsConnectivitySameNetworkOnOvsBridgeControllerStop.test_controller_timeout_does_not_break_connectivity_sigterm_VLANs,openflow-native_/neutron-server--2018-02-18--20-31-43-830810.txt.gz#_2018-02-18_20_36_18_516
 and this is already too late, as the test has reached its timeout and fails.

  The failed test run above is just an example. I have seen similar errors
  more than once.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1750337/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750618] Re: rebuild to same host with a different image results in erroneously doing a Claim

2018-02-20 Thread Matt Riedemann
This is a regression introduced with change
I11746d1ea996a0f18b7c54b4c9c21df58cc4714b which was backported all the
way to stable/newton upstream:

https://review.openstack.org/#/q/I11746d1ea996a0f18b7c54b4c9c21df58cc4714b

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
   Status: New => Triaged

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
   Status: New => Triaged

** Changed in: nova/pike
   Status: New => Triaged

** Changed in: nova/queens
   Status: New => Triaged

** Changed in: nova/ocata
   Importance: Undecided => High

** Changed in: nova/pike
   Importance: Undecided => High

** Changed in: nova/queens
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1750618

Title:
  rebuild to same host with a different image results in erroneously
  doing a Claim

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) ocata series:
  Triaged
Status in OpenStack Compute (nova) pike series:
  Triaged
Status in OpenStack Compute (nova) queens series:
  Triaged

Bug description:
  As of stable/pike if we do a rebuild-to-same-node with a new image, it
  results in ComputeManager.rebuild_instance() being called with
  "scheduled_node=" and "recreate=False".  This results in a
  new Claim, which seems wrong since we're not changing the flavor and
  that claim could fail if the compute node is already full.

  The comments in ComputeManager.rebuild_instance() make it appear that
  it expects both "recreate" and "scheduled_node" to be None for the
  rebuild-to-same-host case otherwise it will do a Claim.  However, if
  we rebuild to a different image it ends up going through the scheduler
  which means that "scheduled_node" is not None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1750618/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750623] [NEW] rebuild to same host with different image shouldn't check with placement

2018-02-20 Thread Chris Friesen
Public bug reported:

When doing a rebuild-to-same-host but with a different image, all we
really want to do is ensure that the image properties for the new image
are still valid for the current host.  Accordingly we need to go through
the scheduler (to run the image-related filters) but we don't want to do
anything related to resource consumption.

Currently the scheduler will contact placement to get a pre-filtered
list of compute nodes with sufficient free resources for the instance in
question.  If the instance is on a compute node that is close to full,
this may result in the current compute node being filtered out of the
list, which will result in a noValidHost exception.

Ideally, in the case where we are doing a rebuild-to-same-host we would
simply retrieve the information for the current compute node from the DB
instead of from placement, and then run the image-related scheduler
filters.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: rebuild scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1750623

Title:
  rebuild to same host with different image shouldn't check with
  placement

Status in OpenStack Compute (nova):
  New

Bug description:
  When doing a rebuild-to-same-host but with a different image, all we
  really want to do is ensure that the image properties for the new
  image are still valid for the current host.  Accordingly we need to go
  through the scheduler (to run the image-related filters) but we don't
  want to do anything related to resource consumption.

  Currently the scheduler will contact placement to get a pre-filtered
  list of compute nodes with sufficient free resources for the instance
  in question.  If the instance is on a compute node that is close to
  full, this may result in the current compute node being filtered out
  of the list, which will result in a noValidHost exception.

  Ideally, in the case where we are doing a rebuild-to-same-host we
  would simply retrieve the information for the current compute node
  from the DB instead of from placement, and then run the image-related
  scheduler filters.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1750623/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1747500] Re: SCSS entries specific to fwaas was not migrated from horizon

2018-02-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/541055
Committed: 
https://git.openstack.org/cgit/openstack/neutron-fwaas-dashboard/commit/?id=91d182f2d3fc300a55f4821d2091bc36701ff5d7
Submitter: Zuul
Branch:master

commit 91d182f2d3fc300a55f4821d2091bc36701ff5d7
Author: Akihiro Motoki 
Date:   Tue Feb 6 07:42:20 2018 +0900

Migrate SCSS entries from horizon

They are forgot to split out from horizon.

Change-Id: I2e0fc3c7871116616fe2db50c1ab5a0b2b8573a4
Closes-Bug: #1747500


** Changed in: neutron-fwaas-dashboard
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1747500

Title:
  SCSS entries specific to fwaas was not migrated from horizon

Status in OpenStack Dashboard (Horizon):
  New
Status in Neutron FWaaS dashboard:
  Fix Released

Bug description:
  Some FWaaS-specific SCSS entries remain in horizon. They need to
  be migrated to neutron-fwaas-dashboard.

  
https://github.com/openstack/horizon/blob/bf8c8242dab85020dfed8d639022f9ced00403e6/openstack_dashboard/static/dashboard/scss/components/_resource_topology.scss#L208-L213

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1747500/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1749797] Re: placement returns 503 when keystone is down

2018-02-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/546108
Committed: 
https://git.openstack.org/cgit/openstack/keystonemiddleware/commit/?id=d3352ff422db6ba6a5e7bd4f7220af0d97efd0ac
Submitter: Zuul
Branch:master

commit d3352ff422db6ba6a5e7bd4f7220af0d97efd0ac
Author: Chris Dent 
Date:   Tue Feb 20 10:31:49 2018 +

Identify the keystone service when raising 503

When the keystonemiddleware is used directly in the WSGI stack of an
application, the 503 that is raised when the keystone service errors
or cannot be reached needs to identify that keystone is the service
that has failed, otherwise it appears to the client that it is the
service they are trying to access is down, which is misleading.

This addresses the problem in the most straightforward way possible:
the exception that causes the 503 is given a message including the
word "Keystone".

The call method in BaseAuthTokenTestCase gains an
expected_body_string kwarg. If not None, the response body (as
a six.text_type) is compared with the value.

Change-Id: Idf211e7bc99139744af232f5ea3ecb4be41551ca
Closes-Bug: #1747655
Closes-Bug: #1749797
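
As a rough illustration of the change described above (hypothetical names,
not the actual keystonemiddleware code), the idea is simply that the error
turned into a 503 names keystone as the failing service:

```
import logging

LOG = logging.getLogger(__name__)

class ServiceError(Exception):
    """Turned into a 503 response further up the WSGI stack."""

def fetch_from_keystone(http_session, path):
    # http_session is an illustrative HTTP client object.
    try:
        return http_session.get(path)
    except IOError as exc:
        LOG.error('Unable to contact keystone: %s', exc)
        # Naming keystone here keeps the client from concluding that the
        # API it actually called is the service that is down.
        raise ServiceError(
            'The Keystone service is temporarily unavailable: %s' % exc)
```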


** Changed in: keystonemiddleware
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1749797

Title:
  placement returns 503 when keystone is down

Status in keystonemiddleware:
  Fix Released
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  See the logs here: http://logs.openstack.org/50/544750/8/check/ironic-
  grenade-dsvm-multinode-multitenant/5713fb8/logs/screen-placement-
  api.txt.gz#_Feb_15_17_58_22_463228

  This is during an upgrade while Keystone is down. Placement returns a
  503 because it cannot reach keystone.

  I'm not sure what the expected behavior should be, but a 503 feels
  wrong.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystonemiddleware/+bug/1749797/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750623] Re: rebuild to same host with different image shouldn't check with placement

2018-02-20 Thread Matt Riedemann
** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Medium

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Changed in: nova/queens
   Status: New => Confirmed

** Changed in: nova/pike
   Status: New => Confirmed

** Changed in: nova/queens
   Importance: Undecided => Medium

** Changed in: nova/pike
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1750623

Title:
  rebuild to same host with different image shouldn't check with
  placement

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  Confirmed

Bug description:
  When doing a rebuild-to-same-host but with a different image, all we
  really want to do is ensure that the image properties for the new
  image are still valid for the current host.  Accordingly we need to go
  through the scheduler (to run the image-related filters) but we don't
  want to do anything related to resource consumption.

  Currently the scheduler will contact placement to get a pre-filtered
  list of compute nodes with sufficient free resources for the instance
  in question.  If the instance is on a compute node that is close to
  full, this may result in the current compute node being filtered out
  of the list, which will result in a noValidHost exception.

  Ideally, in the case where we are doing a rebuild-to-same-host we
  would simply retrieve the information for the current compute node
  from the DB instead of from placement, and then run the image-related
  scheduler filters.
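
  A rough sketch of that alternative flow, with the host state assumed to be
  built from the compute node record in the DB rather than from a placement
  query; this is illustrative, not the actual nova scheduler code:

```
from nova import exception
from nova.scheduler.filters.image_props_filter import ImagePropertiesFilter
from nova.scheduler.filters.isolated_hosts_filter import IsolatedHostsFilter

def validate_same_host_rebuild(host_state, request_spec):
    """Re-check only the image-related filters against the current host."""
    for flt in (ImagePropertiesFilter(), IsolatedHostsFilter()):
        if not flt.host_passes(host_state, request_spec):
            raise exception.NoValidHost(
                reason='new image not supported on the current host')
    return host_state.host
```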

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1750623/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404867] Re: Volume remains in-use status, if instance booted from volume is deleted in error state

2018-02-20 Thread Matt Riedemann
** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: Matt Riedemann (mriedem) => melanie witt (melwitt)

** Changed in: nova/pike
   Status: New => In Progress

** Changed in: nova/queens
   Status: New => In Progress

** Changed in: nova/pike
   Importance: Undecided => Medium

** Changed in: nova/queens
   Importance: Undecided => Medium

** Changed in: nova/pike
 Assignee: (unassigned) => Mohammed Naser (mnaser)

** Changed in: nova/queens
 Assignee: (unassigned) => Mohammed Naser (mnaser)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404867

Title:
  Volume remains in-use status, if instance booted from volume is
  deleted in error state

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) pike series:
  In Progress
Status in OpenStack Compute (nova) queens series:
  In Progress

Bug description:
  If an instance booted from a volume goes into the error state for some
  reason, the volume from which the instance was booted remains in the
  in-use state even after the instance is deleted.
  IMO, the volume should be detached so that it can be used to boot another
  instance.

  Steps to reproduce:

  1. Log in to Horizon, create a new volume.
  2. Create an Instance using newly created volume.
  3. Verify instance is in active state.
  $ source devstack/openrc demo demo
  $ nova list
  
+--+--+++-+--+
  | ID   | Name | Status | Task State | Power 
State | Networks |
  
+--+--+++-+--+
  | dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | ACTIVE | -  | Running 
| private=10.0.0.3 |
  
+--+--+++-+--+

  Note:
  Use the shelve-unshelve API to see the instance go into the error state.
  Unshelving a volume-backed instance does not work and sets the instance state
  to error (ref: https://bugs.launchpad.net/nova/+bug/1404801)

  4. Shelve the instance
  $ nova shelve 

  5. Verify the status is SHELVED_OFFLOADED.
  $ nova list
  
+--+--+---++-+--+
  | ID   | Name | Status| Task 
State | Power State | Networks |
  
+--+--+---++-+--+
  | dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | SHELVED_OFFLOADED | - 
 | Shutdown| private=10.0.0.3 |
  
+--+--+---++-+--+

  6. Unshelve the instance.
  $ nova unshelve 

  5. Verify the instance is in Error state.
  $ nova list
  
+--+--+---++-+--+
  | ID   | Name | Status| Task 
State | Power State | Networks |
  
+--+--+---++-+--+
  | dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | Error | 
unshelving | Spawning| private=10.0.0.3 |
  
+--+--+---++-+--+

  6. Delete the instance using Horizon.

  7. Verify that volume still in in-use state
  $ cinder list
  
+--++--+--+-+--+--+
  |  ID  | Status | Name | Size | Volume Type | 
Bootable | Attached to  |
  
+--++--+--+-+--+--+
  | 4aeefd25-10aa-42c2-9a2d-1c89a95b4d4f | in-use | test |  1   | lvmdriver-1 | 
  true   | 8f7bdc24-1891-4bbb-8f0c-732b9cbecae7 |
  
+--++--+--+-+--+--+

  8. In Horizon, volume "Attached To" information is displayed as
  "Attached to None on /dev/vda".

  9. User is not able to delete this volume, or attached it to another
  instance as it is still in use.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1404867/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1750660] [NEW] The v3 project API should account for different scopes

2018-02-20 Thread Lance Bragstad
Public bug reported:

Keystone implemented scope_types for oslo.policy RuleDefault objects in
the Queens release. In order to take full advantage of scope_types,
keystone is going to have to evolve policy enforcement checks in the
user API. This is documented in each patch with FIXMEs [0].

The following acceptance criteria describes how the v3 project API
should behave with tokens from multiple scopes. WARNING: It also assumes
that project tags are accessible to anyone with authorization to the
project. If system administrators, or operators, intend to tag projects
for administrative use (e.g. billing tags for accounting purposes),
those tags will be accessible to users with access to the project:

GET /v3/projects/{project_id}

- Someone with a system role assignment that passes the check string should be 
able to get any project in the deployment (system-scoped)
- Someone with a domain role assignment that passes the check string should be 
able to get any project within the domain they administer (domain-scoped)
- Someone with a project role assignment that passes the check string should be 
able to get the project or any child project of the project they have a role 
assignment on (project-scoped)

GET /v3/projects

- Someone with a system role assignment that passes the check string should be 
able to list all projects in the deployment (system-scoped)
- Someone with a domain role assignment that passes the check string should be 
able to list all projects within the domain they administer (domain-scoped)
- Someone with a project role assignment that passes the check string should be 
able to list all child projects of the project they administer (project-scoped)?

GET /v3/users/{user_id}/projects

- Someone with a system role assignment that passes the check string should be 
able to list all projects for any user in the deployment (system-scoped)
- Someone with a domain role assignment that passes the check string should be 
able to list projects for a user within the domain they administer, the 
response will only include projects in the domain they administer 
(domain-scoped)

POST /v3/projects

- Someone with a system role assignment that passes the check string should be 
able to create projects anywhere in the deployment (system-scoped)
- Someone with a domain role assignment that passes the check string should 
only be able to create projects within the domain they administer 
(domain-scoped)
- Someone with a project role assignment that passes the check string should 
only be able to create child projects of the project they administer 
(project-scoped)

PATCH /v3/projects/{project_id}

- Someone with a system role assignment that passes the check string should be 
able to update any project in the deployment (system-scoped)
- Someone with a domain role assignment that passes the check string should 
only be able to update projects within the domain they administer 
(domain-scoped)
- Someone with a project role assignment that passes the check string should 
only be able to update projects they administer or child projects of the ones 
they administer (project-scoped)?

DELETE /v3/projects/{project_id}

- Someone with a system role assignment that passes the check string should be 
able to delete any project in the deployment
- Someone with a domain role assignment that passes the check string should 
only be able to delete a project within the domain they administer 
(domain-scoped)
- Someone with a project role assignment that passes the check string should 
only be able to delete a child project of the project they administer 
(project-scoped)? 

GET /v3/projects/{project_id}/tags

- Someone with a system role assignment that passes the check string should be 
able to list tags for any project in the deployment (system-scoped)
- Someone with a domain role assignment that passes the check string should 
only be able to list tags for a project within the domain they administer 
(domain-scoped)
- Someone with a project role assignment that passes the check string should be 
able to list project tags associated to the project they have authorization on 
(project-scoped)

GET /v3/projects/{project_id}/tags/{value}

- Someone with a system role assignment that passes the check string should be 
able to get a project tag for any project in the deployment (system-scoped)
- Someone with a domain role assignment that passes the check string should 
only be able to get a project tag for a project within the domain they 
administer (domain-scoped)
- Someone with a project role assignment that passes the check string should 
only be able to get tags for a project, or a child project, they have 
authorization on.

PUT /v3/projects/{project_id}/tags

- Someone with a system role assignment that passes the check string should be 
able to update tags for any project in the deployment (system-scoped)
- Someone with a domain role assignment that passes the check string should 
only be able to update tags for pr

[Yahoo-eng-team] [Bug 1750666] [NEW] Deleting an instance before scheduling with BFV fails to detach volume

2018-02-20 Thread Mohammed Naser
Public bug reported:

If you try to boot an instance and delete it early before scheduling,
the '_delete_while_booting' codepath hits
`_attempt_delete_of_buildrequest` which tries to remove the block device
mappings.

However, if the cloud contains compute nodes before Pike, no block
device mappings will be present in the database (because they are only
saved if using the new attachment flow), which means the attachment IDs
are empty and the volume delete fails:

2018-02-20 16:02:25,063 WARNING [nova.compute.api] Ignoring volume
cleanup failure due to Object action obj_load_attr failed because:
attribute attachment_id not lazy-loadable

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1750666

Title:
  Deleting an instance before scheduling with BFV fails to detach volume

Status in OpenStack Compute (nova):
  New

Bug description:
  If you try to boot an instance and delete it early before scheduling,
  the '_delete_while_booting' codepath hits
  `_attempt_delete_of_buildrequest` which tries to remove the block
  device mappings.

  However, if the cloud contains compute nodes before Pike, no block
  device mappings will be present in the database (because they are only
  saved if using the new attachment flow), which means the attachment
  IDs are empty and the volume delete fails:

  2018-02-20 16:02:25,063 WARNING [nova.compute.api] Ignoring volume
  cleanup failure due to Object action obj_load_attr failed because:
  attribute attachment_id not lazy-loadable
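
  A small sketch of the kind of guard that would avoid this failure; the
  function and the volume_api parameter are hypothetical, not the actual
  `_attempt_delete_of_buildrequest` code:

```
def cleanup_volume_for_deleted_build_request(volume_api, context, bdm):
    attachment_id = getattr(bdm, 'attachment_id', None)
    if attachment_id:
        # New-style (Pike+) attachment flow: the BDM recorded an attachment.
        volume_api.attachment_delete(context, attachment_id)
    else:
        # Older computes never saved an attachment_id, so fall back to the
        # legacy flow instead of lazy-loading an attribute that is not there.
        volume_api.unreserve_volume(context, bdm.volume_id)
```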

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1750666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750672] [NEW] failure to generate Nova's doc in Python 3.6

2018-02-20 Thread Thomas Goirand
Public bug reported:

When generating the Sphinx doc in Debian Sid with Python 3.6 for the
Queens RC1 release of Nova, I get the stack dump below, though it passes
under Python 2.7. A fix would be more than welcome, because I'm removing
all traces of Python 2.7, including the Sphinx stuff and all modules.

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/sphinx/cmdline.py", line 306, in main
app.build(opts.force_all, filenames)
  File "/usr/lib/python3/dist-packages/sphinx/application.py", line 339, in 
build
self.builder.build_update()
  File "/usr/lib/python3/dist-packages/sphinx/builders/__init__.py", line 329, 
in build_update
'out of date' % len(to_build))
  File "/usr/lib/python3/dist-packages/sphinx/builders/__init__.py", line 342, 
in build
updated_docnames = set(self.env.update(self.config, self.srcdir, 
self.doctreedir))
  File "/usr/lib/python3/dist-packages/sphinx/environment/__init__.py", line 
601, in update
self._read_serial(docnames, self.app)
  File "/usr/lib/python3/dist-packages/sphinx/environment/__init__.py", line 
621, in _read_serial
self.read_doc(docname, app)
  File "/usr/lib/python3/dist-packages/sphinx/environment/__init__.py", line 
758, in read_doc
pub.publish()
  File "/usr/lib/python3/dist-packages/docutils/core.py", line 217, in publish
self.settings)
  File "/usr/lib/python3/dist-packages/sphinx/io.py", line 74, in read
self.parse()
  File "/usr/lib/python3/dist-packages/docutils/readers/__init__.py", line 78, 
in parse
self.parser.parse(self.input, document)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/__init__.py", line 
191, in parse
self.statemachine.run(inputlines, document, inliner=self.inliner)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
172, in run
input_source=document['source'])
  File "/usr/lib/python3/dist-packages/docutils/statemachine.py", line 239, in 
run
context, state, transitions)
  File "/usr/lib/python3/dist-packages/docutils/statemachine.py", line 460, in 
check_line
return method(match, context, next_state)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
2754, in underline
self.section(title, source, style, lineno - 1, messages)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
328, in section
self.new_subsection(title, lineno, messages)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
396, in new_subsection
node=section_node, match_titles=True)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
283, in nested_parse
node=node, match_titles=match_titles)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
197, in run
results = StateMachineWS.run(self, input_lines, input_offset)
  File "/usr/lib/python3/dist-packages/docutils/statemachine.py", line 239, in 
run
context, state, transitions)
  File "/usr/lib/python3/dist-packages/docutils/statemachine.py", line 460, in 
check_line
return method(match, context, next_state)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
2754, in underline
self.section(title, source, style, lineno - 1, messages)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
328, in section
self.new_subsection(title, lineno, messages)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
396, in new_subsection
node=section_node, match_titles=True)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
283, in nested_parse
node=node, match_titles=match_titles)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
197, in run
results = StateMachineWS.run(self, input_lines, input_offset)
  File "/usr/lib/python3/dist-packages/docutils/statemachine.py", line 239, in 
run
context, state, transitions)
  File "/usr/lib/python3/dist-packages/docutils/statemachine.py", line 460, in 
check_line
return method(match, context, next_state)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
2754, in underline
self.section(title, source, style, lineno - 1, messages)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
328, in section
self.new_subsection(title, lineno, messages)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
396, in new_subsection
node=section_node, match_titles=True)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
283, in nested_parse
node=node, match_titles=match_titles)
  File "/usr/lib/python3/dist-packages/docutils/parsers/rst/states.py", line 
197, in run
results = StateMachineWS.run(self, input_lines, input_offset)
  File "/usr/lib/python3/dist-packages/docutils/statemachine.py", line 239, in 
run
context, state, transitions)
  File "/usr/lib/python3/dist-

[Yahoo-eng-team] [Bug 1750669] [NEW] The v3 grant API should account for different scopes

2018-02-20 Thread Lance Bragstad
Public bug reported:

Keystone implemented scope_types for oslo.policy RuleDefault objects in
the Queens release. In order to take full advantage of scope_types,
keystone is going to have to evolve policy enforcement checks in the
user API. This is documented in each patch with FIXMEs [0].

The following acceptance criteria describes how the v3 grant API should
behave with tokens from multiple scopes.

GET /target/{target_id}/actor/{actor_id}/roles/{role_id}

- Someone with a system role assignment that passes the check string should be 
able to check any grant in the deployment (system-scoped)
- Someone with a domain role assignment that passes the check string should 
only be able to make checks against users, domains, and projects they 
administer (domain-scoped)

GET /targets/{target_id}/actors/{actor_id}/roles

- Someone with a system role assignment that passes the check string should be 
able to list all grants in the deployment, regardless of the target or actor 
(system-scoped)
- Someone with a domain role assignment that passes the check string should 
only be able to list grants against users and projects within the domain they 
administer (domain-scoped)

PUT /target/{target_id}/actor/{actor_id}/roles/{role_id}

- Someone with a system role assignment that passes the check string should be 
able to create grants, regardless of the actor or target (system-scoped)
- Someone with a domain role assignment that passes the check string should 
only be able to create grants with users and projects within the domain they 
administer (domain-scoped)

DELETE /target/{target_id}/actor/{actor_id}/roles/{role_id}

- Someone with a system role assignment that passes the check string should be 
able to remove grants regardless of the actor or target (system-scoped)
- Someone with a domain role assignment that passes the check string should 
only be able to remove grants from users and projects within the domain they 
administer (domain-scoped)

[0]
https://github.com/openstack/keystone/blob/68df7bf1f3b3d6ab3f691f59f1ce6de6b0b1deab/keystone/common/policies/grant.py#L62-L67
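
For illustration, this is what a policy rule carrying scope_types looks like,
as referenced by the FIXMEs above; the rule name and check string here are
placeholders, not keystone's final defaults:

```
from oslo_policy import policy

check_grant = policy.DocumentedRuleDefault(
    name='identity:check_grant',
    check_str='role:reader and system_scope:all',
    description='Check a grant between a target and an actor.',
    operations=[{'path': ('/v3/projects/{project_id}/users/{user_id}'
                          '/roles/{role_id}'),
                 'method': 'HEAD'}],
    # Scopes that should eventually be honoured when enforce_scope is on.
    scope_types=['system', 'domain'],
)
```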

** Affects: keystone
 Importance: High
 Status: Triaged


** Tags: policy

** Changed in: keystone
   Status: New => Triaged

** Changed in: keystone
   Importance: Undecided => High

** Description changed:

  Keystone implemented scope_types for oslo.policy RuleDefault objects in
  the Queens release. In order to take full advantage of scope_types,
  keystone is going to have to evolve policy enforcement checks in the
  user API. This is documented in each patch with FIXMEs [0].
  
  The following acceptance criteria describes how the v3 assignment API
  should behave with tokens from multiple scopes.
  
  GET /target/{target_id}/actor/{actor_id}/roles/{role_id}
  
  - Someone with a system role assignment that passes the check string should 
be able to check any grant in the deployment (system-scoped)
  - Someone with a domain role assignment that passes the check string should 
only be able to make checks against users, domains, and projects they 
administer (domain-scoped)
  
  GET /targets/{target_id}/actors/{actor_id}/roles
  
  - Someone with a system role assignment that passes the check string should 
be able to list all grants in the deployment, regardless of the target or actor 
(system-scoped)
  - Someone with a domain role assignment that passes the check string should 
only be able to list grants against users and projects within the domain they 
administer (domain-scoped)
  
  PUT /target/{target_id}/actor/{actor_id}/roles/{role_id}
  
  - Someone with a system role assignment that passes the check string should 
be able to create grants, regardless of the actor or target (system-scoped)
  - Someone with a domain role assignment that passes the check string should 
only be able to create grants with users and projects within the domain they 
administer (domain-scoped)
  
  DELETE /target/{target_id}/actor/{actor_id}/roles/{role_id}
  
  - Someone with a system role assignment that passes the check string should 
be able to remove grants regardless of the actor or target (system-scoped)
  - Someone with a domain role assignment that passes the check string should 
only be able to remove grants from users and projects within the domain they 
administer (domain-scoped)
  
  [0]
- 
https://github.com/openstack/keystone/blob/68df7bf1f3b3d6ab3f691f59f1ce6de6b0b1deab/keystone/common/policies/project.py
+ 
https://github.com/openstack/keystone/blob/68df7bf1f3b3d6ab3f691f59f1ce6de6b0b1deab/keystone/common/policies/grant.py#L62-L67

** Summary changed:

- The v3 assignment API should account for different scopes
+ The v3 grant API should account for different scopes

** Description changed:

  Keystone implemented scope_types for oslo.policy RuleDefault objects in
  the Queens release. In order to take full advantage of scope_types,
  keystone is going to have to evolve policy enforcement checks in the
  user API. This

[Yahoo-eng-team] [Bug 1750673] [NEW] The v3 role assignment API should account for different scopes

2018-02-20 Thread Lance Bragstad
Public bug reported:

Keystone implemented scope_types for oslo.policy RuleDefault objects in
the Queens release. In order to take full advantage of scope_types,
keystone is going to have to evolve policy enforcement checks in the
user API. This is documented in each patch with FIXMEs [0].

The following acceptance criteria describes how the v3 role assignment
API should behave with tokens from multiple scopes.

GET /v3/role_assignments

- Someone with a system role assignment that passes the check string should be 
able to list role assignments for any combination of entities in the deployment 
(system-scoped)
- Someone with a domain role assignment that passes the check string should 
only be able to list role assignment for users and group in the domain they 
administer, or projects within that domain (domain-scoped)
- Someone with a project role assignment that passes the check string should 
only be able to list role assignments for users and groups that have 
assignments on the project they administer (project-scoped)

[0]
https://github.com/openstack/keystone/blob/68df7bf1f3b3d6ab3f691f59f1ce6de6b0b1deab/keystone/common/policies/role_assignment.py#L21-L28

** Affects: keystone
 Importance: High
 Status: Triaged


** Tags: policy

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1750673

Title:
  The v3 role assignment API should account for different scopes

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  Keystone implemented scope_types for oslo.policy RuleDefault objects
  in the Queens release. In order to take full advantage of scope_types,
  keystone is going to have to evolve policy enforcement checks in the
  user API. This is documented in each patch with FIXMEs [0].

  The following acceptance criteria describes how the v3 role assignment
  API should behave with tokens from multiple scopes.

  GET /v3/role_assignments

  - Someone with a system role assignment that passes the check string should 
be able to list role assignments for any combination of entities in the 
deployment (system-scoped)
  - Someone with a domain role assignment that passes the check string should 
only be able to list role assignment for users and group in the domain they 
administer, or projects within that domain (domain-scoped)
  - Someone with a project role assignment that passes the check string should 
only be able to list role assignments for users and groups that have 
assignments on the project they administer (project-scoped)

  [0]
  
https://github.com/openstack/keystone/blob/68df7bf1f3b3d6ab3f691f59f1ce6de6b0b1deab/keystone/common/policies/role_assignment.py#L21-L28

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1750673/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750678] [NEW] The ec2 credential API should account for different scopes

2018-02-20 Thread Lance Bragstad
Public bug reported:

Keystone implemented scope_types for oslo.policy RuleDefault objects in
the Queens release. In order to take full advantage of scope_types,
keystone is going to have to evolve policy enforcement checks in the
user API. This is documented in each patch with FIXMEs [0].

The following acceptance criteria describes how the v3 ec2 credential
API should behave with tokens from multiple scopes:

GET /v3/users/{user_id}/credentials/OS-EC2/{credential_id}

- Someone with a system role assignment that passes the check string should be 
able to view credentials for any user in the deployment (system-scoped)
- Someone with a valid token should only be able to view credentials they've 
created

GET /v3/users/{user_id}/credentials/OS-EC2/

- Someone with a system role assignment that passes the check string should be 
able to list all credentials in the deployment (system-scoped)
- Someone with a valid token should only be able to list credentials associated 
to their user

POST /v3/users/{user_id}/credentials/OS-EC2/

- Someone with a system role assignment that passes the check string should be 
able to create ec2 credentials for other users (system-scoped)
- Someone with a valid token should be able to create ec2 credentials for 
themselves

DELETE /v3/users/{user_id}/credentials/OS-EC2/{credential_id}

- Someone with a system role assignment that passes the check string should be 
able to delete any ec2 credential in the deployment (system-scoped)
- Someone with a valid token should only be able to delete credentials 
associated to their user account

[0]
https://github.com/openstack/keystone/blob/68df7bf1f3b3d6ab3f691f59f1ce6de6b0b1deab/keystone/common/policies/ec2_credential.py#L21-L31

** Affects: keystone
 Importance: High
 Status: Triaged


** Tags: policy

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1750678

Title:
  The ec2 credential API should account for different scopes

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  Keystone implemented scope_types for oslo.policy RuleDefault objects
  in the Queens release. In order to take full advantage of scope_types,
  keystone is going to have to evolve policy enforcement checks in the
  user API. This is documented in each patch with FIXMEs [0].

  The following acceptance criteria describes how the v3 ec2 credential
  API should behave with tokens from multiple scopes:

  GET /v3/users/{user_id}/credentials/OS-EC2/{credential_id}

  - Someone with a system role assignment that passes the check string should 
be able to view credentials for any user in the deployment (system-scoped)
  - Someone with a valid token should only be able to view credentials they've 
created

  GET /v3/users/{user_id}/credentials/OS-EC2/

  - Someone with a system role assignment that passes the check string should 
be able to list all credentials in the deployment (system-scoped)
  - Someone with a valid token should only be able to list credentials 
associated to their user

  POST /v3/users/{user_id}/credentials/OS-EC2/

  - Someone with a system role assignment that passes the check string should 
be able to create ec2 credentials for other users (system-scoped)
  - Someone with a valid token should be able to create ec2 credentials for 
themselves

  DELETE /v3/users/{user_id}/credentials/OS-EC2/{credential_id}

  - Someone with a system role assignment that passes the check string should 
be able to delete any ec2 credential in the deployment (system-scoped)
  - Someone with a valid token should only be able to delete credentials 
associated to their user account

  [0]
  
https://github.com/openstack/keystone/blob/68df7bf1f3b3d6ab3f691f59f1ce6de6b0b1deab/keystone/common/policies/ec2_credential.py#L21-L31

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1750678/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750676] [NEW] The v3 authentication API should account for different scopes

2018-02-20 Thread Lance Bragstad
Public bug reported:

Keystone implemented scope_types for oslo.policy RuleDefault objects in
the Queens release. In order to take full advantage of scope_types,
keystone is going to have to evolve policy enforcement checks in the
user API. This is documented in each patch with FIXMEs [0].

The following acceptance criteria describes how the v3 authentication
API should behave with tokens from multiple scopes:

HEAD /v3/auth/tokens

- Someone with a system role assignment that passes the check string should be 
able to check any token issued from a deployment's keystone (system-scoped)
- Someone with a domain role assignment that passes the check string should 
only be able to check tokens for users that are members of the domain they 
administer (domain-scoped)
- Someone with a project role assignment that passes the check string should 
only be able to check tokens that are scoped to the project they administer 
(project-scoped)
- Someone with a token should be able to validate the token with itself

GET /v3/auth/tokens

- Someone with a system role assignment that passes the check string should be 
able to validate any token issued from a deployment's keystone (system-scoped)
- Someone with a domain role assignment that passes the check string should 
only be able to validate tokens for users that are members of the domain they 
administer (domain-scoped)
- Someone with a project role assignment that passes the check string should 
only be able to validate tokens that are scoped to the project they administer 
(project-scoped)
- Someone with a token should be able to validate the token with itself

DELETE /v3/auth/tokens

- Someone with a system role assignment that passes the check string should be 
able to revoke any token issued by the deployment's keystone (system-scoped)
- Someone with a domain role assignment that passes the check string should 
only be able to revoke tokens scoped to the domain they administer, or projects 
within that domain (domain-scoped)
- Someone with a project role assignment that passes the check string should 
only be able to revoke tokens scoped to the project they administer 
(project-scoped)
- Any user should be able to invalidate their own token by setting it as both 
the X-Auth-Token and the X-Subject-Token header

[0]
https://github.com/openstack/keystone/blob/68df7bf1f3b3d6ab3f691f59f1ce6de6b0b1deab/keystone/common/policies/token.py#L21-L32
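
As a concrete illustration of the self-validation case above (the same token
used as both the auth token and the subject token), a minimal request sketch;
the endpoint value is a placeholder:

```
import requests

KEYSTONE = 'http://controller:5000'   # placeholder endpoint
TOKEN = 'gAAAA...'                    # an already-issued token

resp = requests.get(
    KEYSTONE + '/v3/auth/tokens',
    headers={'X-Auth-Token': TOKEN,       # the token doing the asking
             'X-Subject-Token': TOKEN})   # the token being validated
print(resp.status_code)  # 200 if the token can validate itself
```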

** Affects: keystone
 Importance: High
 Status: Triaged


** Tags: policy

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1750676

Title:
  The v3 authentication API should account for different scopes

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  Keystone implemented scope_types for oslo.policy RuleDefault objects
  in the Queens release. In order to take full advantage of scope_types,
  keystone is going to have to evolve policy enforcement checks in the
  user API. This is documented in each patch with FIXMEs [0].

  The following acceptance criteria describes how the v3 authentication
  API should behave with tokens from multiple scopes:

  HEAD /v3/auth/tokens

  - Someone with a system role assignment that passes the check string should 
be able to check any token issued from a deployment's keystone (system-scoped)
  - Someone with a domain role assignment that passes the check string should 
only be able to check tokens for users that are members of the domain they 
administer (domain-scoped)
  - Someone with a project role assignment that passes the check string should 
only be able to check tokens that are scoped to the project they administer 
(project-scoped)
  - Someone with a token should be able to validate the token with itself

  GET /v3/auth/tokens

  - Someone with a system role assignment that passes the check string should 
be able to validate any token issued from a deployment's keystone 
(system-scoped)
  - Someone with a domain role assignment that passes the check string should 
only be able to validate tokens for users that are members of the domain they 
administer (domain-scoped)
  - Someone with a project role assignment that passes the check string should 
only be able to validate tokens that are scoped to the project they administer 
(project-scoped)
  - Someone with a token should be able to validate the token with itself

  DELETE /v3/auth/tokens

  - Someone with a system role assignment that passes the check string should 
be able to revoke any token issued by the deployment's keystone (system-scoped)
  - Someone with a domain role assignment that passes the check string should 
only be able to revoke tokens scoped to the domain they administer, or projects 
within that domain (domain-scoped)
  - Someone with a project role assignment that passes the check string should 
only be able to revoke tokens scoped to the project they administer 

[Yahoo-eng-team] [Bug 1750680] [NEW] Nova returns a traceback when it's unable to detach a volume still in use

2018-02-20 Thread David Vallee Delisle
Public bug reported:

Description
===
If libvirt is unable to detach a volume because it's still in-use by the guest 
(either mounted and/or file opened), nova returns a traceback.

Steps to reproduce
==

* Create an instance with volume attached using heat
* Make sure there's activity on the volume
* Delete stack

Expected result
===
We would expect nova not to return a traceback but a clean log message about 
its inability to detach the volume. Ideally, that exception would also be 
raised back to either cinder or heat.
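
A hypothetical sketch of that reporting behaviour (illustrative names, not
the actual nova detach code path):

```
import logging

LOG = logging.getLogger(__name__)

class DeviceDetachFailed(Exception):
    """Stand-in for nova.exception.DeviceDetachFailed."""

def detach_volume(do_detach, device):
    # do_detach is an illustrative callable wrapping the libvirt retry loop.
    try:
        do_detach()
    except DeviceDetachFailed as exc:
        # One readable warning instead of a looping-call traceback, and the
        # exception is re-raised so the caller (cinder/heat) can see it.
        LOG.warning('Could not detach %s, still in use by the guest: %s',
                    device, exc)
        raise
```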

Actual result
=
```
21495 2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall [-] Dynamic 
interval looping call 'oslo_service.loopingcall._func' failed: 
DeviceDetachFailed: Device detach failed for vdf: Unable to detach from guest 
transient domain.
21496 2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall Traceback (most 
recent call last):
21497 2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 137, in 
_run_loop
21498 2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall result = 
func(*self.args, **self.kw)
21499 2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 415, in 
_func
21500 2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall return 
self._sleep_time
21501 2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
21502 2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall 
self.force_reraise()
21503 2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
21504 2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall 
six.reraise(self.type_, self.value, self.tb)
21505 2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 394, in 
_func
21506 2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall result = 
f(*args, **kwargs)
21507 2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 462, in 
_do_wait_and_retry_detach
21508 2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall 
device=alternative_device_name, reason=reason)
21509 2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall 
DeviceDetachFailed: Device detach failed for vdf: Unable to detach from guest 
transient domain.
```

Environment
===
* Red Hat Openstack 12
```
libvirt-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:28:48 
2018
libvirt-client-3.2.0-14.el7_4.7.x86_64  Fri Jan 26 15:26:07 
2018
libvirt-daemon-3.2.0-14.el7_4.7.x86_64  Fri Jan 26 15:26:02 
2018
libvirt-daemon-config-network-3.2.0-14.el7_4.7.x86_64   Fri Jan 26 15:26:06 
2018
libvirt-daemon-config-nwfilter-3.2.0-14.el7_4.7.x86_64  Fri Jan 26 15:26:05 
2018
libvirt-daemon-driver-interface-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:05 
2018
libvirt-daemon-driver-lxc-3.2.0-14.el7_4.7.x86_64   Fri Jan 26 15:26:06 
2018
libvirt-daemon-driver-network-3.2.0-14.el7_4.7.x86_64   Fri Jan 26 15:26:02 
2018
libvirt-daemon-driver-nodedev-3.2.0-14.el7_4.7.x86_64   Fri Jan 26 15:26:05 
2018
libvirt-daemon-driver-nwfilter-3.2.0-14.el7_4.7.x86_64  Fri Jan 26 15:26:04 
2018
libvirt-daemon-driver-qemu-3.2.0-14.el7_4.7.x86_64  Fri Jan 26 15:27:25 
2018
libvirt-daemon-driver-secret-3.2.0-14.el7_4.7.x86_64Fri Jan 26 15:26:04 
2018
libvirt-daemon-driver-storage-3.2.0-14.el7_4.7.x86_64   Fri Jan 26 15:27:29 
2018
libvirt-daemon-driver-storage-core-3.2.0-14.el7_4.7.x86_64  Fri Jan 26 15:27:25 
2018
libvirt-daemon-driver-storage-disk-3.2.0-14.el7_4.7.x86_64  Fri Jan 26 15:27:28 
2018
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 
15:27:29 2018
libvirt-daemon-driver-storage-iscsi-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:28 
2018
libvirt-daemon-driver-storage-logical-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 
15:27:27 2018
libvirt-daemon-driver-storage-mpath-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:27 
2018
libvirt-daemon-driver-storage-rbd-3.2.0-14.el7_4.7.x86_64   Fri Jan 26 15:27:27 
2018
libvirt-daemon-driver-storage-scsi-3.2.0-14.el7_4.7.x86_64  Fri Jan 26 15:27:27 
2018
libvirt-daemon-kvm-3.2.0-14.el7_4.7.x86_64  Fri Jan 26 15:27:29 
2018
libvirt-libs-3.2.0-14.el7_4.7.x86_64Fri Jan 26 15:26:00 
2018
libvirt-python-3.2.0-3.el7_4.1.x86_64   Fri Jan 26 15:26:04 
2018
openstack-nova-api-16.0.2-9.el7ost.noarch   Fri Jan 26 15:28:29 
2018
openstack-nova-common-16.0.2-9.el7ost.noarchFri Jan 26 15:28:20 
2018
openstack-nova-compute-16.0.2-9.el7ost.n

[Yahoo-eng-team] [Bug 1749747] Re: Queens promotion - error on ControllerDeployment_Step3.0 - error running glance_api_db_sync

2018-02-20 Thread Alan Pevec
*** This bug is a duplicate of bug 1749640 ***
https://bugs.launchpad.net/bugs/1749640

** This bug is no longer a duplicate of bug 1749641
   Overcloud deployment failing in promotion jobs at glance-manage db_sync
** This bug has been marked a duplicate of bug 1749640
   db sync fails for mysql while adding triggers

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1749747

Title:
  Queens promotion - error on ControllerDeployment_Step3.0 - error
  running glance_api_db_sync

Status in Glance:
  New
Status in tripleo:
  Triaged

Bug description:
  Multinode jobs are failing in the Queens promotion pipeline with the
  following error:

  overcloud.AllNodesDeploySteps.ControllerDeployment_Step3.0:
    resource_type: OS::Heat::StructuredDeployment
    physical_resource_id: f8c4b43d-b5b0-48ea-a124-ecc2a10be1be
    status: CREATE_FAILED
    status_reason: |
  Error: resources[0]: Deployment to server failed: deploy_status_code : 
Deployment exited with non-zero status code: 2

  .

  Error running ['docker', 'run', '--name', 'glance_api_db_sync', '--
  label', 'config_id=tripleo_step3', '--label',
  'container_name=glance_api_db_sync', '--label', 'managed_by=paunch', '
  --label', 'config_data={\"image\": \"192.168.24.1:8787/queens/centos-
  binary-glance-api:813c7290c3a8d77eef397526d1ea6dc108943b0d_90604cd8\",
  \"environment\": ...],

  ...
   "DBError: (pymysql.err.InternalError) (1419, u'You do not have the SUPER 
privilege and binary logging is enabled (you *might* want to use the less safe 
log_bin_trust_function_creators variable)') ...

  see full logs at:

  https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-
  centos-7-multinode-1ctlr-
  
featureset017-queens/3977a1f/undercloud/home/jenkins/failed_deployment_list.log.txt.gz

  and

  https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-
  centos-7-multinode-1ctlr-
  
featureset010-queens/904f18d/undercloud/home/jenkins/overcloud_deploy.log.txt.gz#_2018-02-15_15_57_01

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1749747/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750701] [NEW] NUMATopologyFilter doesn't exclude Hyper-V when cpu pinning specified.

2018-02-20 Thread Tetsuro Nakamura
Public bug reported:

Description
===

As described in [1], the Hyper-V driver supports NUMA placement policies,
but it does not support the CPU pinning policy [2].
So the host should be excluded by the NUMATopologyFilter if the end user tries
to build a VM with a CPU pinning policy.

[1] 
https://docs.openstack.org/nova/latest/admin/cpu-topologies.html#customizing-instance-numa-placement-policies
[2] 
https://docs.openstack.org/nova/latest/admin/cpu-topologies.html#customizing-instance-cpu-pinning-policies
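
A sketch of the kind of check being asked for, under the assumption that the
host's hypervisor type is visible to the filter; this is illustrative, not
the actual NUMATopologyFilter code:

```
def host_passes(host_state, spec_obj):
    wants_pinning = (
        spec_obj.flavor.extra_specs.get('hw:cpu_policy') == 'dedicated')
    if wants_pinning and host_state.hypervisor_type.lower() == 'hyperv':
        # Hyper-V can honour NUMA placement but not CPU pinning, so a
        # dedicated-CPU request cannot land on this host.
        return False
    return True
```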

Environment & Steps to reproduce
==

1. Install OpenStack with Hyper-V driver (with NUMATopologyFilter set in 
nova.conf)
2. Try to Build a VM with "cpu_policy=dedicated".

Expected & Actual result
===

Expected: No valid host error
Actual: 

Logs & Configs
==



** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1750701

Title:
  NUMATopologyFilter doesn't exclude Hyper-V when cpu pinning specified.

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  As described in [1], the Hyper-V driver supports NUMA placement policies,
  but it does not support the CPU pinning policy [2].
  So the host should be excluded by the NUMATopologyFilter if the end user
  tries to build a VM with a CPU pinning policy.

  [1] 
https://docs.openstack.org/nova/latest/admin/cpu-topologies.html#customizing-instance-numa-placement-policies
  [2] 
https://docs.openstack.org/nova/latest/admin/cpu-topologies.html#customizing-instance-cpu-pinning-policies

  Environment & Steps to reproduce
  ==

  1. Install OpenStack with Hyper-V driver (with NUMATopologyFilter set in 
nova.conf)
  2. Try to Build a VM with "cpu_policy=dedicated".

  Expected & Actual result
  ===

  Expected: No valid host error
  Actual: 

  Logs & Configs
  ==

  

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1750701/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1749466] Re: postgresql db migration broken in queens

2018-02-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/545211
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=28fb47092f3751de11cc8e61f06e451a2fa268f2
Submitter: Zuul
Branch:master

commit 28fb47092f3751de11cc8e61f06e451a2fa268f2
Author: Abhishek Kekane 
Date:   Tue Feb 20 16:04:23 2018 +

Migration support for postgresql

After a recent change in the manage command [1], deploying glance with
postgres using stack.sh fails as db_sync internally uses E-M-C
and it has support only for mysql.

Added flag online_migration which will be False in case of
db_sync so that the engine check will be eliminated, and True in case
of running expand, migrate, contract to perform a validation check on
the engine type.

[1] https://review.openstack.org/#/c/433934/

Closes-Bug: #1749466
Change-Id: Ifddeff139e212564118520f3150db198ab1c94f4
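
A rough sketch of the flag described in the commit message; the structure is
illustrative and the real glance code differs:

```
def check_engine(engine, online_migration):
    # online_migration is False for a plain `glance-manage db_sync` and
    # True for the expand/migrate/contract (E-M-C) commands.
    if online_migration and engine.name != 'mysql':
        raise RuntimeError(
            'Rolling upgrades (E-M-C) are only supported with MySQL')
```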


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1749466

Title:
  postgresql db migration broken in queens

Status in Glance:
  Fix Released

Bug description:
  Hopefully we can close this as invalid, but we need to track work on
  this somehow.

  The glance-manage tool has been changed to use EMC for the db-sync
  operation with Change-Id: I2653560d637a6696f936b49e87f16326fd601dfe

  That change broke devstack installs with postgres, which was fixed by
  Change-id: Ie6c66da8b26930d783526a4b92b77baf493825f9

  We need to verify that the migration of an existing postgres database
  works with the glance-manage tool.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1749466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404867] Re: Volume remains in-use status, if instance booted from volume is deleted in error state

2018-02-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/340614
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b3f39244a3eacd6fb141de61850cbd84fecdb544
Submitter: Zuul
Branch:master

commit b3f39244a3eacd6fb141de61850cbd84fecdb544
Author: ankitagrawal 
Date:   Wed Sep 23 03:58:19 2015 -0700

Clean up ports and volumes when deleting ERROR instance

Usually, when instance.host = None, it means the instance was never
scheduled. However, the exception handling routine in compute manager
[1] will set instance.host = None and set instance.vm_state = ERROR
if the instance fails to build on the compute host. If that happens, we
end up with an instance with host = None and vm_state = ERROR which may
have ports and volumes still allocated.

This adds some logic around deleting the instance when it may have
ports or volumes allocated.

  1. If the instance is not in ERROR or SHELVED_OFFLOADED state, we
 expect instance.host to be set to a compute host. So, if we find
 instance.host = None in states other than ERROR or
 SHELVED_OFFLOADED, we consider the instance to have failed
 scheduling and not require ports or volumes to be freed, and we
 simply destroy the instance database record and return. This is
 the "delete while booting" scenario.

  2. If the instance is in ERROR because of a failed build or is
 SHELVED_OFFLOADED, we expect instance.host to be None even though
 there could be ports or volumes allocated. In this case, run the
 _local_delete routine to clean up ports and volumes and delete the
 instance database record.

Co-Authored-By: Ankit Agrawal 
Co-Authored-By: Samuel Matzek 
Co-Authored-By: melanie witt 

Closes-Bug: 1404867
Closes-Bug: 1408527

[1] 
https://github.com/openstack/nova/blob/55ea961/nova/compute/manager.py#L1927-L1929

Change-Id: I4dc6c8bd3bb6c135f8a698af41f5d0e026c39117
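
The branching described in the commit message, condensed into an illustrative
sketch (hypothetical helper names, not the actual compute API code):

```
def delete_instance(context, instance):
    if instance.host:
        # Normal path: the compute host owns the teardown.
        compute_rpcapi.terminate_instance(context, instance)
    elif instance.vm_state in ('error', 'shelved_offloaded'):
        # Build failed or instance was offloaded: ports/volumes may still
        # be allocated, so clean them up locally before destroying the row.
        local_delete(context, instance)
    else:
        # Never scheduled ("delete while booting"): nothing was allocated.
        instance.destroy()
```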


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404867

Title:
  Volume remains in-use status, if instance booted from volume is
  deleted in error state

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  In Progress
Status in OpenStack Compute (nova) queens series:
  In Progress

Bug description:
  If an instance booted from a volume goes into the error state for some
  reason, the volume from which the instance was booted remains in the
  in-use state even after the instance is deleted.
  IMO, the volume should be detached so that it can be used to boot another
  instance.

  Steps to reproduce:

  1. Log in to Horizon, create a new volume.
  2. Create an Instance using newly created volume.
  3. Verify instance is in active state.
  $ source devstack/openrc demo demo
  $ nova list
  
+--+--+++-+--+
  | ID   | Name | Status | Task State | Power 
State | Networks |
  
+--+--+++-+--+
  | dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | ACTIVE | -  | Running 
| private=10.0.0.3 |
  
+--+--+++-+--+

  Note:
  Use the shelve-unshelve API to see the instance go into the error state.
  Unshelving a volume-backed instance does not work and sets the instance state
  to error (ref: https://bugs.launchpad.net/nova/+bug/1404801)

  4. Shelve the instance
  $ nova shelve 

  5. Verify the status is SHELVED_OFFLOADED.
  $ nova list
  
+--+--+---++-+--+
  | ID   | Name | Status| Task 
State | Power State | Networks |
  
+--+--+---++-+--+
  | dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | SHELVED_OFFLOADED | - 
 | Shutdown| private=10.0.0.3 |
  
+--+--+---++-+--+

  6. Unshelve the instance.
  $ nova unshelve 

  5. Verify the instance is in Error state.
  $ nova list
  
+--+--+---++-+--+
  | ID   | Name | Status| Task 
State | Power State | Networks |
  
+--+--+---++-+--+
  | dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | Error | 
uns

[Yahoo-eng-team] [Bug 1750728] [NEW] api-ref: Missing response codes in resource provider APIs

2018-02-20 Thread Takashi NATSUME
Public bug reported:

400 is missing in "Create resource provider" API

https://developer.openstack.org/api-ref/placement/#create-resource-provider
https://github.com/openstack/nova/blob/c9087bd6c5ed0ec0e7c8fcd8f48c1fcb9f3d6a4a/nova/api/openstack/placement/handlers/resource_provider.py#L101

400 is missing in "List resource providers" API

https://developer.openstack.org/api-ref/placement/#resource-providers
https://github.com/openstack/nova/blob/c9087bd6c5ed0ec0e7c8fcd8f48c1fcb9f3d6a4a/nova/api/openstack/placement/handlers/resource_provider.py#L214

** Affects: nova
 Importance: Undecided
 Assignee: Takashi NATSUME (natsume-takashi)
 Status: New


** Tags: api-ref placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1750728

Title:
  api-ref:  Missing response codes in resource provider APIs

Status in OpenStack Compute (nova):
  New

Bug description:
  400 is missing in "Create resource provider" API

  https://developer.openstack.org/api-ref/placement/#create-resource-provider
  
https://github.com/openstack/nova/blob/c9087bd6c5ed0ec0e7c8fcd8f48c1fcb9f3d6a4a/nova/api/openstack/placement/handlers/resource_provider.py#L101

  400 is missing in "List resource providers" API

  https://developer.openstack.org/api-ref/placement/#resource-providers
  
https://github.com/openstack/nova/blob/c9087bd6c5ed0ec0e7c8fcd8f48c1fcb9f3d6a4a/nova/api/openstack/placement/handlers/resource_provider.py#L214

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1750728/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750735] [NEW] [OVO] UT fails when setting new_facade to True

2018-02-20 Thread Lujin Luo
Public bug reported:

How to reproduce:
1) Set new_facade = True in any OVO object. I tried PortBinding(), Port() and 
Network().
2) Run python -m testtools.run neutron/tests/unit/objects/test_network.py
   or python -m testtools.run neutron/tests/unit/objects/test_port.py
3) Example of failures:
==
ERROR: 
neutron.tests.unit.objects.test_network.NetworkObjectIfaceTestCase.test_update_updates_from_db_object
--
Traceback (most recent call last):
  File "neutron/tests/base.py", line 132, in func
return f(self, *args, **kwargs)
  File "neutron/tests/base.py", line 132, in func
return f(self, *args, **kwargs)
  File 
"/home/stack/small_port_2/neutron/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 1305, in patched
return func(*args, **keywargs)
  File "neutron/tests/unit/objects/test_base.py", line 1167, in 
test_update_updates_from_db_object
obj.update()
  File "neutron/objects/base.py", line 319, in decorator
self.obj_context.session.refresh(self.db_obj)
  File 
"/home/stack/small_port_2/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py",
 line 1498, in refresh
self._expire_state(state, attribute_names)
  File 
"/home/stack/small_port_2/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py",
 line 1600, in _expire_state
self._validate_persistent(state)
  File 
"/home/stack/small_port_2/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py",
 line 2042, in _validate_persistent
state_str(state))
sqlalchemy.exc.InvalidRequestError: Instance '' is 
not persistent within this Session

I believe something merged after Feb. 9th broke them. As seen in [1], there
were no code changes since Feb. 9th, but it fails on a recheck on Feb. 20th.


[1] https://review.openstack.org/#/c/537320/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750735

Title:
  [OVO] UT fails when setting new_facade to True

Status in neutron:
  New

Bug description:
  How to reproduce:
  1) Set new_facade = True in any OVO object. I tried PortBinding(), Port() and 
Network().
  2) Run python -m testtools.run neutron/tests/unit/objects/test_network.py
 or python -m testtools.run neutron/tests/unit/objects/test_port.py
  3) Example of failures:
  ==
  ERROR: 
neutron.tests.unit.objects.test_network.NetworkObjectIfaceTestCase.test_update_updates_from_db_object
  --
  Traceback (most recent call last):
File "neutron/tests/base.py", line 132, in func
  return f(self, *args, **kwargs)
File "neutron/tests/base.py", line 132, in func
  return f(self, *args, **kwargs)
File 
"/home/stack/small_port_2/neutron/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 1305, in patched
  return func(*args, **keywargs)
File "neutron/tests/unit/objects/test_base.py", line 1167, in 
test_update_updates_from_db_object
  obj.update()
File "neutron/objects/base.py", line 319, in decorator
  self.obj_context.session.refresh(self.db_obj)
File 
"/home/stack/small_port_2/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py",
 line 1498, in refresh
  self._expire_state(state, attribute_names)
File 
"/home/stack/small_port_2/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py",
 line 1600, in _expire_state
  self._validate_persistent(state)
File 
"/home/stack/small_port_2/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py",
 line 2042, in _validate_persistent
  state_str(state))
  sqlalchemy.exc.InvalidRequestError: Instance '' is 
not persistent within this Session

  I believe something merged after Feb. 9th broke them. As seen in [1], there
  were no code changes since Feb. 9th, but it fails on a recheck on Feb. 20th.

  
  [1] https://review.openstack.org/#/c/537320/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1750735/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp