[Yahoo-eng-team] [Bug 1829120] [NEW] Keystone Install Guide is providing wrong dependency on apt install keystone

2019-05-14 Thread Natthasak Vechprasit
Public bug reported:

https://docs.openstack.org/keystone/stein/install/keystone-install-ubuntu.html

ERROR => apt install keystone apache2 libapache2-mod-wsgi

Console Output:
apt install keystone apache2 libapache2-mod-wsgi
Reading package lists... Done
Building dependency tree   
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 keystone : Depends: libapache2-mod-wsgi-py3 but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

SUCCESSFUL COMMAND => apt install keystone apache2 libapache2-mod-wsgi-py3

This bug tracker is for errors with the documentation; use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [X] This doc is inaccurate in this way: __
- [ ] This is a doc addition request.
- [X] I have a fix to the document that I can paste below, including example input and output.

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release:  on 2018-12-10 15:55:13
SHA: 1828d0612cf2c51427773077dc25bd8b659eb549
Source: https://opendev.org/openstack/keystone/src/doc/source/install/keystone-install-ubuntu.rst
URL: https://docs.openstack.org/keystone/stein/install/keystone-install-ubuntu.html

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1829120

Title:
  Keystone Install Guide is providing wrong dependency on apt install
  keystone

Status in OpenStack Identity (keystone):
  New

Bug description:
  https://docs.openstack.org/keystone/stein/install/keystone-install-ubuntu.html

  ERROR => apt install keystone apache2 libapache2-mod-wsgi

  Console Output:
  apt install keystone apache2 libapache2-mod-wsgi
  Reading package lists... Done
  Building dependency tree   
  Reading state information... Done
  Some packages could not be installed. This may mean that you have
  requested an impossible situation or if you are using the unstable
  distribution that some required packages have not yet been created
  or been moved out of Incoming.
  The following information may help to resolve the situation:

  The following packages have unmet dependencies:
   keystone : Depends: libapache2-mod-wsgi-py3 but it is not going to be installed
  E: Unable to correct problems, you have held broken packages.

  SUCCESSFUL COMMAND => apt install keystone apache2 libapache2-mod-wsgi-py3

  This bug tracker is for errors with the documentation; use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [X] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [X] I have a fix to the document that I can paste below, including example input and output.

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release:  on 2018-12-10 15:55:13
  SHA: 1828d0612cf2c51427773077dc25bd8b659eb549
  Source: https://opendev.org/openstack/keystone/src/doc/source/install/keystone-install-ubuntu.rst
  URL: https://docs.openstack.org/keystone/stein/install/keystone-install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1829120/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818696] Re: frequent ci failures trying to delete qos port

2019-05-14 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818696

Title:
  frequent ci failures trying to delete qos port

Status in neutron:
  Expired

Bug description:
  Lots of this error:
  RuntimeError: OVSDB Error: {"details":"cannot delete QoS row 03bc0e7a-bd4e-42a7-95e1-493fce7d6342 because of 1 remaining reference(s)","error":"referential integrity violation"}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818671] Re: Openstack usage list not showing all projects

2019-05-14 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818671

Title:
  Openstack usage list not showing all projects

Status in OpenStack Compute (nova):
  Expired

Bug description:
  In a customer environment running nova 2:17.0.5-0ubuntu1~cloud0

  when querying projects usage list most recent projects are not listed
  in the reply.

  Example:

  $ openstack  usage list --print-empty --start 2019-01-01 --end
  2019-02-01

  Not showing any information about project
  a897ea83f01c436e82e13a4306fa5ef0

  But querying for the usage of the specific project we can retrieve the
  results:

  openstack usage show --project a897ea83f01c436e82e13a4306fa5ef0 --start 2019-01-01 --end 2019-02-01
  Usage from 2019-01-01 to 2019-02-01 on project a897ea83f01c436e82e13a4306fa5ef0:
  +---------------+------------+
  | Field         | Value      |
  +---------------+------------+
  | CPU Hours     | 528.3      |
  | Disk GB-Hours | 10566.07   |
  | RAM MB-Hours  | 2163930.45 |
  | Servers       | 43         |
  +---------------+------------+

  As a workaround we are able to get projects_uuid like this:
  projects_uuid=$(openstack project list | grep -v ID | awk '{print $2}')

  And iterate over them and get individuals usage:

  for prog in $projects_uuid; do openstack project show $prog; openstack
  usage show --project $prog  --start 2019-01-01 --end 2019-02-01; done
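
  The `grep -v ID | awk '{print $2}'` pipeline above works, but it drops any table row that happens to contain the string "ID" anywhere in the line. A minimal Python sketch of a more robust way to pull the UUID column out of the `openstack project list` table (hypothetical helper, not part of any OpenStack client library):

  ```python
  # Sketch: extract project UUIDs from `openstack project list` table output
  # without the fragile `grep -v ID` step. Hypothetical helper; in practice
  # `openstack project list -f value -c ID` avoids parsing entirely.
  def project_ids(table: str) -> list[str]:
      ids = []
      for line in table.splitlines():
          cells = [c.strip() for c in line.split("|") if c.strip()]
          # skip border lines ("+----+----+") and the header row
          if len(cells) >= 2 and cells[0] != "ID":
              ids.append(cells[0])
      return ids

  sample = """\
  +----------------------------------+-------+
  | ID                               | Name  |
  +----------------------------------+-------+
  | a897ea83f01c436e82e13a4306fa5ef0 | admin |
  +----------------------------------+-------+"""
  print(project_ids(sample))  # ['a897ea83f01c436e82e13a4306fa5ef0']
  ```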

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818671/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1801303] Re: VM send DHCP request before openflow is created in br-int

2019-05-14 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1801303

Title:
  VM send DHCP request before openflow is created in br-int

Status in OpenStack Compute (nova):
  Expired

Bug description:
  I am using OpenStack Queens set up by OpenStack-Ansible.
  When I create a VM, it boots and sends its DHCP request just before the OpenFlow rules are created in br-int.
  This results in a delay of about two minutes before the VM becomes accessible.
  The test was carried out with a CirrOS image; as pasted below, the VM cannot get an IP during the first round of "Sending discover...".
  Is there a way to add a delay when booting a VM? Does Nova not check that the OpenFlow rules exist before starting the VM?

  info: initramfs: up at 0.64
  GROWROOT: CHANGED: partition=1 start=16065 old: size=64260 end=80325 new: size=2072385,end=2088450
  info: initramfs loading root from /dev/vda1
  info: /etc/init.d/rc.sysinit: up at 0.70
  info: container: none
  Starting logging: OK
  modprobe: module virtio_blk not found in modules.dep
  modprobe: module virtio_net not found in modules.dep
  WARN: /etc/rc3.d/S10-load-modules failed
  Initializing random number generator... done.
  Starting acpid: OK
  cirros-ds 'local' up at 0.76
  no results found for mode=local. up 0.78. searched: nocloud configdrive ec2
  Starting network...
  udhcpc (v1.20.1) started
  Sending discover...
  Sending discover...
  Sending select for 192.168.11.113...

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1801303/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818960] Re: IPv6 not working with iptables

2019-05-14 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818960

Title:
  IPv6 not working with iptables

Status in neutron:
  Expired

Bug description:
  Hi,

  Running rocky on Ubuntu 18.04 deployed by juju, using ML2, ovs,
  iptables. IPv6 appears to be broken because of missing MARK-related
  rules in the qrouter netns.

  The iptables and ip6tables rules generated by neutron are
  https://pastebin.ubuntu.com/p/S32TQcmTzX/

  For egress (traffic leaving an instance) to work, the following additional rule is needed:
  sudo ip6tables -t mangle -I neutron-l3-agent-POSTROUTING -o qg-45ba891c-4c -m connmark --mark 0x0/0x -j CONNMARK --save-mark --nfmask 0x --ctmask 0x

  The following patch should fix the problem :
  https://pastebin.ubuntu.com/p/RpbYBjCVnp/ (sorry, I don't have time
  right now to update the tests for a proper merge request)

  
  For ingress, the following is needed:
  sudo ip6tables -t mangle -A neutron-l3-agent-scope -i qg-45ba891c-4c -j MARK --set-xmark 0x400/0x
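
  A small sketch of the mark arithmetic behind those two rules. Note the digest truncated the mask values; the full-width mask 0xFFFFFFFF is assumed here, mirroring the IPv4 rules, and both helper functions are hypothetical, not iptables code:

  ```python
  # Sketch of MARK --set-xmark and CONNMARK --save-mark semantics
  # (per iptables-extensions: set-xmark clears masked bits then XORs in
  # the value; save-mark copies the packet mark onto the conntrack mark).
  # Mask values are assumed 0xFFFFFFFF; the original report truncated them.
  def set_xmark(mark: int, value: int, mask: int) -> int:
      """MARK --set-xmark value/mask."""
      return (mark & ~mask) ^ value

  def save_mark(ctmark: int, pktmark: int, nfmask: int, ctmask: int) -> int:
      """CONNMARK --save-mark --nfmask nfmask --ctmask ctmask."""
      return (ctmark & ~ctmask) ^ (pktmark & nfmask)

  # The ingress rule marks packets arriving on the gateway port:
  pkt = set_xmark(0x0, 0x400, 0xFFFFFFFF)
  # The egress rule saves the packet mark onto the connection:
  ct = save_mark(0x0, pkt, 0xFFFFFFFF, 0xFFFFFFFF)
  print(hex(pkt), hex(ct))  # 0x400 0x400
  ```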

  Haven't had the time to dig out in the code where exactly the bug is.

  
  Is IPv6 working for anyone with this setup? Are these commands the right fix? (I'm just mimicking what IPv4 does.)

  I've looked at unit tests for my patch above, and IPv6 testing is
  extremely limited.

  My IPv6 subnet got created with:
  $ openstack subnet create --network net_instances --ip-version 6 --ipv6-address-mode=slaac --ipv6-ra-mode=slaac --allocation-pool start=,end= --subnet-range ::/64 --gateway  subnet_instances_v6

  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1829102] [NEW] If there is a host in the aggregates, when the aggregates is deleted, an error was shown.

2019-05-14 Thread pengyuesheng
Public bug reported:

If an aggregate still contains a host, deleting the aggregate shows an
error.

** Affects: horizon
 Importance: Undecided
 Assignee: pengyuesheng (pengyuesheng)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => pengyuesheng (pengyuesheng)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1829102

Title:
  If there is a host in the aggregates, when the aggregates is deleted,
  an error was shown.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If an aggregate still contains a host, deleting the aggregate shows an
  error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1829102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1827628] Re: Cannot model affinity (and/or anti) with placement "limits" parameter

2019-05-14 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/658110
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=2f9e972ba3358fc5bc9bdc06faf47b21d509e20f
Submitter: Zuul
Branch:master

commit 2f9e972ba3358fc5bc9bdc06faf47b21d509e20f
Author: Surya Seetharaman 
Date:   Thu May 9 16:22:13 2019 +0200

Disable limit if affinity(anti)/same(different)host is requested

When max_placement_results is less than the total number of nodes in
a deployment it may not be possible to use the affinity, anti-affinity,
same host or different host filters as there is no guarantee
for placement to return the expected hosts under such situations. This
patch disables the max_placement_results parameter when nova queries
placement for ``GET /allocation_candidates`` if the
request_spec.scheduler_hints contains any of the group, same_host or
different_host keys.

Change-Id: Ia2d5f80b6db59a8f6da03344aeaa6aa599407672
Closes-Bug: #1827628
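
The guard described in the commit message can be sketched as follows (hypothetical, heavily simplified; not the actual nova scheduler code). If the request carries an affinity-related scheduler hint, the placement query must not be limited, otherwise the required host may be cut off by `max_placement_results`:

```python
# Sketch: disable the allocation-candidates limit when affinity-style
# scheduler hints are present. Names are illustrative, not nova's API.
AFFINITY_HINTS = {"group", "same_host", "different_host"}

def placement_limit(scheduler_hints: dict, max_placement_results: int):
    """Return the ``limit`` to send with GET /allocation_candidates,
    or None to disable limiting entirely."""
    if AFFINITY_HINTS & scheduler_hints.keys():
        return None
    return max_placement_results

print(placement_limit({"group": "uuid-1"}, 1000))  # None
print(placement_limit({}, 1000))                   # 1000
```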


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1827628

Title:
  Cannot model affinity (and/or anti) with placement "limits" parameter

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  Confirmed
Status in OpenStack Compute (nova) rocky series:
  Confirmed
Status in OpenStack Compute (nova) stein series:
  Confirmed

Bug description:
  It is currently not possible to use affinity/anti-affinity with the
  placement "limits" parameter since if you want your instance to land
  on a node where another instance lives you cannot rely on what
  placement GET/allocation_candidates would return and this could result
  in a no valid host. Current workaround is to unset the limits
  parameter.

  We already have the same-ish problem for disabled computes
  (https://bugs.launchpad.net/nova/+bug/1805984). We could just do a
  more generic solution that fits both these cases.

  The same problem is also applicable to
  
https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#samehostfilter
  and
  
https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#differenthostfilter

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1827628/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1828966] Re: tox doesn't catch invalid interpreter

2019-05-14 Thread Matt Riedemann
You need to upgrade the version of tox you're using, it was a regression
in some version of tox - this isn't a nova issue.

https://tox.readthedocs.io/en/latest/changelog.html

I can't remember off hand which version but I'm using 3.8.0 and don't
have this problem.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1828966

Title:
  tox doesn't catch invalid interpreter

Status in Cinder:
  New
Status in Glance:
  New
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When you pass invalid interpreter to tox command it installs
  dependencies, requirements in virtual environment and exits without
  running any tests. Actually it should throw error, Interpreter Not
  Found.

  tox -e py23
  py23 create: /opt/stack/glance/.tox/py23
  py23 installdeps: -r/opt/stack/glance/test-requirements.txt
  py23 develop-inst: /opt/stack/glance
  py23 installed: 
alabaster==0.7.12,alembic==1.0.10,amqp==2.4.2,appdirs==1.4.3,asn1crypto==0.24.0,automaton==1.16.0,Babel==2.6.0,cachetools==3.1.0,castellan==1.2.2,certifi==2019.3.9,cffi==1.12.3,chardet==3.0.4,cliff==2.14.1,cmd2==0.8.9,coverage==4.5.3,cryptography==2.6.1,cursive==0.2.2,ddt==1.2.1,debtcollector==1.21.0,decorator==4.4.0,defusedxml==0.6.0,dnspython==1.15.0,doc8==0.8.0,docutils==0.14,dogpile.cache==0.7.1,eventlet==0.24.1,extras==1.0.0,fasteners==0.14.1,fixtures==3.0.0,flake8==2.5.5,future==0.17.1,futurist==1.8.1,-e
 
git+https://git.openstack.org/openstack/glance.git@18e71c8e759aa4031da6258bff519ae206145fe6#egg=glance,glance-store==0.28.0,greenlet==0.4.15,hacking==0.12.0,httplib2==0.12.3,idna==2.8,imagesize==1.1.0,iso8601==0.1.12,Jinja2==2.10.1,jmespath==0.9.4,jsonpatch==1.23,jsonpointer==2.0,jsonschema==2.6.0,keystoneauth1==3.14.0,keystonemiddleware==6.0.0,kombu==4.5.0,linecache2==1.0.0,Mako==1.0.9,MarkupSafe==1.1.1,mccabe==0.2.1,mock==3.0.4,monotonic==1.5,mox3==0.27.0,msgpack==0.6.1,munch==2.3.2,netaddr==0.7.19,netifaces==0.10.9,networkx==2.3,openstacksdk==0.27.0,os-client-config==1.32.0,os-service-types==1.7.0,os-win==4.2.0,oslo.cache==1.34.0,oslo.concurrency==3.29.1,oslo.config==6.9.0,oslo.context==2.22.1,oslo.db==4.46.0,oslo.i18n==3.23.1,oslo.log==3.43.0,oslo.messaging==9.6.0,oslo.middleware==3.38.0,oslo.policy==2.2.0,oslo.serialization==2.29.0,oslo.service==1.38.0,oslo.upgradecheck==0.2.1,oslo.utils==3.41.0,oslotest==3.7.1,osprofiler==2.7.0,packaging==19.0,Paste==3.0.8,PasteDeploy==2.0.1,pbr==5.2.0,pep8==1.5.7,prettytable==0.7.2,psutil==5.6.2,psycopg2==2.8.2,pycadf==2.9.0,pycparser==2.19,pydot==1.4.1,pyflakes==0.8.1,Pygments==2.4.0,pyinotify==0.9.6,PyMySQL==0.9.3,pyOpenSSL==19.0.0,pyparsing==2.4.0,pyperclip==1.7.0,pysendfile==2.0.1,python-barbicanclient==4.8.1,python-dateutil==2.8.0,python-editor==1.0.4,python-keystoneclient==3.19.0,python-mimeparse==1.6.0,python-subunit==1.3.0,python-swiftclient==3.7.0,pytz==2019.1,PyYAML==5.1,repoze.lru==0.7,requests==2.21.0,requestsexceptions==1.4.0,restructuredtext-lint==1.3.0,retrying==1.3.3,rfc3986==1.3.1,Routes==2.4.1,simplegeneric==0.8.1,six==1.12.0,snowballstemmer==1.2.1,Sphinx==2.0.1,sphinxcontrib-applehelp==1.0.1,sphinxcontrib-devhelp==1.0.1,sphinxcontrib-htmlhelp==1.0.2,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-qthelp==1.0.2,sphinxcontrib-serializinghtml==1.1.3,SQLAlchemy==1.2.18,sqlalchemy-migrate==0.12.0,sqlparse==0.3.0,statsd==3.3.0,stestr==2.3.1,stevedore==1.30.1,taskflow==3.4.0,Tempita==0.5.2,tenacity==5.0.4,testrepository
==0.0.20,testresources==2.0.1,testscenarios==0.5.0,testtools==2.3.0,traceback2==1.4.0,unittest2==1.1.0,urllib3==1.24.3,vine==1.3.0,voluptuous==0.11.5,wcwidth==0.1.7,WebOb==1.8.5,wrapt==1.11.1,WSME==0.9.3,xattr==0.9.6,yappi==1.0
  py23 run-test-pre: PYTHONHASHSEED='1359514857'
  py23 runtests: commands[0] | find . -type f -name '*.pyc' -delete
  ___________________ summary ____________________
  py23: commands succeeded
  congratulations :)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1828966/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1829062] Re: nova placement api non-responsive due to eventlet error

2019-05-14 Thread Ghada Khalil
** Also affects: starlingx
   Importance: Undecided
   Status: New

** Tags added: stx.distro.openstack

** Changed in: starlingx
   Importance: Undecided => Critical

** Tags added: stx.2.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1829062

Title:
  nova placement api non-responsive due to eventlet error

Status in OpenStack Compute (nova):
  New
Status in StarlingX:
  New

Bug description:
  In starlingx setup, we're running a nova docker image based on nova 
stable/stein as of May 6.
  We're seeing nova-compute processes stalling and not creating resource 
providers with placement.
  openstack hypervisor list
  +----+---------------------+-----------------+-----------------+-------+
  | ID | Hypervisor Hostname | Hypervisor Type | Host IP         | State |
  +----+---------------------+-----------------+-----------------+-------+
  | 5  | worker-1            | QEMU            | 192.168.206.247 | down  |
  | 8  | worker-2            | QEMU            | 192.168.206.211 | down  |
  +----+---------------------+-----------------+-----------------+-------+

  Observe this error in nova-placement-api logs related to eventlet at same 
time:
  2019-05-14 00:44:03.636229 Traceback (most recent call last):
  2019-05-14 00:44:03.636276 File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 460, in fire_timers
  2019-05-14 00:44:03.636536 timer()
  2019-05-14 00:44:03.636560 File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 59, in __call__
  2019-05-14 00:44:03.636647 cb(*args, **kw)
  2019-05-14 00:44:03.636661 File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/semaphore.py", line 147, in _do_acquire
  2019-05-14 00:44:03.636774 waiter.switch()
  2019-05-14 00:44:03.636792 error: cannot switch to a different thread

  This is a new behaviour for us in stable/stein and suspect this is due to 
merge of eventlet related change on May 4:
  
https://github.com/openstack/nova/commit/6755034e109079fb5e8bbafcd611a919f0884d14

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1829062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1825882] Re: Virsh disk attach errors silently ignored

2019-05-14 Thread Jorge Niedbalski
** Also affects: nova (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Cosmic)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1825882

Title:
  Virsh disk attach errors silently ignored

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  Fix Committed
Status in OpenStack Compute (nova) rocky series:
  Fix Committed
Status in OpenStack Compute (nova) stein series:
  Fix Committed
Status in nova package in Ubuntu:
  New
Status in nova source package in Bionic:
  New
Status in nova source package in Cosmic:
  New
Status in nova source package in Disco:
  New

Bug description:
  Description
  ===========
  The following commit (1) is causing volume attachments that fail due to libvirt device attach errors to be silently ignored, with Nova reporting the attachment as successful.

  It seems that the original intention of the commit was to log a
  condition and re-raise the exception, but if the exception is of type
  libvirt.libvirtError and does not contain the searched pattern, the
  exception is ignored. If you unindent the raise statement, errors are
  reported again.
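
  The control flow described above can be sketched like this (hypothetical names, not the actual nova/libvirt code). In the buggy version the bare `raise` sits inside the `if`, so a libvirt error that does not match the searched pattern is swallowed; unindenting it restores the always-re-raise behaviour:

  ```python
  # Sketch of the swallowed-exception bug: raise inside vs outside the if.
  class LibvirtError(Exception):
      pass

  def attach_buggy(do_attach):
      try:
          do_attach()
      except LibvirtError as exc:
          if "Requested operation" in str(exc):
              # log the specific condition, then re-raise
              raise
          # any other LibvirtError falls through and is silently ignored

  def attach_fixed(do_attach):
      try:
          do_attach()
      except LibvirtError as exc:
          if "Requested operation" in str(exc):
              pass  # log the specific condition
          raise  # unindented: every libvirt error propagates

  def failing():
      raise LibvirtError("Target vdb already exists")

  attach_buggy(failing)         # returns silently: attachment "succeeds"
  try:
      attach_fixed(failing)
  except LibvirtError as exc:
      print(exc)                # Target vdb already exists
  ```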

  In our case, ceph/apparmor configuration problems on compute
  nodes prevented virsh from attaching the device; volumes appeared as
  successfully attached but the corresponding block device was missing in
  the guest VMs. Other libvirt attach error conditions are also ignored,
  such as already-occupied device names (i.e. 'Target vdb already
  exists', device is busy, etc.)

  (1)
  
https://github.com/openstack/nova/commit/78891c2305bff6e16706339a9c5eca99a84e409c

  Steps to reproduce
  ==================
  This is somewhat hacky, but is a quick way to provoke a virsh attach error:
  - virsh detach-disk  vdb
  - update nova & cinder DB as if volume is detached
  - re-attach volume
  - volume is marked as attached, but VM block device is missing

  Expected result
  ===============
  - Error 'libvirtError: Requested operation is not valid: target vdb already exists' should be raised, and volume not attached

  Actual result
  =============
  - Attach successful but virsh block device not created

  Environment
  ===========
  - Openstack version Queens

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1825882/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1825295] Re: The openflow won't be deleted when we reboot and migrate the instance

2019-05-14 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/653668
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=82782d37639fab97e445e4bbb4daeb85dc829fcd
Submitter: Zuul
Branch:master

commit 82782d37639fab97e445e4bbb4daeb85dc829fcd
Author: Yang Li 
Date:   Thu Apr 18 14:45:31 2019 +0800

Make sure the port still in port map when prepare_port_filter

The current code will remove the port from sg_port_map, but then it
won't be added into the map, when we resize/migrate this instance,
the related openflow won't be deleted, this will cause vm connectivity
problem.

Closes-Bug: #1825295
Change-Id: I94da3c1960d43893c7a367a81279d429e469
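
The bookkeeping bug described in the commit message can be sketched as follows (hypothetical, heavily simplified; not the actual neutron firewall driver). `prepare_port_filter` must put the port back into the map after dropping the stale entry, otherwise a later removal finds nothing to clean up and the stale flows survive a resize/migration:

```python
# Sketch: port must be re-added to sg_port_map after cleanup, or its
# flows can never be deleted later. Names are illustrative.
class FirewallSketch:
    def __init__(self):
        self.sg_port_map = {}   # port_id -> port info
        self.flows = set()      # flows currently installed per port

    def prepare_port_filter(self, port_id, port_info, buggy=False):
        self.sg_port_map.pop(port_id, None)  # drop any stale entry
        self.flows.add(port_id)              # install flows
        if not buggy:
            self.sg_port_map[port_id] = port_info  # the missing re-add

    def remove_port_filter(self, port_id):
        if port_id in self.sg_port_map:      # unknown port: no-op
            del self.sg_port_map[port_id]
            self.flows.discard(port_id)

fw = FirewallSketch()
fw.prepare_port_filter("port-1", {}, buggy=True)
fw.remove_port_filter("port-1")
print(fw.flows)  # {'port-1'}  <- stale flow left behind

fw = FirewallSketch()
fw.prepare_port_filter("port-1", {})
fw.remove_port_filter("port-1")
print(fw.flows)  # set()
```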


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1825295

Title:
  The openflow won't be deleted when we reboot and migrate the instance

Status in neutron:
  Fix Released

Bug description:
  Sometimes user will do some steps for instance management:
  1. Restart hostA neutron-openvswitch-agent
  2. Reboot the instance which in hostA
  3. Migrate the instance from hostA to hostB or resize the instance(this also 
cause migration)

  Then we will find the instance's openflows on the hostA still exist,
  this will cause instance connectivity problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1825295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1826523] Re: libvirtError exceptions during volume attach leave volume connected to host

2019-05-14 Thread Jorge Niedbalski
** Also affects: nova (Ubuntu Cosmic)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Bionic)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1826523

Title:
  libvirtError exceptions during volume attach leave volume connected to
  host

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  Fix Committed
Status in OpenStack Compute (nova) rocky series:
  Fix Committed
Status in OpenStack Compute (nova) stein series:
  Fix Committed
Status in nova package in Ubuntu:
  New
Status in nova source package in Bionic:
  New
Status in nova source package in Cosmic:
  New
Status in nova source package in Disco:
  New

Bug description:
  Description
  ===========

  In addition to bug #1825882 where libvirtError exceptions are not
  raised correctly when attaching volumes to domains the underlying
  volumes are not disconnected from the host.

  Steps to reproduce
  ==================

  - virsh detach-disk  vdb
  - update nova & cinder DB as if volume is detached
  - re-attach volume

  Expected result
  ===============
  Volume attach fails and the volume is disconnected from the host.

  Actual result
  =============
  Volume attach fails but the volume remains connected to the host.

  Environment
  ===========
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 master to stable/queens

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 Libvirt + QEMU/KVM

  2. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 N/A

  3. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1826523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1829062] [NEW] nova placement api non-responsive due to eventlet error

2019-05-14 Thread Gerry Kopec
Public bug reported:

In starlingx setup, we're running a nova docker image based on nova 
stable/stein as of May 6.
We're seeing nova-compute processes stalling and not creating resource 
providers with placement.
openstack hypervisor list
+----+---------------------+-----------------+-----------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP         | State |
+----+---------------------+-----------------+-----------------+-------+
| 5  | worker-1            | QEMU            | 192.168.206.247 | down  |
| 8  | worker-2            | QEMU            | 192.168.206.211 | down  |
+----+---------------------+-----------------+-----------------+-------+

Observe this error in nova-placement-api logs related to eventlet at same time:
2019-05-14 00:44:03.636229 Traceback (most recent call last):
2019-05-14 00:44:03.636276 File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 460, in fire_timers
2019-05-14 00:44:03.636536 timer()
2019-05-14 00:44:03.636560 File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 59, in __call__
2019-05-14 00:44:03.636647 cb(*args, **kw)
2019-05-14 00:44:03.636661 File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/semaphore.py", line 147, in _do_acquire
2019-05-14 00:44:03.636774 waiter.switch()
2019-05-14 00:44:03.636792 error: cannot switch to a different thread

This is a new behaviour for us in stable/stein and suspect this is due to merge 
of eventlet related change on May 4:
https://github.com/openstack/nova/commit/6755034e109079fb5e8bbafcd611a919f0884d14

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1829062

Title:
  nova placement api non-responsive due to eventlet error

Status in OpenStack Compute (nova):
  New

Bug description:
  In starlingx setup, we're running a nova docker image based on nova 
stable/stein as of May 6.
  We're seeing nova-compute processes stalling and not creating resource 
providers with placement.
  openstack hypervisor list
  +----+---------------------+-----------------+-----------------+-------+
  | ID | Hypervisor Hostname | Hypervisor Type | Host IP         | State |
  +----+---------------------+-----------------+-----------------+-------+
  | 5  | worker-1            | QEMU            | 192.168.206.247 | down  |
  | 8  | worker-2            | QEMU            | 192.168.206.211 | down  |
  +----+---------------------+-----------------+-----------------+-------+

  Observe this error in nova-placement-api logs related to eventlet at same 
time:
  2019-05-14 00:44:03.636229 Traceback (most recent call last):
  2019-05-14 00:44:03.636276 File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 460, in fire_timers
  2019-05-14 00:44:03.636536 timer()
  2019-05-14 00:44:03.636560 File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 59, in __call__
  2019-05-14 00:44:03.636647 cb(*args, **kw)
  2019-05-14 00:44:03.636661 File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/semaphore.py", line 147, in _do_acquire
  2019-05-14 00:44:03.636774 waiter.switch()
  2019-05-14 00:44:03.636792 error: cannot switch to a different thread

  This is new behaviour for us in stable/stein, and we suspect it is due to 
the merge of an eventlet-related change on May 4:
  
https://github.com/openstack/nova/commit/6755034e109079fb5e8bbafcd611a919f0884d14

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1829062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1826523] Re: libvirtError exceptions during volume attach leave volume connected to host

2019-05-14 Thread Jorge Niedbalski
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1826523

Title:
  libvirtError exceptions during volume attach leave volume connected to
  host

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  Fix Committed
Status in OpenStack Compute (nova) rocky series:
  Fix Committed
Status in OpenStack Compute (nova) stein series:
  Fix Committed
Status in nova package in Ubuntu:
  New

Bug description:
  Description
  ===

  In addition to bug #1825882, where libvirtError exceptions are not
  raised correctly when attaching volumes to domains, the underlying
  volumes are not disconnected from the host.

  Steps to reproduce
  ==

  - virsh detach-disk  vdb
  - update nova & cinder DB as if volume is detached
  - re-attach volume

  Expected result
  ===
  Volume attach fails and the volume is disconnected from the host.

  Actual result
  =
  volume attach fails but remains connected to the host.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 master to stable/queens

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 Libvirt + QEMU/KVM

  3. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 N/A

  4. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1826523/+subscriptions



[Yahoo-eng-team] [Bug 1825882] Re: Virsh disk attach errors silently ignored

2019-05-14 Thread Jorge Niedbalski
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1825882

Title:
  Virsh disk attach errors silently ignored

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  Fix Committed
Status in OpenStack Compute (nova) rocky series:
  Fix Committed
Status in OpenStack Compute (nova) stein series:
  Fix Committed
Status in nova package in Ubuntu:
  New

Bug description:
  Description
  ===
  The following commit (1) causes volume attachments that fail due to libvirt 
device attach errors to be silently ignored, with Nova reporting the 
attachment as successful.

  It seems that the original intention of the commit was to log a
  condition and re-raise the exception, but if the exception is of type
  libvirt.libvirtError and does not contain the searched pattern, the
  exception is ignored. If you unindent the raise statement, errors are
  reported again.

  In our case we had ceph/apparmor configuration problems in compute
  nodes which prevented virsh from attaching the device; volumes appeared
  as successfully attached but the corresponding block device was missing
  in the guest VMs. Other libvirt attach error conditions are also
  ignored, for example when a device name is already occupied ('Target
  vdb already exists'), the device is busy, etc.

  (1)
  
https://github.com/openstack/nova/commit/78891c2305bff6e16706339a9c5eca99a84e409c
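The indentation issue described above can be sketched in isolation. Everything below is hypothetical stand-in code, not nova's or libvirt's actual API: when the re-raise sits inside the pattern check, libvirtError exceptions whose message does not match the searched pattern are swallowed; unindenting the raise reports them again.

```python
# Hypothetical sketch of the reported control flow; not nova's real code.
class FakeLibvirtError(Exception):
    """Stand-in for libvirt.libvirtError."""


def attach_buggy(do_attach):
    try:
        do_attach()
    except FakeLibvirtError as exc:
        if 'searched pattern' in str(exc):
            raise  # re-raise is indented: only matching errors propagate
        # non-matching libvirt errors fall through and are silently ignored


def attach_fixed(do_attach):
    try:
        do_attach()
    except FakeLibvirtError as exc:
        if 'searched pattern' in str(exc):
            pass  # log the special condition here
        raise  # unindented: every libvirt error propagates


def failing_attach():
    raise FakeLibvirtError('Target vdb already exists')
```

With `failing_attach`, `attach_buggy` returns normally (the attachment looks successful) while `attach_fixed` re-raises the error, matching the behaviour change the report describes.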

  Steps to reproduce
  ==
  This is somewhat hacky, but is a quick way to provoke a virsh attach error:
  - virsh detach-disk  vdb
  - update nova & cinder DB as if volume is detached
  - re-attach volume
  - volume is marked as attached, but VM block device is missing

  Expected result
  ===
  - Error 'libvirtError: Requested operation is not valid: target vdb already 
exists' should be raised, and volume not attached

  Actual result
  =
  - Attach successful but virsh block device not created

  Environment
  ===
  - Openstack version Queens

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1825882/+subscriptions



[Yahoo-eng-team] [Bug 1829042] [NEW] Some API requests (GET networks) fail with "Accept: application/json; charset=utf-8" header and WebOb>=1.8.0

2019-05-14 Thread Bernard Cafarelli
Public bug reported:

Original downstream bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1706222

On versions newer than Rocky, we have WebOb 1.8 in requirements. This causes 
the following API calls to fail with a 500 error:
GET http://localhost:9696/v2.0/ports
GET http://localhost:9696/v2.0/subnets
GET http://localhost:9696/v2.0/networks

when setting an Accept header with charset like "Accept:
application/json; charset=utf-8"

These calls do not go through neutron.api.v2 and wsgi.request as other
resources do; is that something that should be fixed too?

To reproduce (on master too):
$ curl -s -H "Accept: application/json; charset=utf-8" -H "X-Auth-Token: 
$OS_TOKEN" "http://localhost:9696/v2.0/ports"; | python -mjson.tool
{
"NeutronError": {
"detail": "",
"message": "The server could not comply with the request since it is 
either malformed or otherwise incorrect.",
"type": "HTTPNotAcceptable"
}
}

mai 14 18:16:19 devstack neutron-server[1519]: DEBUG neutron.wsgi [-] (1533) 
accepted ('127.0.0.1', 47790) {{(pid=1533) server 
/usr/lib/python2.7/site-packages/eventlet/wsgi.py:956}}
mai 14 18:16:19 devstack neutron-server[1519]: ERROR pecan.core [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] content type None
mai 14 18:16:19 devstack neutron-server[1519]: ERROR pecan.core [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] Controller 'index' defined 
does not support content_type 'None'. Supported type(s): ['application/json']
mai 14 18:16:19 devstack neutron-server[1519]: INFO 
neutron.pecan_wsgi.hooks.translation [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] GET failed (client error): 
The server could not comply with the request since it is either malformed or 
otherwise incorrect.
mai 14 18:16:19 devstack neutron-server[1519]: INFO neutron.wsgi [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] 127.0.0.1 "GET /v2.0/ports 
HTTP/1.1" status: 406  len: 360 time: 0.2243972

Relevant WebOb warning:
https://github.com/Pylons/webob/blob/master/docs/whatsnew-1.8.txt#L24
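Per the WebOb note linked above, an Accept value that carries a media-type parameter such as "; charset=utf-8" stops matching a bare offered type. A stdlib-only sketch of that matching idea follows; the helper names are hypothetical, not WebOb's or neutron's real code:

```python
# Stdlib-only sketch of the Accept-matching problem; not WebOb's real API.
def bare_media_type(accept_value):
    """Drop media-type parameters such as '; charset=utf-8'."""
    return accept_value.split(';', 1)[0].strip().lower()


def acceptable(accept_header, offered):
    """Naive check: does any accepted type match the offered one?"""
    accepted = {bare_media_type(part) for part in accept_header.split(',')}
    return offered in accepted or '*/*' in accepted
```

Matching the full header string "application/json; charset=utf-8" against the literal "application/json" fails; stripping the parameters first restores the match.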

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1829042

Title:
  Some API requests (GET networks) fail with "Accept: application/json;
  charset=utf-8" header and WebOb>=1.8.0

Status in neutron:
  New

Bug description:
  Original downstream bug:
  https://bugzilla.redhat.com/show_bug.cgi?id=1706222

  On versions newer than Rocky, we have WebOb 1.8 in requirements. This 
causes the following API calls to fail with a 500 error:
  GET http://localhost:9696/v2.0/ports
  GET http://localhost:9696/v2.0/subnets
  GET http://localhost:9696/v2.0/networks

  when setting an Accept header with charset like "Accept:
  application/json; charset=utf-8"

  These calls do not go through neutron.api.v2 and wsgi.request as other
  resources do; is that something that should be fixed too?

  To reproduce (on master too):
  $ curl -s -H "Accept: application/json; charset=utf-8" -H "X-Auth-Token: 
$OS_TOKEN" "http://localhost:9696/v2.0/ports"; | python -mjson.tool
  {
  "NeutronError": {
  "detail": "",
  "message": "The server could not comply with the request since it is 
either malformed or otherwise incorrect.",
  "type": "HTTPNotAcceptable"
  }
  }

  mai 14 18:16:19 devstack neutron-server[1519]: DEBUG neutron.wsgi [-] (1533) 
accepted ('127.0.0.1', 47790) {{(pid=1533) server 
/usr/lib/python2.7/site-packages/eventlet/wsgi.py:956}}
  mai 14 18:16:19 devstack neutron-server[1519]: ERROR pecan.core [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] content type None
  mai 14 18:16:19 devstack neutron-server[1519]: ERROR pecan.core [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] Controller 'index' defined 
does not support content_type 'None'. Supported type(s): ['application/json']
  mai 14 18:16:19 devstack neutron-server[1519]: INFO 
neutron.pecan_wsgi.hooks.translation [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] GET failed (client error): 
The server could not comply with the request since it is either malformed or 
otherwise incorrect.
  mai 14 18:16:19 devstack neutron-server[1519]: INFO neutron.wsgi [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] 127.0.0.1 "GET /v2.0/ports 
HTTP/1.1" status: 406  len: 360 time: 0.2243972

  Relevant WebOb warning:
  https://github.com/Pylons/webob/blob/master/docs/whatsnew-1.8.txt#L24

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1829042/+subscriptions



[Yahoo-eng-team] [Bug 1829032] [NEW] placement user

2019-05-14 Thread Ricardo Alexandre Silveira
Public bug reported:

Creation of the Placement user, along with its permissions and
endpoints, is not covered in this Stein documentation. This leads to
errors. Please add it to the documentation.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1829032

Title:
  placement user

Status in OpenStack Compute (nova):
  New

Bug description:
  Creation of the Placement user, along with its permissions and
  endpoints, is not covered in this Stein documentation. This leads to
  errors. Please add it to the documentation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1829032/+subscriptions



[Yahoo-eng-team] [Bug 1821925] Re: Filter out failing tempest volume test on stable Ocata branch

2019-05-14 Thread Bernard Cafarelli
** Summary changed:

- Limit test coverage for Extended Maintenance stable branches
+ Filter out failing tempest volume test on stable Ocata branch

** Changed in: neutron
   Status: Incomplete => Fix Released

** Changed in: neutron
 Assignee: (unassigned) => Bernard Cafarelli (bcafarel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1821925

Title:
  Filter out failing tempest volume test on stable Ocata branch

Status in neutron:
  Fix Released

Bug description:
  Per [0] "There is no statement about the level of testing and upgrades
  from Extended Maintenance are not supported within the Community."

  In Ocata (currently in EM) and Pike (soon to be) branches, we see Zuul
  check failures from time to time on unstable tests, that require a few
  rechecks before the backport gets in.

  For some issues it is better to fix the test/setup itself when it is
  easy (see [1] and [2] for recent examples), but for some failing tests
  (testing exotic cases or not directly related to networking), we
  should start filtering them out.

  An initial example is 
tempest.api.volume.test_volumes_extend.VolumesExtendTest.test_volume_extend_when_volume_has_snapshot
 which fails regularly on ocata and often on pike:
  
http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22test_volume_extend_when_volume_has_snapshot%5C%22%20AND%20project:%5C%22openstack%2Fneutron%5C%22%20AND%20build_status:%5C%22FAILURE%5C%22

  We can use this bug to track similar additions later

  [0] 
https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance
  [1] https://bugs.launchpad.net/neutron/+bug/1821815 fixes cover jobs
  [2] https://review.openstack.org/#/c/648046/ fixes a functional test failure 
(simple backport)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1821925/+subscriptions



[Yahoo-eng-team] [Bug 1829000] [NEW] live migration (block-migrate) may failed if instance image is deleted in glance

2019-05-14 Thread Alexandre arents
Public bug reported:

Description
===
When we run a live block migration on an instance whose glance image has been 
deleted, it may fail with the following logs:

-- nova-compute-log: --
2019-05-10 11:06:27.417 248758 ERROR nova.virt.libvirt.driver 
[req-b28b9aca-9135-4258-93a6-a802e6192c60 f7929cd1d8994661b88aff12977c8b9e 
54f4d231201b4944a5fa4587a09bda28 - - -] [instance: 
84601bd4-a6ee-4e00-a5bc-f7c80def7ec5] Migration operation has aborted
2019-05-10 11:06:27.566 248758 ERROR nova.virt.libvirt.driver 
[req-b28b9aca-9135-4258-93a6-a802e6192c60 f7929cd1d8994661b88aff12977c8b9e 
54f4d231201b4944a5fa4587a09bda28 - - -] [instance: 
84601bd4-a6ee-4e00-a5bc-f7c80def7ec5] Live Migration failure: internal error: 
info migration reply was missing return status

-- on target host /var/log/libvirt/qemu/instance-xxx.log: --
/build/qemu-nffE1h/qemu-2.x/nbd.c:nbd_trip():L1310: From: 2416967680, Len: 
65536, Size: 2361393152, Offset: 0
/build/qemu-nffE1h/qemu-2.x/nbd.c:nbd_trip():L1311: requested operation past 
EOF--bad client?
/build/qemu-nffE1h/qemu-2.x/nbd.c:nbd_trip():L1310: From: 3624927232, Len: 
589824, Size: 2361393152, Offset: 0

It seems that the pre_live_migration task does not set up the target instance 
disk correctly:
- because the glance image no longer exists, it falls back to the remote host 
copy method.
- in this context, image.cache() is called without the instance disk size 
parameter.
- as a consequence, the instance disk is not resized to the correct size and 
remains the size of its backing file; the disk is therefore too small, and 
the libvirt live migration fails.
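The size fallout can be sketched as follows; the function name and signature are hypothetical, not nova's actual image.cache() implementation:

```python
# Sketch of the reported size logic, with hypothetical names; not nova's
# actual image.cache() implementation.
def cached_disk_size(backing_file_size_gb, size_gb=None):
    """Virtual size of the target disk after the cache step."""
    # No size argument -> no resize after fetching the backing file,
    # which is the deleted-image fallback path reported here.
    if size_gb is None:
        return backing_file_size_gb
    return max(backing_file_size_gb, size_gb)
```

In the fallback path, a 2G backing file stays 2G even when the flavor expects 40G, so NBD writes past EOF fail; passing the instance disk size resizes the disk as expected.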


Steps to reproduce
==
* Spawn a qcow2 instance from a glance image much smaller than the flavor's 
disk size.
* Generate some user data in the instance.
* Delete the glance image.
* Run a live block migration.

Environment
===
Issue observed in Newton, still present in master.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1829000

Title:
  live migration (block-migrate) may failed if instance image is deleted
  in glance

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  When we run a live block migration on an instance whose glance image has 
been deleted, it may fail with the following logs:

  -- nova-compute-log: --
  2019-05-10 11:06:27.417 248758 ERROR nova.virt.libvirt.driver 
[req-b28b9aca-9135-4258-93a6-a802e6192c60 f7929cd1d8994661b88aff12977c8b9e 
54f4d231201b4944a5fa4587a09bda28 - - -] [instance: 
84601bd4-a6ee-4e00-a5bc-f7c80def7ec5] Migration operation has aborted
  2019-05-10 11:06:27.566 248758 ERROR nova.virt.libvirt.driver 
[req-b28b9aca-9135-4258-93a6-a802e6192c60 f7929cd1d8994661b88aff12977c8b9e 
54f4d231201b4944a5fa4587a09bda28 - - -] [instance: 
84601bd4-a6ee-4e00-a5bc-f7c80def7ec5] Live Migration failure: internal error: 
info migration reply was missing return status

  -- on target host /var/log/libvirt/qemu/instance-xxx.log: --
  /build/qemu-nffE1h/qemu-2.x/nbd.c:nbd_trip():L1310: From: 2416967680, Len: 
65536, Size: 2361393152, Offset: 0
  /build/qemu-nffE1h/qemu-2.x/nbd.c:nbd_trip():L1311: requested operation past 
EOF--bad client?
  /build/qemu-nffE1h/qemu-2.x/nbd.c:nbd_trip():L1310: From: 3624927232, Len: 
589824, Size: 2361393152, Offset: 0

  It seems that the pre_live_migration task does not set up the target 
instance disk correctly:
  - because the glance image no longer exists, it falls back to the remote 
host copy method.
  - in this context, image.cache() is called without the instance disk size 
parameter.
  - as a consequence, the instance disk is not resized to the correct size 
and remains the size of its backing file; the disk is therefore too small, 
and the libvirt live migration fails.

  
  Steps to reproduce
  ==
  * Spawn a qcow2 instance from a glance image much smaller than the 
flavor's disk size.
  * Generate some user data in the instance.
  * Delete the glance image.
  * Run a live block migration.

  Environment
  ===
  Issue observed in Newton, still present in master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1829000/+subscriptions



[Yahoo-eng-team] [Bug 1828862] Re: Listing servers with the "--all-tenants" and "--deleted" flag fails due to a bad marker

2019-05-14 Thread Surya Seetharaman
*** This bug is a duplicate of bug 1825034 ***
https://bugs.launchpad.net/bugs/1825034

yikes, looks like this is just a duplicate of
https://bugs.launchpad.net/nova/+bug/1825034 and I just retraced someone
else's steps. Marking this as a duplicate.

** This bug has been marked a duplicate of bug 1825034
   listing deleted servers from the API fails after running 
fill_virtual_interface_list online data migration

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1828862

Title:
  Listing servers with the "--all-tenants" and "--deleted" flag fails
  due to a bad marker

Status in OpenStack Compute (nova):
  New
Status in OpenStack Compute (nova) stein series:
  New

Bug description:
  If "nova list --all-tenants --deleted" is run after the
  "virtual_interface_obj.fill_virtual_interface_list" migration that was
  added in Stein it will fail with a " (HTTP 500)" error because of
  encountering the markers - which are basically one deleted instance
  per cell with the fake all zeros uuid. This will be a problem until
  the archival is run I guess. Anyhow while admin listing this marker
  should not even show up under the list of deleted servers. I guess
  this should be filtered out in some way.

  I am also not sure if the operator is supposed to just archive the
  nuisance marker because it defeats the purpose of the persistent
  marker.
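The filtering suggested in the description could look roughly like this. The uuid value is the all-zeros sentinel quoted in the traceback; the list-of-dicts shape and function name are assumptions for illustration, not nova's actual code:

```python
# Sketch of the suggested filtering; names and structures are hypothetical.
MARKER_UUID = '00000000-0000-0000-0000-000000000000'


def without_migration_markers(instances):
    """Drop the per-cell fill_virtual_interface_list marker instances."""
    return [inst for inst in instances if inst['uuid'] != MARKER_UUID]
```

Applied before building the API response, the marker rows would no longer appear in admin listings of deleted servers while still persisting in the database.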

  Traceback
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: DEBUG 
nova.objects.instance [None req-df297b54-82b1-43ea-83c1-fac0f27705a9 admin 
admin] Lazy-loading 'flavor' on Instance uuid 
---- {{(pid=19555) obj_load_attr 
/opt/stack/nova/nova/objects/instance.py:1110}}
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi [None req-df297b54-82b1-43ea-83c1-fac0f27705a9 admin 
admin] Unexpected exception in API method: OrphanedObjectError: Cannot call 
obj_load_attr on orphaned Instance object
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi Traceback (most recent call last):
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi   File "/opt/stack/nova/nova/api/openstack/wsgi.py", 
line 671, in wrapped
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi return f(*args, **kwargs)
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 192, in wrapper
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi return func(*args, **kwargs)
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 192, in wrapper
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi return func(*args, **kwargs)
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 192, in wrapper
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi return func(*args, **kwargs)
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi   File 
"/opt/stack/nova/nova/api/openstack/compute/servers.py", line 136, in detail
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi servers = self._get_servers(req, is_detail=True)
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi   File 
"/opt/stack/nova/nova/api/openstack/compute/servers.py", line 330, in 
_get_servers
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi req, instance_list, 
cell_down_support=cell_down_support)
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi   File 
"/opt/stack/nova/nova/api/openstack/compute/views/servers.py", line 390, in 
detail
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi cell_down_support=cell_down_support)
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi   File 
"/opt/stack/nova/nova/api/openstack/compute/views/servers.py", line 425, in 
_list_view
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi for server in servers]
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi   File 
"/opt/stack/nova/nova/api/openstack/compute/views/servers.py", line 222, in show
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.openstack.wsgi show_extra_specs),
  May 13 16:41:51 surya001 devstack@n-api.service[19544]: ERROR 
nova.api.

[Yahoo-eng-team] [Bug 1828966] Re: tox doesn't catch invalid interpreter

2019-05-14 Thread Abhishek Kekane
For nova and glance it doesn't run the tests, but for cinder it executes
the tests as well. A virtual environment directory py23 is created under
the .tox directory.

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1828966

Title:
  tox doesn't catch invalid interpreter

Status in Cinder:
  New
Status in Glance:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  When you pass an invalid interpreter to the tox command, it installs
  the dependencies and requirements in a virtual environment and exits
  without running any tests. It should instead throw an "Interpreter Not
  Found" error.

  tox -e py23
  py23 create: /opt/stack/glance/.tox/py23
  py23 installdeps: -r/opt/stack/glance/test-requirements.txt
  py23 develop-inst: /opt/stack/glance
  py23 installed: 
alabaster==0.7.12,alembic==1.0.10,amqp==2.4.2,appdirs==1.4.3,asn1crypto==0.24.0,automaton==1.16.0,Babel==2.6.0,cachetools==3.1.0,castellan==1.2.2,certifi==2019.3.9,cffi==1.12.3,chardet==3.0.4,cliff==2.14.1,cmd2==0.8.9,coverage==4.5.3,cryptography==2.6.1,cursive==0.2.2,ddt==1.2.1,debtcollector==1.21.0,decorator==4.4.0,defusedxml==0.6.0,dnspython==1.15.0,doc8==0.8.0,docutils==0.14,dogpile.cache==0.7.1,eventlet==0.24.1,extras==1.0.0,fasteners==0.14.1,fixtures==3.0.0,flake8==2.5.5,future==0.17.1,futurist==1.8.1,-e
 
git+https://git.openstack.org/openstack/glance.git@18e71c8e759aa4031da6258bff519ae206145fe6#egg=glance,glance-store==0.28.0,greenlet==0.4.15,hacking==0.12.0,httplib2==0.12.3,idna==2.8,imagesize==1.1.0,iso8601==0.1.12,Jinja2==2.10.1,jmespath==0.9.4,jsonpatch==1.23,jsonpointer==2.0,jsonschema==2.6.0,keystoneauth1==3.14.0,keystonemiddleware==6.0.0,kombu==4.5.0,linecache2==1.0.0,Mako==1.0.9,MarkupSafe==1.1.1,mccabe==0.2.1,mock==3.0.4,monotonic==1.5,mox3==0.27.0,msgpack==0.6.1,munch==2.3.2,netaddr==0.7.19,netifaces==0.10.9,networkx==2.3,openstacksdk==0.27.0,os-client-config==1.32.0,os-service-types==1.7.0,os-win==4.2.0,oslo.cache==1.34.0,oslo.concurrency==3.29.1,oslo.config==6.9.0,oslo.context==2.22.1,oslo.db==4.46.0,oslo.i18n==3.23.1,oslo.log==3.43.0,oslo.messaging==9.6.0,oslo.middleware==3.38.0,oslo.policy==2.2.0,oslo.serialization==2.29.0,oslo.service==1.38.0,oslo.upgradecheck==0.2.1,oslo.utils==3.41.0,oslotest==3.7.1,osprofiler==2.7.0,packaging==19.0,Paste==3.0.8,PasteDeploy==2.0.1,pbr==5.2.0,pep8==1.5.7,prettytable==0.7.2,psutil==5.6.2,psycopg2==2.8.2,pycadf==2.9.0,pycparser==2.19,pydot==1.4.1,pyflakes==0.8.1,Pygments==2.4.0,pyinotify==0.9.6,PyMySQL==0.9.3,pyOpenSSL==19.0.0,pyparsing==2.4.0,pyperclip==1.7.0,pysendfile==2.0.1,python-barbicanclient==4.8.1,python-dateutil==2.8.0,python-editor==1.0.4,python-keystoneclient==3.19.0,python-mimeparse==1.6.0,python-subunit==1.3.0,python-swiftclient==3.7.0,pytz==2019.1,PyYAML==5.1,repoze.lru==0.7,requests==2.21.0,requestsexceptions==1.4.0,restructuredtext-lint==1.3.0,retrying==1.3.3,rfc3986==1.3.1,Routes==2.4.1,simplegeneric==0.8.1,six==1.12.0,snowballstemmer==1.2.1,Sphinx==2.0.1,sphinxcontrib-applehelp==1.0.1,sphinxcontrib-devhelp==1.0.1,sphinxcontrib-htmlhelp==1.0.2,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-qthelp==1.0.2,sphinxcontrib-serializinghtml==1.1.3,SQLAlchemy==1.2.18,sqlalchemy-migrate==0.12.0,sqlparse==0.3.0,statsd==3.3.0,stestr==2.3.1,stevedore==1.30.1,taskflow==3.4.0,Tempita==0.5.2,tenacity==5.0.4,testrepository
==0.0.20,testresources==2.0.1,testscenarios==0.5.0,testtools==2.3.0,traceback2==1.4.0,unittest2==1.1.0,urllib3==1.24.3,vine==1.3.0,voluptuous==0.11.5,wcwidth==0.1.7,WebOb==1.8.5,wrapt==1.11.1,WSME==0.9.3,xattr==0.9.6,yappi==1.0
  py23 run-test-pre: PYTHONHASHSEED='1359514857'
  py23 runtests: commands[0] | find . -type f -name '*.pyc' -delete
   summary 
_
py23: commands succeeded
congratulations :)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1828966/+subscriptions



[Yahoo-eng-team] [Bug 1828966] [NEW] tox doesn't catch invalid interpreter

2019-05-14 Thread Abhishek Kekane
Public bug reported:

When you pass an invalid interpreter to the tox command, it installs
the dependencies and requirements in a virtual environment and exits
without running any tests. It should instead throw an "Interpreter Not
Found" error.

tox -e py23
py23 create: /opt/stack/glance/.tox/py23
py23 installdeps: -r/opt/stack/glance/test-requirements.txt
py23 develop-inst: /opt/stack/glance
py23 installed: 
alabaster==0.7.12,alembic==1.0.10,amqp==2.4.2,appdirs==1.4.3,asn1crypto==0.24.0,automaton==1.16.0,Babel==2.6.0,cachetools==3.1.0,castellan==1.2.2,certifi==2019.3.9,cffi==1.12.3,chardet==3.0.4,cliff==2.14.1,cmd2==0.8.9,coverage==4.5.3,cryptography==2.6.1,cursive==0.2.2,ddt==1.2.1,debtcollector==1.21.0,decorator==4.4.0,defusedxml==0.6.0,dnspython==1.15.0,doc8==0.8.0,docutils==0.14,dogpile.cache==0.7.1,eventlet==0.24.1,extras==1.0.0,fasteners==0.14.1,fixtures==3.0.0,flake8==2.5.5,future==0.17.1,futurist==1.8.1,-e
 
git+https://git.openstack.org/openstack/glance.git@18e71c8e759aa4031da6258bff519ae206145fe6#egg=glance,glance-store==0.28.0,greenlet==0.4.15,hacking==0.12.0,httplib2==0.12.3,idna==2.8,imagesize==1.1.0,iso8601==0.1.12,Jinja2==2.10.1,jmespath==0.9.4,jsonpatch==1.23,jsonpointer==2.0,jsonschema==2.6.0,keystoneauth1==3.14.0,keystonemiddleware==6.0.0,kombu==4.5.0,linecache2==1.0.0,Mako==1.0.9,MarkupSafe==1.1.1,mccabe==0.2.1,mock==3.0.4,monotonic==1.5,mox3==0.27.0,msgpack==0.6.1,munch==2.3.2,netaddr==0.7.19,netifaces==0.10.9,networkx==2.3,openstacksdk==0.27.0,os-client-config==1.32.0,os-service-types==1.7.0,os-win==4.2.0,oslo.cache==1.34.0,oslo.concurrency==3.29.1,oslo.config==6.9.0,oslo.context==2.22.1,oslo.db==4.46.0,oslo.i18n==3.23.1,oslo.log==3.43.0,oslo.messaging==9.6.0,oslo.middleware==3.38.0,oslo.policy==2.2.0,oslo.serialization==2.29.0,oslo.service==1.38.0,oslo.upgradecheck==0.2.1,oslo.utils==3.41.0,oslotest==3.7.1,osprofiler==2.7.0,packaging==19.0,Paste==3.0.8,PasteDeploy==2.0.1,pbr==5.2.0,pep8==1.5.7,prettytable==0.7.2,psutil==5.6.2,psycopg2==2.8.2,pycadf==2.9.0,pycparser==2.19,pydot==1.4.1,pyflakes==0.8.1,Pygments==2.4.0,pyinotify==0.9.6,PyMySQL==0.9.3,pyOpenSSL==19.0.0,pyparsing==2.4.0,pyperclip==1.7.0,pysendfile==2.0.1,python-barbicanclient==4.8.1,python-dateutil==2.8.0,python-editor==1.0.4,python-keystoneclient==3.19.0,python-mimeparse==1.6.0,python-subunit==1.3.0,python-swiftclient==3.7.0,pytz==2019.1,PyYAML==5.1,repoze.lru==0.7,requests==2.21.0,requestsexceptions==1.4.0,restructuredtext-lint==1.3.0,retrying==1.3.3,rfc3986==1.3.1,Routes==2.4.1,simplegeneric==0.8.1,six==1.12.0,snowballstemmer==1.2.1,Sphinx==2.0.1,sphinxcontrib-applehelp==1.0.1,sphinxcontrib-devhelp==1.0.1,sphinxcontrib-htmlhelp==1.0.2,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-qthelp==1.0.2,sphinxcontrib-serializinghtml==1.1.3,SQLAlchemy==1.2.18,sqlalchemy-migrate==0.12.0,sqlparse==0.3.0,statsd==3.3.0,stestr==2.3.1,stevedore==1.30.1,taskflow==3.4.0,Tempita==0.5.2,tenacity==5.0.4,testrepository
==0.0.20,testresources==2.0.1,testscenarios==0.5.0,testtools==2.3.0,traceback2==1.4.0,unittest2==1.1.0,urllib3==1.24.3,vine==1.3.0,voluptuous==0.11.5,wcwidth==0.1.7,WebOb==1.8.5,wrapt==1.11.1,WSME==0.9.3,xattr==0.9.6,yappi==1.0
py23 run-test-pre: PYTHONHASHSEED='1359514857'
py23 runtests: commands[0] | find . -type f -name '*.pyc' -delete
 summary 
_
  py23: commands succeeded
  congratulations :)
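The fail-fast behaviour the report asks for can be sketched with the standard library; this is a hypothetical helper, not tox's real interpreter resolution:

```python
import shutil


def require_interpreter(executable):
    """Fail fast if the requested interpreter is not on PATH."""
    path = shutil.which(executable)
    if path is None:
        raise RuntimeError('InterpreterNotFound: ' + executable)
    return path
```

For the py23 env above, the resolved interpreter would be "python2.3", which does not exist on any modern system, so the check raises instead of silently creating a venv and skipping the tests.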

** Affects: glance
 Importance: Low
 Assignee: Abhishek Kekane (abhishek-kekane)
 Status: New

** Changed in: glance
   Importance: Undecided => Low

** Changed in: glance
 Assignee: (unassigned) => Abhishek Kekane (abhishek-kekane)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1828966

Title:
  tox doesn't catch invalid interpreter

Status in Glance:
  New

Bug description:
  When you pass an invalid interpreter to the tox command, it installs
  the dependencies and requirements in a virtual environment and exits
  without running any tests. It should instead throw an "Interpreter Not
  Found" error.

  tox -e py23
  py23 create: /opt/stack/glance/.tox/py23
  py23 installdeps: -r/opt/stack/glance/test-requirements.txt
  py23 develop-inst: /opt/stack/glance
  py23 installed: 
alabaster==0.7.12,alembic==1.0.10,amqp==2.4.2,appdirs==1.4.3,asn1crypto==0.24.0,automaton==1.16.0,Babel==2.6.0,cachetools==3.1.0,castellan==1.2.2,certifi==2019.3.9,cffi==1.12.3,chardet==3.0.4,cliff==2.14.1,cmd2==0.8.9,coverage==4.5.3,cryptography==2.6.1,cursive==0.2.2,ddt==1.2.1,debtcollector==1.21.0,decorator==4.4.0,defusedxml==0.6.0,dnspython==1.15.0,doc8==0.8.0,docutils==0.14,dogpile.cache==0.7.1,eventlet==0.24.1,extras==1.0.0,fasteners==0.14.1,fixtures==3.0.0,flake8==2.5.5,future==0.17.1,futurist==1.8.1,-e
 
git+https://git.openstack.org/openstack/glance.git@18e71c8e759aa4031da6258bff519ae206145fe6#egg=glance,glance-store==0.28.0,greenlet==0.4.15,hacking==0.12.0,httplib2==0.12.3,i
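The silent skip above can be illustrated with a small sketch (hypothetical
helper names, not tox's actual internals): an env name like py23 should be
resolved to an interpreter up front and rejected immediately when that
interpreter is missing, before any dependencies are installed.

```python
import shutil

def resolve_basepython(envname):
    """Map a pyXY env name to an interpreter name, e.g. py23 -> python2.3."""
    suffix = envname[2:]
    if envname.startswith("py") and suffix.isdigit():
        return "python" + suffix[0] + ("." + suffix[1:] if len(suffix) > 1 else "")
    # Not a pyXY factor: treat the env name itself as the interpreter.
    return envname

def check_interpreter(envname):
    """Fail fast when the requested interpreter is not on PATH."""
    exe = resolve_basepython(envname)
    path = shutil.which(exe)
    if path is None:
        raise RuntimeError("InterpreterNotFound: " + exe)
    return path
```

With a check like this, "tox -e py23" on a machine without python2.3 would
abort before the installdeps step instead of reporting success.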

[Yahoo-eng-team] [Bug 1811181] Re: In the form to add an interface in router, when we do not select subnet, no error message is shown

2019-05-14 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/629755
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=f62349a92bd90e913d5d56273cc542469216bf65
Submitter: Zuul
Branch: master

commit f62349a92bd90e913d5d56273cc542469216bf65
Author: pengyuesheng 
Date:   Thu Jan 10 11:26:29 2019 +0800

Add use_required_attribute = False in Add Interface Form

When an element is required and hidden,
the browser will report an error when submitting the form.

Here is the code location where Django adds the required attribute to the 
element:

https://github.com/django/django/blob/master/django/forms/boundfield.py#L221-L222

Change-Id: I68e1145efbe1837861aa1d66fceec497d6d97cb9
Closes-Bug: #1811181
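The effect of the one-line fix can be sketched without pulling in Django: the
helper below is a hypothetical, simplified mirror of the boundfield.py logic
linked above, where the widget only renders the HTML "required" attribute when
the field is required and the form has not opted out.

```python
def renders_required_attr(field_required, use_required_attribute):
    # Simplified mirror of BoundField's behavior: required=True reaches
    # the widget only when the field is required AND the form has not
    # set use_required_attribute = False.
    return bool(field_required and use_required_attribute)

# A required but hidden subnet_id <select>: with the default the browser
# blocks submit ("not focusable"); opting out avoids the browser check
# and lets server-side validation report the missing subnet instead.
print(renders_required_attr(True, True))   # -> True
print(renders_required_attr(True, False))  # -> False (the fix)
```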


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1811181

Title:
  In the form to add an interface in router, when we do not select
  subnet, no error message is shown

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Error message: An invalid form control with name='subnet_id' is not
  focusable

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1811181/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1828937] Re: Getting allocation candidates is slow with "placement microversion < 1.29" from rocky release

2019-05-14 Thread Tetsuro Nakamura
** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** No longer affects: nova

** Description changed:

  Description
  ===
- In rocky cycle, 'GET /allocation_candidates' started to be aware of nested 
providers from microversion 1.29.  
+ In rocky cycle, 'GET /allocation_candidates' started to be aware of nested 
providers from microversion 1.29.
  From microversion 1.29, it can join allocations from resource providers in 
the same tree.
  
  To keep the behavior of microversion before 1.29, it filters nested providers 
[1]
  This function "_exclude_nested_providers()" is skipped on microversion >= 
1.29 but is heavy on microversion < 1.29.
  This is executed and still heavy even if there is no nested providers in the 
environment when microversion < 1.29.
  
  [1]
  
https://github.com/openstack/placement/blob/e69366675a2ee4532ae3039104b1a5ee8d775083/placement/handlers/allocation_candidate.py#L207-L238
  
  Steps to reproduce
  ==
  
  * Create about 6000 resource providers with some inventory and aggregates 
(using placeload [2])
  * Query "GET 
/allocation_candidates?resources=VCPU:1,DISK_GB:10,MEMORY_MB:256&member_of=${SOME_AGGREGATE}&required=${SOME_TRAIT}"
 with microversion 1.29 and 1.25
  
  [2] https://github.com/cdent/placeload/tree/master/placeload
  
  Expected (Ideal) result
  ==
  
  * No performance difference with microversion 1.25 <-> 1.29
  
- Actual (Ideal) result
+ Actual result
  ==
  
  * __15.995s__ for microversion 1.25
  * __5.541s__ for microversion 1.29
  
  with profiler enabled,
  
  * __32.219s__ for microversion 1.25 - Note that 24.1s(75%) is consumed in the 
"_exclude_nested_providers()"
  * __7.871s__ for microversion 1.29 - Note that this is roughly 32.219s - 
24.1s...

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1828937

Title:
  Getting allocation candidates is slow with "placement microversion <
  1.29" from rocky release

Status in OpenStack Compute (nova) rocky series:
  New

Bug description:
  Description
  ===
  In the rocky cycle, 'GET /allocation_candidates' became aware of nested
providers starting with microversion 1.29.
  From microversion 1.29, it can join allocations from resource providers in
the same tree.

  To keep the behavior of microversions before 1.29, it filters out nested
providers [1].
  This function, "_exclude_nested_providers()", is skipped on microversion >=
1.29 but is heavy on microversion < 1.29.
  It is executed, and still heavy, even if there are no nested providers in
the environment when microversion < 1.29.

  [1]
  
https://github.com/openstack/placement/blob/e69366675a2ee4532ae3039104b1a5ee8d775083/placement/handlers/allocation_candidate.py#L207-L238

  Steps to reproduce
  ==

  * Create about 6000 resource providers with some inventory and aggregates 
(using placeload [2])
  * Query "GET 
/allocation_candidates?resources=VCPU:1,DISK_GB:10,MEMORY_MB:256&member_of=${SOME_AGGREGATE}&required=${SOME_TRAIT}"
 with microversion 1.29 and 1.25

  [2] https://github.com/cdent/placeload/tree/master/placeload

  Expected (Ideal) result
  ==

  * No performance difference with microversion 1.25 <-> 1.29

  Actual result
  ==

  * __15.995s__ for microversion 1.25
  * __5.541s__ for microversion 1.29

  with profiler enabled,

  * __32.219s__ for microversion 1.25 - Note that 24.1s(75%) is consumed in the 
"_exclude_nested_providers()"
  * __7.871s__ for microversion 1.29 - Note that this is roughly 32.219s - 
24.1s...

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/rocky/+bug/1828937/+subscriptions



[Yahoo-eng-team] [Bug 1828937] [NEW] Getting allocation candidates is slow with "placement microversion < 1.29" from rocky release

2019-05-14 Thread Tetsuro Nakamura
Public bug reported:

Description
===
In the rocky cycle, 'GET /allocation_candidates' became aware of nested
providers starting with microversion 1.29.
From microversion 1.29, it can join allocations from resource providers in the
same tree.

To keep the behavior of microversions before 1.29, it filters out nested
providers [1].
This function, "_exclude_nested_providers()", is skipped on microversion >= 1.29
but is heavy on microversion < 1.29.
It is executed, and still heavy, even if there are no nested providers in the
environment when microversion < 1.29.

[1]
https://github.com/openstack/placement/blob/e69366675a2ee4532ae3039104b1a5ee8d775083/placement/handlers/allocation_candidate.py#L207-L238

Steps to reproduce
==

* Create about 6000 resource providers with some inventory and aggregates 
(using placeload [2])
* Query "GET 
/allocation_candidates?resources=VCPU:1,DISK_GB:10,MEMORY_MB:256&member_of=${SOME_AGGREGATE}&required=${SOME_TRAIT}"
 with microversion 1.29 and 1.25

[2] https://github.com/cdent/placeload/tree/master/placeload

Expected (Ideal) result
==

* No performance difference with microversion 1.25 <-> 1.29

Actual result
==

* __15.995s__ for microversion 1.25
* __5.541s__ for microversion 1.29

with profiler enabled,

* __32.219s__ for microversion 1.25 - Note that 24.1s(75%) is consumed in the 
"_exclude_nested_providers()"
* __7.871s__ for microversion 1.29 - Note that this is roughly 32.219s - 
24.1s...
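A rough sketch of why the pre-1.29 path stays expensive (hypothetical and
heavily simplified, not the actual code behind [1]): every candidate
allocation request is scanned against the provider parentage map, so the work
grows with (number of candidates) x (providers per candidate) even when the
deployment is completely flat and nothing can be excluded.

```python
def exclude_nested(alloc_requests, parent_of):
    """Drop any allocation request that touches a nested provider.

    alloc_requests: list of lists of provider uuids per candidate.
    parent_of: provider uuid -> parent uuid (None for root providers).
    """
    kept = []
    for req in alloc_requests:
        # This scan runs for every candidate regardless of whether any
        # nested provider exists in the environment.
        if all(parent_of.get(rp) is None for rp in req):
            kept.append(req)
    return kept

# Flat-plus-one-child example: the request touching rp3 is excluded.
parent_of = {"rp1": None, "rp2": None, "rp3": "rp1"}
reqs = [["rp1", "rp2"], ["rp2", "rp3"]]
print(exclude_nested(reqs, parent_of))  # -> [['rp1', 'rp2']]
```

Skipping this pass when the environment has no nested providers at all would
remove the entire cost for the common flat case.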

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1828937

Title:
  Getting allocation candidates is slow with "placement microversion <
  1.29" from rocky release

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  In the rocky cycle, 'GET /allocation_candidates' became aware of nested
providers starting with microversion 1.29.
  From microversion 1.29, it can join allocations from resource providers in
the same tree.

  To keep the behavior of microversions before 1.29, it filters out nested
providers [1].
  This function, "_exclude_nested_providers()", is skipped on microversion >=
1.29 but is heavy on microversion < 1.29.
  It is executed, and still heavy, even if there are no nested providers in
the environment when microversion < 1.29.

  [1]
  
https://github.com/openstack/placement/blob/e69366675a2ee4532ae3039104b1a5ee8d775083/placement/handlers/allocation_candidate.py#L207-L238

  Steps to reproduce
  ==

  * Create about 6000 resource providers with some inventory and aggregates 
(using placeload [2])
  * Query "GET 
/allocation_candidates?resources=VCPU:1,DISK_GB:10,MEMORY_MB:256&member_of=${SOME_AGGREGATE}&required=${SOME_TRAIT}"
 with microversion 1.29 and 1.25

  [2] https://github.com/cdent/placeload/tree/master/placeload

  Expected (Ideal) result
  ==

  * No performance difference with microversion 1.25 <-> 1.29

  Actual result
  ==

  * __15.995s__ for microversion 1.25
  * __5.541s__ for microversion 1.29

  with profiler enabled,

  * __32.219s__ for microversion 1.25 - Note that 24.1s(75%) is consumed in the 
"_exclude_nested_providers()"
  * __7.871s__ for microversion 1.29 - Note that this is roughly 32.219s - 
24.1s...

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1828937/+subscriptions
