Re: [Yahoo-eng-team] [Bug 1724895] Re: MTU not applied on private ethernet interfaces

2018-05-04 Thread Daniel Axtens
Glad that helped. I'm confused as to why cloud-init doesn't add the mac
addresses - it does in my VMs. Let's ask them; I'll add them to the bug.


** Description changed:

+ == cloud-init query ==
+ 
+ Cloud-init isn't adding MAC addresses in the match stanza for various
+ network interfaces in /etc/netplan/50-cloud-init.yaml, which is leading
+ to unexpected results. Is there any reason this would happen?
+ 
+ 
+ == Original Description ==
+ 
  Running nplan 0.30 on Ubuntu 17.10. From what I can tell, the
  ethernets/interface/mtu field isn't applied when testing against a
  private ethernet adapter.
  
  Using a netplan yaml file (/etc/netplan/10-ens7.yaml):
  
  ---
  
  network:
-   version: 2
-   renderer: networkd
-   ethernets:
- ens7:
-   mtu: 1450
-   dhcp4: no
-   addresses: [10.99.0.13/16]
+   version: 2
+   renderer: networkd
+   ethernets:
+ ens7:
+   mtu: 1450
+   dhcp4: no
+   addresses: [10.99.0.13/16]
  
  ---
  
  Then running "netplan apply" (or rebooting), yields the following:
  
  ---
  
  6: ens7:  mtu 1500 qdisc pfifo_fast state UP 
group default qlen 1000
- link/ether 52:54:00:2a:19:bc brd ff:ff:ff:ff:ff:ff
- inet 10.99.0.13/16 brd 10.99.255.255 scope global ens7
-valid_lft forever preferred_lft forever
- inet6 fe80::5054:ff:fe2a:19bc/64 scope link
-valid_lft forever preferred_lft forever
+ link/ether 52:54:00:2a:19:bc brd ff:ff:ff:ff:ff:ff
+ inet 10.99.0.13/16 brd 10.99.255.255 scope global ens7
+    valid_lft forever preferred_lft forever
+ inet6 fe80::5054:ff:fe2a:19bc/64 scope link
+    valid_lft forever preferred_lft forever
  
  ---
  
  Is this a bug in netplan, or am I misunderstanding the YAML syntax?
  
  Best,
  Andrew

** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1724895

Title:
  MTU not applied on private ethernet interfaces

Status in cloud-init:
  New
Status in netplan:
  New
Status in nplan package in Ubuntu:
  Confirmed

Bug description:
  == cloud-init query ==

  Cloud-init isn't adding MAC addresses in the match stanza for various
  network interfaces in /etc/netplan/50-cloud-init.yaml, which is
  leading to unexpected results. Is there any reason this would happen?
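  For comparison, a 50-cloud-init.yaml that does pin an interface would
  normally carry a match stanza along these lines (a sketch only — the
  exact keys cloud-init emits here are an assumption; the MAC address is
  the one from the ip output below):

```yaml
# Sketch of the expected cloud-init-rendered stanza (illustrative keys)
network:
    version: 2
    ethernets:
        ens7:
            match:
                macaddress: "52:54:00:2a:19:bc"
            set-name: ens7
            mtu: 1450
            dhcp4: false
```

  Without the match/macaddress pinning, netplan can fail to associate
  the configuration with the right device, which would explain the MTU
  not being applied.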

  
  == Original Description ==

  Running nplan 0.30 on Ubuntu 17.10. From what I can tell, the
  ethernets/interface/mtu field isn't applied when testing against a
  private ethernet adapter.

  Using a netplan yaml file (/etc/netplan/10-ens7.yaml):

  ---

  network:
    version: 2
    renderer: networkd
    ethernets:
      ens7:
        mtu: 1450
        dhcp4: no
        addresses: [10.99.0.13/16]

  ---

  Then running "netplan apply" (or rebooting), yields the following:

  ---

  6: ens7:  mtu 1500 qdisc pfifo_fast state UP 
group default qlen 1000
  link/ether 52:54:00:2a:19:bc brd ff:ff:ff:ff:ff:ff
  inet 10.99.0.13/16 brd 10.99.255.255 scope global ens7
     valid_lft forever preferred_lft forever
  inet6 fe80::5054:ff:fe2a:19bc/64 scope link
     valid_lft forever preferred_lft forever

  ---

  Is this a bug in netplan, or am I misunderstanding the YAML syntax?

  Best,
  Andrew

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1724895/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1769286] [NEW] NoVNCConsoleTestJSON.test_novnc intermittently fails with: SecurityProxyNegotiationFailed: Failed to negotiate security type with server: No compute auth available

2018-05-04 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/67/565367/10/gate/nova-
next/3e991f7/logs/screen-n-novnc-
cell1.txt.gz?level=TRACE#_May_04_21_50_42_066687

May 04 21:50:42.066687 ubuntu-xenial-rax-iad-0003876942 nova-novncproxy[30706]: 
ERROR nova.console.websocketproxy Traceback (most recent call last):
May 04 21:50:42.067054 ubuntu-xenial-rax-iad-0003876942 nova-novncproxy[30706]: 
ERROR nova.console.websocketproxy   File 
"/opt/stack/new/nova/nova/console/websocketproxy.py", line 293, in 
new_websocket_client
May 04 21:50:42.067426 ubuntu-xenial-rax-iad-0003876942 nova-novncproxy[30706]: 
ERROR nova.console.websocketproxy tsock = 
self.server.security_proxy.connect(tenant_sock, tsock)
May 04 21:50:42.067805 ubuntu-xenial-rax-iad-0003876942 nova-novncproxy[30706]: 
ERROR nova.console.websocketproxy   File 
"/opt/stack/new/nova/nova/console/securityproxy/rfb.py", line 173, in connect
May 04 21:50:42.068089 ubuntu-xenial-rax-iad-0003876942 nova-novncproxy[30706]: 
ERROR nova.console.websocketproxy reason=_("No compute auth available: %s") 
% six.text_type(e))
May 04 21:50:42.068314 ubuntu-xenial-rax-iad-0003876942 nova-novncproxy[30706]: 
ERROR nova.console.websocketproxy SecurityProxyNegotiationFailed: Failed to 
negotiate security type with server: No compute auth available: No matching 
auth scheme: allowed types: 'AuthType.NONE', desired types: '19'
May 04 21:50:42.068528 ubuntu-xenial-rax-iad-0003876942 nova-novncproxy[30706]: 
ERROR nova.console.websocketproxy 
May 04 21:50:42.090681 ubuntu-xenial-rax-iad-0003876942 nova-novncproxy[30706]: 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22SecurityProxyNegotiationFailed%3A%20Failed%20to%20negotiate%20security%20type%20with%20server%3A%20No%20compute%20auth%20available%3A%20No%20matching%20auth%20scheme%3A%20allowed%20types%3A%20'AuthType.NONE'%2C%20desired%20types%3A%20'19'%5C%22%20AND%20tags%3A%5C%22screen-n
-novnc-cell1.txt%5C%22=7d

This might be a fluke since it has only happened on one change, but it
happened in the gate on a docs-only patch, so the change itself should be
unrelated. It could be fallout from these recently merged changes:

https://review.openstack.org/#/c/333990/

https://review.openstack.org/#/c/527812/
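The failing negotiation in the traceback boils down to intersecting the
auth types the compute end allows with the types the proxy wants. A
simplified model of that step (not nova's actual code; the type numbers
are from the RFB security-type registry):

```python
import enum

class AuthType(enum.IntEnum):
    NONE = 1       # RFB "None" security type
    VENCRYPT = 19  # RFB "VeNCrypt" security type

def select_auth_scheme(allowed, desired):
    # Pick the first desired scheme the server also allows; the real
    # proxy raises SecurityProxyNegotiationFailed when none matches.
    for scheme in desired:
        if scheme in allowed:
            return scheme
    raise ValueError("No matching auth scheme: allowed types: %r, "
                     "desired types: %r" % (allowed, desired))
```

With allowed types limited to NONE and a desired VeNCrypt (19), the
intersection is empty, which is exactly the error in the log above.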

** Affects: nova
 Importance: High
 Status: Confirmed


** Tags: console vnc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1769286

Title:
  NoVNCConsoleTestJSON.test_novnc intermittently fails with:
  SecurityProxyNegotiationFailed: Failed to negotiate security type with
  server: No compute auth available: No matching auth scheme: allowed
  types: 'AuthType.NONE', desired types: '19'

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  http://logs.openstack.org/67/565367/10/gate/nova-
  next/3e991f7/logs/screen-n-novnc-
  cell1.txt.gz?level=TRACE#_May_04_21_50_42_066687

  May 04 21:50:42.066687 ubuntu-xenial-rax-iad-0003876942 
nova-novncproxy[30706]: ERROR nova.console.websocketproxy Traceback (most 
recent call last):
  May 04 21:50:42.067054 ubuntu-xenial-rax-iad-0003876942 
nova-novncproxy[30706]: ERROR nova.console.websocketproxy   File 
"/opt/stack/new/nova/nova/console/websocketproxy.py", line 293, in 
new_websocket_client
  May 04 21:50:42.067426 ubuntu-xenial-rax-iad-0003876942 
nova-novncproxy[30706]: ERROR nova.console.websocketproxy tsock = 
self.server.security_proxy.connect(tenant_sock, tsock)
  May 04 21:50:42.067805 ubuntu-xenial-rax-iad-0003876942 
nova-novncproxy[30706]: ERROR nova.console.websocketproxy   File 
"/opt/stack/new/nova/nova/console/securityproxy/rfb.py", line 173, in connect
  May 04 21:50:42.068089 ubuntu-xenial-rax-iad-0003876942 
nova-novncproxy[30706]: ERROR nova.console.websocketproxy reason=_("No 
compute auth available: %s") % six.text_type(e))
  May 04 21:50:42.068314 ubuntu-xenial-rax-iad-0003876942 
nova-novncproxy[30706]: ERROR nova.console.websocketproxy 
SecurityProxyNegotiationFailed: Failed to negotiate security type with server: 
No compute auth available: No matching auth scheme: allowed types: 
'AuthType.NONE', desired types: '19'
  May 04 21:50:42.068528 ubuntu-xenial-rax-iad-0003876942 
nova-novncproxy[30706]: ERROR nova.console.websocketproxy 
  May 04 21:50:42.090681 ubuntu-xenial-rax-iad-0003876942 
nova-novncproxy[30706]: http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22SecurityProxyNegotiationFailed%3A%20Failed%20to%20negotiate%20security%20type%20with%20server%3A%20No%20compute%20auth%20available%3A%20No%20matching%20auth%20scheme%3A%20allowed%20types%3A%20'AuthType.NONE'%2C%20desired%20types%3A%20'19'%5C%22%20AND%20tags%3A%5C%22screen-n
  -novnc-cell1.txt%5C%22=7d

  This might be a fluke since it has only happened on one change, but it
  happened in the gate on a docs-only patch, so the change itself should
  be unrelated. It could be fallout from these recently merged changes:

  

[Yahoo-eng-team] [Bug 1769283] [NEW] ImagePropertiesFilter has no default value resulting in unpredictable scheduling

2018-05-04 Thread Mohammed Naser
Public bug reported:

When using ImagePropertiesFilter for something like hardware
architecture, the lack of a default value can cause very unpredictable
behaviour.

In our case, a public cloud user will most likely upload an image
without `hw_architecture` defined anywhere (as most instructions and
general OpenStack documentation suggest).

In a case where multiple architectures are available, images tagged with
a specific architecture will go to hypervisors with that architecture,
but untagged images will go to *any* hypervisor.

Because certain architectures are so dominant, it should be possible to
set a 'default' value for the architecture, treated as the implied one,
with the ability for a user to override it with a specific value.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1769283

Title:
  ImagePropertiesFilter has no default value resulting in unpredictable
  scheduling

Status in OpenStack Compute (nova):
  New

Bug description:
  When using ImagePropertiesFilter for something like hardware
  architecture, the lack of a default value can cause very unpredictable
  behaviour.

  In our case, a public cloud user will most likely upload an image
  without `hw_architecture` defined anywhere (as most instructions and
  general OpenStack documentation suggest).

  In a case where multiple architectures are available, images tagged
  with a specific architecture will go to hypervisors with that
  architecture, but untagged images will go to *any* hypervisor.

  Because certain architectures are so dominant, it should be possible
  to set a 'default' value for the architecture, treated as the implied
  one, with the ability for a user to override it with a specific value.
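  The proposed behaviour can be sketched as follows — the `default_arch`
  knob is hypothetical, not an existing nova option:

```python
def host_passes(host_arch, image_props, default_arch="x86_64"):
    # Fall back to a configurable default when the image does not set
    # hw_architecture, instead of matching every hypervisor.
    wanted = image_props.get("hw_architecture") or default_arch
    return wanted == host_arch
```

  Untagged images would then schedule as the default architecture rather
  than landing on hypervisors of any architecture.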

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1769283/+subscriptions



[Yahoo-eng-team] [Bug 1596119] Re: Can't delete instance with numa_topology after upgrading from kilo

2018-05-04 Thread Matt Riedemann
** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/mitaka
 Assignee: (unassigned) => Lee Yarwood (lyarwood)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1596119

Title:
  Can't delete instance with numa_topology after upgrading from kilo

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  Fix Released

Bug description:
  Using the RDO Kilo version of Nova (2015.1.0-3) with dedicated cpu
  pinning populates the numa_topology database field with data at
  "nova_object.version": "1.1". After upgrading to Liberty a new
  instance will be created with a 1.2 object version however already
  existing instances created under Kilo remain at 1.1. Attempting to do
  many actions on these instances (start, delete) will fail with the
  following error: RemoteError: Remote error: InvalidTargetVersion
  Invalid target version 1.2.

  Updating from Kilo to Mitaka produces the same problem.
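  A minimal model of the failure mode — this mirrors the
  oslo.versionedobjects error message, but it is not the real API:

```python
class InvalidTargetVersion(Exception):
    pass

# Versions the old (Kilo-era) payload knows how to present.
SUPPORTED_VERSIONS = ("1.0", "1.1")

def obj_to_target(data, target_version):
    # A 1.1 payload asked for target 1.2 has no conversion path on the
    # old side, so the request is rejected.
    if target_version not in SUPPORTED_VERSIONS:
        raise InvalidTargetVersion(
            "Invalid target version %s" % target_version)
    return data
```

  Starting or deleting a Kilo-created instance triggers the rejected
  1.2 request, producing the RemoteError quoted above.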

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1596119/+subscriptions



[Yahoo-eng-team] [Bug 1596119] Re: Can't delete instance with numa_topology after upgrading from kilo

2018-05-04 Thread Matt Riedemann
** Changed in: nova/mitaka
   Status: New => Fix Released

** Changed in: nova/mitaka
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1596119

Title:
  Can't delete instance with numa_topology after upgrading from kilo

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  Fix Released

Bug description:
  Using the RDO Kilo version of Nova (2015.1.0-3) with dedicated cpu
  pinning populates the numa_topology database field with data at
  "nova_object.version": "1.1". After upgrading to Liberty a new
  instance will be created with a 1.2 object version however already
  existing instances created under Kilo remain at 1.1. Attempting to do
  many actions on these instances (start, delete) will fail with the
  following error: RemoteError: Remote error: InvalidTargetVersion
  Invalid target version 1.2.

  Updating from Kilo to Mitaka produces the same problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1596119/+subscriptions



[Yahoo-eng-team] [Bug 1769227] [NEW] Openstack quota update doesnt work for --secgroups.

2018-05-04 Thread Akancha
Public bug reported:

Openstack release: Mitaka
I am trying to update the "secgroups" quota for a project. Below is what
I did:

1> $openstack quota set --secgroups 60 ProjectX ---> Then, when I ran
$openstack quota show ProjectX, the "secgroups" field was not updated and
was still 10.

2> Then I tried $nova quota-update --security-groups 60 ProjectX,
followed again by $openstack quota show ProjectX. This also did not
update the secgroups field.

3> $nova quota-show --tenant ProjectX -> However, this command reported
"| security_groups | 60 |", which is correct. But "openstack quota show"
does not give the right information.

4> On the horizon dashboard, the security group quota is still 10.

There is a mismatch in the "secgroups" values reported by the different
commands (openstack and nova), and the dashboard also does not reflect
the correct value.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1769227

Title:
  Openstack quota update doesnt work for --secgroups.

Status in OpenStack Compute (nova):
  New

Bug description:
  Openstack release: Mitaka
  I am trying to update the "secgroups" quota for a project. Below is
  what I did:

  1> $openstack quota set --secgroups 60 ProjectX ---> Then, when I ran
  $openstack quota show ProjectX, the "secgroups" field was not updated
  and was still 10.

  2> Then I tried $nova quota-update --security-groups 60 ProjectX,
  followed again by $openstack quota show ProjectX. This also did not
  update the secgroups field.

  3> $nova quota-show --tenant ProjectX -> However, this command
  reported "| security_groups | 60 |", which is correct. But "openstack
  quota show" does not give the right information.

  4> On the horizon dashboard, the security group quota is still 10.

  There is a mismatch in the "secgroups" values reported by the
  different commands (openstack and nova), and the dashboard also does
  not reflect the correct value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1769227/+subscriptions



[Yahoo-eng-team] [Bug 1750777] Re: openvswitch agent eating CPU, time spent in ip_conntrack.py

2018-05-04 Thread Launchpad Bug Tracker
This bug was fixed in the package neutron - 2:12.0.1-0ubuntu2

---
neutron (2:12.0.1-0ubuntu2) cosmic; urgency=medium

  * d/p/remove-race-and-simplify-conntrack-state-management.patch:
Cherry-picked from upstream stable/queens branch to prevent
ovs-agent from eating up CPU (LP: #1750777).

 -- Corey Bryant   Wed, 02 May 2018 15:00:58
-0400

** Changed in: neutron (Ubuntu Cosmic)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750777

Title:
  openvswitch agent eating CPU, time spent in ip_conntrack.py

Status in Ubuntu Cloud Archive:
  Fix Committed
Status in Ubuntu Cloud Archive queens series:
  Fix Committed
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Bionic:
  Fix Committed
Status in neutron source package in Cosmic:
  Fix Released

Bug description:
  We just ran into a case where the openvswitch agent (local dev
  destack, current master branch) eats 100% of CPU time.

  Pyflame profiling shows the time being largely spent in
  neutron.agent.linux.ip_conntrack, line 95.

  
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/ip_conntrack.py#L95

  The code around this line is:

  while True:
      pool.spawn_n(self._process_queue)

  The documentation of eventlet.spawn_n says: "The same as spawn(), but
  it’s not possible to know how the function terminated (i.e. no return
  value or exceptions). This makes execution faster. See spawn_n for
  more details."  I suspect that GreenPool.spawn_n may behave similarly.

  It seems plausible that spawn_n is returning very quickly because of
  some error, after which all the time is spent spinning in a
  short-circuited while loop.
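  The difference between the re-spawning dispatcher and a blocking
  consumer can be modelled without eventlet at all — here with a plain
  thread and queue (a simplified sketch, not the neutron code):

```python
import queue
import threading

events = queue.Queue()

def process_queue():
    # Blocks until an event arrives, so an idle worker sleeps instead
    # of spinning.
    return events.get()

def worker(results):
    results.append(process_queue())

# The buggy shape from ip_conntrack.py is roughly:
#
#     while True:
#         pool.spawn_n(self._process_queue)
#
# If spawn_n returns immediately (e.g. the worker errors out), nothing
# in the loop body blocks, and the while loop spins at 100% CPU.
#
# A long-lived worker blocking on the queue avoids that: the dispatcher
# never needs a tight re-spawn loop.
results = []
t = threading.Thread(target=worker, args=(results,))
t.start()
events.put("conntrack-update")
t.join(timeout=5)
```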

  SRU details for Ubuntu:
  ---
  [Impact]
  We're cherry-picking a single bug-fix patch here from the upstream 
stable/queens branch as there is not currently an upstream stable point release 
available that includes this fix. We'd like to make sure all of our supported 
customers have access to this fix as there is a significant performance hit 
without it.

  [Test Case]
  The following SRU process was followed:
  https://wiki.ubuntu.com/OpenStackUpdates

  In order to avoid regression of existing consumers, the OpenStack team
  will run their continuous integration test against the packages that
  are in -proposed. A successful run of all available tests will be
  required before the proposed packages can be let into -updates.

  The OpenStack team will be in charge of attaching the output summary
  of the executed tests. The OpenStack team members will not mark
  ‘verification-done’ until this has happened.

  [Regression Potential]
  In order to mitigate the regression potential, the results of the
  aforementioned tests are attached to this bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1750777/+subscriptions



[Yahoo-eng-team] [Bug 1615899] Re: [api-ref]: "Show images" should be changed to "List images"

2018-05-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/358946
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=71a3ac62351aaa15f10ea4bb0ef90baa2adf9eac
Submitter: Zuul
Branch: master

commit 71a3ac62351aaa15f10ea4bb0ef90baa2adf9eac
Author: Ha Van Tu 
Date:   Tue Aug 23 12:13:25 2016 +0700

[api-ref] "Show images" should be changed to "List images"

"List images" had been taken by the v1 API, but now that its
api reference has been removed, that's no longer a problem.
Can also remove the superfluous "an"s that were used in the
v2 titles to prevent sphinx conflicts.

Co-authored-by: Ha Van Tu 
Co-authored-by: Brian Rosmaita 

Change-Id: I2c1ce3f5c1279a93a08855c554e156c4fc115f61
Closes-Bug: #1615899


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1615899

Title:
  [api-ref]: "Show images" should be changed to "List images"

Status in Glance:
  Fix Released

Bug description:
  Image Service API v2: 
developer.openstack.org/api-ref/image/v2/index.html#show-images
  I think "show images" should be changed to "list images" to standardize API 
methods (list, show, create, update, delete)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1615899/+subscriptions



[Yahoo-eng-team] [Bug 1769131] Re: After cold-migration, disk.info file leftover on source host

2018-05-04 Thread Matt Riedemann
This is likely a side effect of the fix for bug 1728603
https://review.openstack.org/#/c/516395/.

** Tags added: libvirt resize

** Changed in: nova
   Status: New => Triaged

** Summary changed:

- After cold-migration, disk.info file leftover on source host
+ After cold-migration of a volume-backed instance, disk.info file leftover on 
source host

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1769131

Title:
  After cold-migration of a volume-backed instance, disk.info file
  leftover on source host

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) ocata series:
  New
Status in OpenStack Compute (nova) pike series:
  New
Status in OpenStack Compute (nova) queens series:
  New

Bug description:
  Tested using kolla-ansible, with kolla images stable/queens.

  In this setup there are only two compute nodes, with cinder/lvm for
  storage.

  A cirros instance is created, on compute02, and cold-migrated to
  compute01.

  At the step where it's awaiting confirmation, the following files can
  be found:

  compute01
  /var/lib/docker/volumes/nova_compute/_data/instances
  \-- 371e669b-0f15-49f2-9a84-bd1e89f34294
  \-- console.log

  compute02
  1 directory, 1 file
  /var/lib/docker/volumes/nova_compute/_data/instances
  \-- 371e669b-0f15-49f2-9a84-bd1e89f34294_resize
  \-- console.log

  1 directory, 1 file

  After confirming the migrate/resize, this becomes:

  compute01
  /var/lib/docker/volumes/nova_compute/_data/instances
  \-- 371e669b-0f15-49f2-9a84-bd1e89f34294
  \-- console.log

  compute02
  1 directory, 1 file
  /var/lib/docker/volumes/nova_compute/_data/instances
  \-- 371e669b-0f15-49f2-9a84-bd1e89f34294
  \-- disk.info

  1 directory, 1 file

  This log shows that after the _resize directory is cleaned up, the
  disk.info file ends up on the source host, where it is left behind.

  http://paste.openstack.org/show/720358/

  2018-05-04 12:55:10.818 7 DEBUG nova.compute.manager 
[req-510561e2-eabb-4c37-8fc3-d56e9f50bf6e 64ca3042227c48ea84d77461b14b8acb 
7ea70c4f74c24199b14df0a570b6f93e - default default] [instance: 
371e669b-0f15-49f2-9a84-bd1e89f34294] Going to confirm migration 4 
do_confirm_resize /usr/lib/python2.7/site-packages/nova/compute/manager.py:3684
  2018-05-04 12:55:11.032 7 DEBUG oslo_concurrency.lockutils 
[req-510561e2-eabb-4c37-8fc3-d56e9f50bf6e 64ca3042227c48ea84d77461b14b8acb 
7ea70c4f74c24199b14df0a570b6f93e - default default] Acquired semaphore 
"refresh_cache-371e669b-0f15-49f2-9a84-bd1e89f34294" lock 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
  2018-05-04 12:55:11.033 7 DEBUG nova.network.neutronv2.api 
[req-510561e2-eabb-4c37-8fc3-d56e9f50bf6e 64ca3042227c48ea84d77461b14b8acb 
7ea70c4f74c24199b14df0a570b6f93e - default default] [instance: 
371e669b-0f15-49f2-9a84-bd1e89f34294] _get_instance_nw_info() 
_get_instance_nw_info 
/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1383
  2018-05-04 12:55:11.034 7 DEBUG nova.objects.instance 
[req-510561e2-eabb-4c37-8fc3-d56e9f50bf6e 64ca3042227c48ea84d77461b14b8acb 
7ea70c4f74c24199b14df0a570b6f93e - default default] Lazy-loading 'info_cache' 
on Instance uuid 371e669b-0f15-49f2-9a84-bd1e89f34294 obj_load_attr 
/usr/lib/python2.7/site-packages/nova/objects/instance.py:1052
  2018-05-04 12:55:11.406 7 DEBUG nova.network.base_api 
[req-510561e2-eabb-4c37-8fc3-d56e9f50bf6e 64ca3042227c48ea84d77461b14b8acb 
7ea70c4f74c24199b14df0a570b6f93e - default default] [instance: 
371e669b-0f15-49f2-9a84-bd1e89f34294] Updating instance_info_cache with 
network_info: [{"profile": {}, "ovs_interfaceid": 
"ba8646b4-fa66-46b9-9f7e-a83163668bb8", "preserve_on_delete": false, "network": 
{"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 4, "type": 
"fixed", "floating_ips": [], "address": "10.0.0.8"}], "version": 4, "meta": 
{"dhcp_server": "10.0.0.2"}, "dns": [{"meta": {}, "version": 4, "type": "dns", 
"address": "8.8.8.8"}], "routes": [], "cidr": "10.0.0.0/24", "gateway": 
{"meta": {}, "version": 4, "type": "gateway", "address": "10.0.0.1"}}], "meta": 
{"injected": false, "tenant_id": "7ea70c4f74c24199b14df0a570b6f93e", "mtu": 
1450}, "id": "f1d14432-5a26-4b0a-89e7-6683bd7d2477", "label": "demo-net"}, 
"devname": "tapba8646b4-fa", "vnic_type": "normal", "qbh_params": null, "meta": 
{}, "details": {"port_filter": true, "datapath_type": "system", 
"ovs_hybrid_plug": true}, "address": "fa:16:3e:d9:91:37", "active": true, 
"type": "ovs", "id": "ba8646b4-fa66-46b9-9f7e-a83163668bb8", "qbg_params": 
null}] update_instance_cache_with_nw_info 
/usr/lib/python2.7/site-packages/nova/network/base_api.py:48
  

[Yahoo-eng-team] [Bug 1747995] Fix merged to neutron-lib (master)

2018-05-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/565572
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=b8be5da69d046785b5b16ffb7c5122d9f28c74e8
Submitter: Zuul
Branch: master

commit b8be5da69d046785b5b16ffb7c5122d9f28c74e8
Author: Hongbin Lu 
Date:   Tue May 1 19:59:15 2018 +

api-ref: add filter parameters to resource management

Change-Id: I035d4faf9fe867dfcc90b47e22e8f54976a4da35
Partial-Bug: #1747995


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1747995

Title:
  api-ref: document all the filters of each resource

Status in neutron:
  Fix Released

Bug description:
  Currently, the only documentation about filters is:
  https://developer.openstack.org/api-ref/network/v2/#filtering-and-
  column-selection . It explains the high level behavior about filters
  but doesn't have detailed information about the exact list of filters
  for each resource.

  It would be nice if neutron can document all the filters for each API
  resources. This is desirable because the behavior of each filter is
  heterogeneous across different API resources. From end-users point of
  view, they need to know exactly which filter is supported by which
  resource and how to specify each filter value.

  If the filters documentation is missing or incomplete, it is hard to
  predict the API behavior which will cause issue like:
  https://bugs.launchpad.net/neutron/+bug/1418635 .

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1747995/+subscriptions



[Yahoo-eng-team] [Bug 1769131] [NEW] After cold-migration, disk.info file leftover on source host

2018-05-04 Thread James McCarthy
Public bug reported:

Tested using kolla-ansible, with kolla images stable/queens.

In this setup there are only two compute nodes, with cinder/lvm for
storage.

A cirros instance is created, on compute02, and cold-migrated to
compute01.

At the step where it's awaiting confirmation, the following files can be
found:

compute01
/var/lib/docker/volumes/nova_compute/_data/instances
\-- 371e669b-0f15-49f2-9a84-bd1e89f34294
\-- console.log

compute02
1 directory, 1 file
/var/lib/docker/volumes/nova_compute/_data/instances
\-- 371e669b-0f15-49f2-9a84-bd1e89f34294_resize
\-- console.log

1 directory, 1 file

After confirming the migrate/resize, this becomes:

compute01
/var/lib/docker/volumes/nova_compute/_data/instances
\-- 371e669b-0f15-49f2-9a84-bd1e89f34294
\-- console.log

compute02
1 directory, 1 file
/var/lib/docker/volumes/nova_compute/_data/instances
\-- 371e669b-0f15-49f2-9a84-bd1e89f34294
\-- disk.info

1 directory, 1 file

This log shows that after the _resize directory is cleaned up, the
disk.info file ends up on the source host, where it is left behind.

http://paste.openstack.org/show/720358/

2018-05-04 12:55:10.818 7 DEBUG nova.compute.manager 
[req-510561e2-eabb-4c37-8fc3-d56e9f50bf6e 64ca3042227c48ea84d77461b14b8acb 
7ea70c4f74c24199b14df0a570b6f93e - default default] [instance: 
371e669b-0f15-49f2-9a84-bd1e89f34294] Going to confirm migration 4 
do_confirm_resize /usr/lib/python2.7/site-packages/nova/compute/manager.py:3684
2018-05-04 12:55:11.032 7 DEBUG oslo_concurrency.lockutils 
[req-510561e2-eabb-4c37-8fc3-d56e9f50bf6e 64ca3042227c48ea84d77461b14b8acb 
7ea70c4f74c24199b14df0a570b6f93e - default default] Acquired semaphore 
"refresh_cache-371e669b-0f15-49f2-9a84-bd1e89f34294" lock 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
2018-05-04 12:55:11.033 7 DEBUG nova.network.neutronv2.api 
[req-510561e2-eabb-4c37-8fc3-d56e9f50bf6e 64ca3042227c48ea84d77461b14b8acb 
7ea70c4f74c24199b14df0a570b6f93e - default default] [instance: 
371e669b-0f15-49f2-9a84-bd1e89f34294] _get_instance_nw_info() 
_get_instance_nw_info 
/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1383
2018-05-04 12:55:11.034 7 DEBUG nova.objects.instance 
[req-510561e2-eabb-4c37-8fc3-d56e9f50bf6e 64ca3042227c48ea84d77461b14b8acb 
7ea70c4f74c24199b14df0a570b6f93e - default default] Lazy-loading 'info_cache' 
on Instance uuid 371e669b-0f15-49f2-9a84-bd1e89f34294 obj_load_attr 
/usr/lib/python2.7/site-packages/nova/objects/instance.py:1052
2018-05-04 12:55:11.406 7 DEBUG nova.network.base_api 
[req-510561e2-eabb-4c37-8fc3-d56e9f50bf6e 64ca3042227c48ea84d77461b14b8acb 
7ea70c4f74c24199b14df0a570b6f93e - default default] [instance: 
371e669b-0f15-49f2-9a84-bd1e89f34294] Updating instance_info_cache with 
network_info: [{"profile": {}, "ovs_interfaceid": 
"ba8646b4-fa66-46b9-9f7e-a83163668bb8", "preserve_on_delete": false, "network": 
{"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 4, "type": 
"fixed", "floating_ips": [], "address": "10.0.0.8"}], "version": 4, "meta": 
{"dhcp_server": "10.0.0.2"}, "dns": [{"meta": {}, "version": 4, "type": "dns", 
"address": "8.8.8.8"}], "routes": [], "cidr": "10.0.0.0/24", "gateway": 
{"meta": {}, "version": 4, "type": "gateway", "address": "10.0.0.1"}}], "meta": 
{"injected": false, "tenant_id": "7ea70c4f74c24199b14df0a570b6f93e", "mtu": 
1450}, "id": "f1d14432-5a26-4b0a-89e7-6683bd7d2477", "label": "demo-net"}, 
"devname": "tapba8646b4-fa", "vnic_type": "normal", "qbh_params": null, "meta": 
{}, "details": {"port_filter": true, "datapath_type": "system", 
"ovs_hybrid_plug": true}, "address": "fa:16:3e:d9:91:37", "active": true, 
"type": "ovs", "id": "ba8646b4-fa66-46b9-9f7e-a83163668bb8", "qbg_params": 
null}] update_instance_cache_with_nw_info 
/usr/lib/python2.7/site-packages/nova/network/base_api.py:48
2018-05-04 12:55:11.426 7 DEBUG oslo_concurrency.lockutils 
[req-510561e2-eabb-4c37-8fc3-d56e9f50bf6e 64ca3042227c48ea84d77461b14b8acb 
7ea70c4f74c24199b14df0a570b6f93e - default default] Releasing semaphore 
"refresh_cache-371e669b-0f15-49f2-9a84-bd1e89f34294" lock 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228
2018-05-04 12:55:11.426 7 DEBUG oslo_concurrency.processutils 
[req-510561e2-eabb-4c37-8fc3-d56e9f50bf6e 64ca3042227c48ea84d77461b14b8acb 
7ea70c4f74c24199b14df0a570b6f93e - default default] Running cmd (subprocess): 
rm -rf /var/lib/nova/instances/371e669b-0f15-49f2-9a84-bd1e89f34294_resize 
execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2018-05-04 12:55:11.459 7 DEBUG oslo_concurrency.processutils 
[req-510561e2-eabb-4c37-8fc3-d56e9f50bf6e 64ca3042227c48ea84d77461b14b8acb 
7ea70c4f74c24199b14df0a570b6f93e - default default] CMD "rm -rf 
/var/lib/nova/instances/371e669b-0f15-49f2-9a84-bd1e89f34294_resize" returned: 
0 in 0.033s execute 
/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2018-05-04 12:55:11.462 7 DEBUG oslo_concurrency.lockutils 
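The leftover state in the listings above could be spotted with a rough
check like the following. This is only a sketch (the directory layout is
assumed from the listing above, and the `find_leftovers` helper is not
part of Nova): it flags instance directories that contain only a
disk.info file, which suggests residue from a resize rather than a
running instance.

```shell
#!/bin/sh
# Flag instance directories that contain only disk.info. On a source
# host after a confirmed resize, such a directory is likely leftover
# state rather than a live instance (which would also have a disk,
# console.log, etc.).
find_leftovers() {
  base="$1"
  for dir in "$base"/*/; do
    [ -d "$dir" ] || continue
    contents=$(ls "$dir")
    if [ "$contents" = "disk.info" ]; then
      # Print the instance UUID (the directory name).
      basename "$dir"
    fi
  done
}
```

Run against the instances path from the listing, e.g.
`find_leftovers /var/lib/docker/volumes/nova_compute/_data/instances`.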

[Yahoo-eng-team] [Bug 1769128] [NEW] Horizon does not report properly quota exceeded errors

2018-05-04 Thread Saverio Proto
Public bug reported:

This bug is in OpenStack Pike; I don't have the possibility to test on
master at the moment.

When a user goes over quota while creating Cinder snapshots, we observe
this in the Cinder backend:

2018-05-04 13:43:13,172.172 24030 WARNING cinder.quota_utils [req-
562b1787-837b-49fb-9882-f3053cbd395e 6c3d8d645b31444f998d054ce5a3d4cd
ef11372569d145599dd9e62dbed6889a - default default] Quota exceeded for
ef11372569d145599dd9e62dbed6889a, tried to create 160G snapshot (160G of
200G already consumed)

However, the user in Horizon only gets this error message:

Error: unable to create snapshot.

This is a problem because the user does not understand that it is a
quota issue, and complains to the OpenStack admins believing it is a
backend error.
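To illustrate the behaviour being requested, here is a small sketch.
The `OverLimit` class and `create_snapshot` wrapper are hypothetical
stand-ins, not Horizon's or python-cinderclient's actual code: the point
is simply that the backend's detailed quota message can be passed
through to the user instead of being replaced by a generic error string.

```python
class OverLimit(Exception):
    """Stand-in for the over-quota error a volume API client may raise."""


def create_snapshot(request_fn):
    """Run a snapshot request; surface quota details on failure."""
    try:
        return request_fn()
    except OverLimit as exc:
        # Pass the backend's explanation through to the user instead of
        # swallowing it behind "Error: unable to create snapshot."
        return "Error: unable to create snapshot: %s" % exc
```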

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1769128

Title:
  Horizon does not report properly quota exceeded errors

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  This bug is in OpenStack Pike; I don't have the possibility to test on
  master at the moment.

  When a user goes over quota while creating Cinder snapshots, we
  observe this in the Cinder backend:

  2018-05-04 13:43:13,172.172 24030 WARNING cinder.quota_utils [req-
  562b1787-837b-49fb-9882-f3053cbd395e 6c3d8d645b31444f998d054ce5a3d4cd
  ef11372569d145599dd9e62dbed6889a - default default] Quota exceeded for
  ef11372569d145599dd9e62dbed6889a, tried to create 160G snapshot (160G
  of 200G already consumed)

  However, the user in Horizon only gets this error message:

  Error: unable to create snapshot.

  This is a problem because the user does not understand that it is a
  quota issue, and complains to the OpenStack admins believing it is a
  backend error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1769128/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1769089] Re: Loading of Hot templates throws error sometimes

2018-05-04 Thread Akihiro Motoki
Which release are you using?

It is not clear how Horizon itself is related to this, so we will remove
Horizon from the affected projects from a bug-maintenance perspective.
Once it turns out Horizon itself is the cause, or a possible cause, feel
free to re-add it.

** No longer affects: horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1769089

Title:
  Loading of Hot templates throws error sometimes

Status in heat-dashboard:
  New

Bug description:
  From Horizon, sometimes when we try to upload HOT templates from the
  Stacks panel, the following error appears:

  "'NoneType' object has no attribute 'encode'"

  But when we restart the Apache server, everything works normally.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat-dashboard/+bug/1769089/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1769089] Re: Loading of Hot templates throws error sometimes

2018-05-04 Thread Sudheer Kalla
** Also affects: heat-dashboard
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1769089

Title:
  Loading of Hot templates throws error sometimes

Status in heat-dashboard:
  New
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  From Horizon, sometimes when we try to upload HOT templates from the
  Stacks panel, the following error appears:

  "'NoneType' object has no attribute 'encode'"

  But when we restart the Apache server, everything works normally.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat-dashboard/+bug/1769089/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1769089] [NEW] Loading of Hot templates throws error sometimes

2018-05-04 Thread Sudheer Kalla
Public bug reported:

From Horizon, sometimes when we try to upload HOT templates from the
Stacks panel, the following error appears:

"'NoneType' object has no attribute 'encode'"

But when we restart the Apache server, everything works normally.
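For context, the reported message is exactly what Python raises when
`.encode()` is called on a value that is unexpectedly None. A minimal
reproduction (the `encode_template` helper is illustrative only, not the
actual Horizon/heat-dashboard code path):

```python
def encode_template(template_data):
    """Encode template text to bytes, as a template upload path might."""
    # If template_data is None (e.g. stale state that an Apache restart
    # happens to clear), this raises:
    #   AttributeError: 'NoneType' object has no attribute 'encode'
    return template_data.encode("utf-8")
```

This suggests the real bug is upstream of the encode call: some code
path is handing a None template body to the encoder.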

** Affects: heat-dashboard
 Importance: Undecided
 Status: New

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1769089

Title:
  Loading of Hot templates throws error sometimes

Status in heat-dashboard:
  New
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  From Horizon, sometimes when we try to upload HOT templates from the
  Stacks panel, the following error appears:

  "'NoneType' object has no attribute 'encode'"

  But when we restart the Apache server, everything works normally.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat-dashboard/+bug/1769089/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp