[Yahoo-eng-team] [Bug 1667033] Re: nova instance console log empty

2017-02-22 Thread ChristianEhrhardt
** Also affects: qemu (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: qemu (Ubuntu)
   Status: New => Triaged

** Changed in: qemu (Ubuntu)
   Importance: Undecided => High

** Changed in: qemu (Ubuntu)
 Assignee: (unassigned) => ChristianEhrhardt (paelzer)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1667033

Title:
  nova instance console log empty

Status in OpenStack Compute (nova):
  New
Status in libvirt package in Ubuntu:
  New
Status in nova package in Ubuntu:
  Triaged
Status in qemu package in Ubuntu:
  Triaged

Bug description:
  Nova instance console log is empty on Xenial-Ocata with libvirt
  2.5.0-3ubuntu2~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1667033/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667231] [NEW] ovs-agent error while processing VIF ports on compute

2017-02-22 Thread peng fei wang
Public bug reported:

I am using the Mitaka (M) code on CentOS 7.2,

and recently found the following error on the compute node in
/var/log/neutron/openvswitch-agent.log.

It seems that every several weeks such an error occurs.


.ovs_neutron_agent [req-ae1d74ba-967d-48f2-8ec1-2dcfe7f20991 - - - - -] Error while processing VIF ports
.ovs_neutron_agent Traceback (most recent call last):
.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 2037, in rpc_loop
.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1651, in process_network_ports
.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py", line 292, in setup_port_filters
.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py", line 147, in decorated_function
.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py", line 172, in prepare_devices_filter
.ovs_neutron_agent   File "/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
.ovs_neutron_agent self.gen.next()
.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/firewall.py", line 128, in defer_apply
.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_firewall.py", line 833, in filter_defer_apply_off
.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_firewall.py", line 818, in _remove_conntrack_entries_from_sg_updates
.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_firewall.py", line 775, in _clean_deleted_sg_rule_conntrack_entries
.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_conntrack.py", line 78, in delete_conntrack_state_by_rule
.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_conntrack.py", line 72, in _delete_conntrack_state
.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 116, in execute
.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 102, in execute_rootwrap_daemon
.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/oslo_rootwrap/client.py", line 128, in execute
.ovs_neutron_agent res = proxy.run_one_command(cmd, stdin)
.ovs_neutron_agent   File "", line 2, in run_one_command
.ovs_neutron_agent   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
.ovs_neutron_agent raise convert_to_error(kind, result)
.ovs_neutron_agent RemoteError:
.ovs_neutron_agent
---
.ovs_neutron_agent Unserializable message: ('#ERROR', FilterMatchNotExecutable())
.ovs_neutron_agent
---


[root@compute1 ~]# rpm -qa|grep openvswitch
openvswitch-2.5.0-2.el7.x86_64
openstack-neutron-openvswitch-8.1.2-el7.centos.noarch
python-openvswitch-2.5.0-2.el7.noarch
[root@compute1 ~]# 

-

Has anyone run into this?
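
For what it's worth, the "Unserializable message" wording comes from Python's
multiprocessing manager, on which the oslo.rootwrap daemon client is built:
when the daemon-side call raises an exception whose instance cannot be
pickled back to the caller, the proxy raises RemoteError instead of the real
error. Below is a minimal, self-contained sketch of that mechanism with
stand-in classes (this is not oslo.rootwrap code; Python 3's message wording
differs slightly from the 2.7 log above):

# Sketch: an exception carrying an unpicklable attribute cannot be shipped
# back to the caller, so the proxy surfaces RemoteError instead.
import threading
from multiprocessing.managers import BaseManager

class FilterMatchNotExecutable(Exception):
    def __init__(self):
        super().__init__()
        # A lock is unpicklable, so the manager server cannot serialize the
        # ('#ERROR', exc) reply and falls back to an "unserializable" reply.
        self._lock = threading.Lock()

class Runner:
    def run_one_command(self, cmd, stdin=None):
        raise FilterMatchNotExecutable()

class Manager(BaseManager):
    pass

Manager.register('runner', callable=Runner)

if __name__ == '__main__':
    mgr = Manager(address=('127.0.0.1', 0), authkey=b'demo')
    mgr.start()
    try:
        mgr.runner().run_one_command(['conntrack', '-D'])
    except Exception as exc:  # multiprocessing.managers.RemoteError
        print(type(exc).__name__, exc)
    finally:
        mgr.shutdown()

So the RemoteError here is only the messenger; the underlying
FilterMatchNotExecutable typically means the matched rootwrap command (here
likely conntrack) was not found or not executable on the host.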

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667231

Title:
  ovs-agent error while processing VIF ports on compute

Status in neutron:
  New

Bug description:
  I am using the Mitaka (M) code on CentOS 7.2,

  and recently found the following error on the compute node in
  /var/log/neutron/openvswitch-agent.log.

  It seems that every several weeks such an error occurs.


  .ovs_neutron_agent [req-ae1d74ba-967d-48f2-8ec1-2dcfe7f20991 - - - - -] Error while processing VIF ports
  .ovs_neutron_agent Traceback (most recent call last):
  .ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 2037, in rpc_loop
  .ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1651, in process_network_ports
  .ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py", line 292, in setup_port_filters
  .ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py", line 147, in decorated_function
  .ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py", line 172, in prepare_devices_filter
  .ovs_neutron_agent   File "/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
  .ovs_neutron_agent self.gen.nex

[Yahoo-eng-team] [Bug 1667203] [NEW] neutron performance degradation

2017-02-22 Thread Na Zhu
Public bug reported:

I tested the runtime of neutron's get_port() in Newton and Juno; Newton
needs more than twice as long as Juno, about 55ms versus about 20ms. I
logged the time before
https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L1300
and after this call.

I suspect every database operation takes more time in Newton than in Juno.
The SQLAlchemy core team already submitted a patch to optimize the time for
the eager joined loader, which neutron uses heavily.
Even with this patch, get_port() takes 35ms, still more than Juno.

I think neutron needs to consider performance, not focus mainly on new
features. I know Newton adds more relationships to the port model, which is
one reason for the poor runtime, but that may not be the only reason.

https://bitbucket.org/zzzeek/sqlalchemy/issues/3915/performance-degradation-on-version-10xx


The SQLAlchemy core team said we can use the baked query extension to fix
this poor runtime.
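
For reference, a minimal sketch of the SQLAlchemy baked query extension the
maintainers point to (the Port model below is a stand-in for illustration,
not neutron's actual model):

# Requires SQLAlchemy >= 1.0; the bakery caches the compiled form of every
# query built through it, cutting the Python-side overhead that dominates
# short queries like get_port().
from sqlalchemy import Column, String, bindparam
from sqlalchemy.ext import baked
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Port(Base):
    __tablename__ = 'ports'
    id = Column(String(36), primary_key=True)

bakery = baked.bakery()

def get_port(session, port_id):
    # The lambdas are keyed by their code objects: the query is constructed
    # and compiled once, then reused on every later call.
    q = bakery(lambda s: s.query(Port))
    q += lambda q: q.filter(Port.id == bindparam('port_id'))
    return q(session).params(port_id=port_id).one()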

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667203

Title:
  neutron performance degradation

Status in neutron:
  New

Bug description:
  I tested the runtime of neutron's get_port() in Newton and Juno; Newton
  needs more than twice as long as Juno, about 55ms versus about 20ms. I
  logged the time before
  https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L1300
  and after this call.

  I suspect every database operation takes more time in Newton than in
  Juno.
  The SQLAlchemy core team already submitted a patch to optimize the time
  for the eager joined loader, which neutron uses heavily.
  Even with this patch, get_port() takes 35ms, still more than Juno.

  I think neutron needs to consider performance, not focus mainly on new
  features. I know Newton adds more relationships to the port model, which
  is one reason for the poor runtime, but that may not be the only reason.

  https://bitbucket.org/zzzeek/sqlalchemy/issues/3915/performance-degradation-on-version-10xx


  The SQLAlchemy core team said we can use the baked query extension to
  fix this poor runtime.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1667203/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667194] [NEW] [api] The param "X-Subject-Token" is not needed in API "GET /v3/auth/projects"

2017-02-22 Thread ZhangHongtao
Public bug reported:

In the API guide for "GET /v3/auth/projects", the request param
"X-Subject-Token" is listed as required, with the description "The
authentication token. An authentication response returns the token ID in
this header rather than in the response body.".
But this API call returns the list of projects that are available to be
scoped to, based on the X-Auth-Token provided in the request, so
"X-Subject-Token" is needless. Also, the description of the request param
"X-Auth-Token" says "A valid authentication token for an administrative
user.", which is wrong: this API does not need admin permission.
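
For illustration, a hedged sketch of the call with only X-Auth-Token (the
endpoint URL and token value are placeholders, not taken from the guide):

import requests

KEYSTONE = 'http://controller:5000'
TOKEN = 'gAAAA...'  # any valid token for a regular (non-admin) user

# GET /v3/auth/projects with X-Auth-Token only; no X-Subject-Token is sent.
resp = requests.get(
    KEYSTONE + '/v3/auth/projects',
    headers={'X-Auth-Token': TOKEN},
)
resp.raise_for_status()
print([p['name'] for p in resp.json()['projects']])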

** Affects: keystone
 Importance: Undecided
 Status: New


-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1667194

Title:
  [api] The param "X-Subject-Token" is not needed in API "GET
  /v3/auth/projects"

Status in OpenStack Identity (keystone):
  New

Bug description:
  In the API guide for "GET /v3/auth/projects", the request param
  "X-Subject-Token" is listed as required, with the description "The
  authentication token. An authentication response returns the token ID in
  this header rather than in the response body.".
  But this API call returns the list of projects that are available to be
  scoped to, based on the X-Auth-Token provided in the request, so
  "X-Subject-Token" is needless. Also, the description of the request
  param "X-Auth-Token" says "A valid authentication token for an
  administrative user.", which is wrong: this API does not need admin
  permission.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1667194/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1666936] Re: Placement API client doesn't honor insecure nor cafile parameters

2017-02-22 Thread OpenStack Infra
Reviewed: https://review.openstack.org/436475
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=a377fc5988e9a6057bd14617879a7cbcc3c8bb81
Submitter: Jenkins
Branch: master

commit a377fc5988e9a6057bd14617879a7cbcc3c8bb81
Author: Gyorgy Szombathelyi 
Date:   Tue Feb 21 15:04:13 2017 +0100

Use the keystone session loader in the placement reporting

Using load_session_from_conf_options has the advantage that it honors
session settings like cafile and insecure, to make use of non-system TLS
certificates (or disable certificate checks at all). Also client
certificates and timeout values can be specified, too.

Closes-Bug: #1666936
Change-Id: I510a2683958fc8c3aaca9293b4280f325b9551fc
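
For context, a hedged sketch of what this approach amounts to (illustrative,
not the actual nova patch; it assumes the session options have been
registered under a [placement] conf group):

from keystoneauth1 import loading as ks_loading

_SESSION = None

def get_placement_session(conf):
    # load_session_from_conf_options reads cafile, certfile, keyfile,
    # insecure and timeout from the given group, so the resulting session
    # honors the same TLS settings as the other service clients.
    global _SESSION
    if _SESSION is None:
        _SESSION = ks_loading.load_session_from_conf_options(
            conf, 'placement')
    return _SESSION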


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1666936

Title:
  Placement API client doesn't honor insecure nor cafile parameters

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  In Progress

Bug description:
  The connection to the Placement API from Nova doesn't allow specifying
  an insecure or a cafile parameter. This is a problem if those are used
  in the Keystone or Neutron clients, for example; then it is expected
  that the Placement client uses them, too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1666936/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667154] [NEW] User id on action log shouldn't be a link

2017-02-22 Thread Ying Zuo
Public bug reported:

The user id field in the action log table on project/instance details
panel is currently a link to the user details panel. If a non-admin user
clicks on it, he will be logged out since he doesn't have the permission
to access the panel.

** Affects: horizon
 Importance: Undecided
 Assignee: Ying Zuo (yingzuo)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Ying Zuo (yingzuo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1667154

Title:
  User id on action log shouldn't be a link

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The user id field in the action log table on project/instance details
  panel is currently a link to the user details panel. If a non-admin
  user clicks on it, he will be logged out since he doesn't have the
  permission to access the panel.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1667154/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667138] [NEW] Minimum bandwidth can be higher than maximum bandwidth limit in same QoS policy

2017-02-22 Thread Slawek Kaplonski
Public bug reported:

Currently at least the SR-IOV driver supports both QoS rule types: bandwidth
limit and minimum bandwidth. A user can set both rules in one policy, with a
higher minimum bandwidth (best effort) than the maximum available bandwidth
for the port.
IMO such behaviour is undefined on the backend and should be forbidden
somehow at the API level.
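
A hedged sketch of the kind of API-level check this suggests (the helper and
the rule-dict layout are illustrative, not neutron code):

def validate_qos_policy_rules(rules):
    # Reject a policy whose minimum bandwidth guarantee exceeds its own
    # bandwidth limit, since the backend behaviour would be undefined.
    max_kbps = None
    min_kbps = None
    for rule in rules:
        if rule['type'] == 'bandwidth_limit':
            max_kbps = rule['max_kbps']
        elif rule['type'] == 'minimum_bandwidth':
            min_kbps = rule['min_kbps']
    if None not in (max_kbps, min_kbps) and min_kbps > max_kbps:
        raise ValueError(
            'minimum bandwidth (%s kbps) exceeds the bandwidth limit '
            '(%s kbps) in the same policy' % (min_kbps, max_kbps))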

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667138

Title:
  Minimum bandwidth can be higher than maximum bandwidth limit in same
  QoS policy

Status in neutron:
  New

Bug description:
  Currently at least the SR-IOV driver supports both QoS rule types:
  bandwidth limit and minimum bandwidth. A user can set both rules in one
  policy, with a higher minimum bandwidth (best effort) than the maximum
  available bandwidth for the port.
  IMO such behaviour is undefined on the backend and should be forbidden
  somehow at the API level.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1667138/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1666936] Re: Placement API client doesn't honor insecure nor cafile parameters

2017-02-22 Thread Matt Riedemann
** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
   Status: New => In Progress

** Changed in: nova/ocata
 Assignee: (unassigned) => Sean Dague (sdague)

** Changed in: nova/ocata
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1666936

Title:
  Placement API client doesn't honor insecure nor cafile parameters

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) ocata series:
  In Progress

Bug description:
  The connection to the Placement API from Nova doesn't allow specifying
  an insecure or a cafile parameter. This is a problem if those are used
  in the Keystone or Neutron clients, for example; then it is expected
  that the Placement client uses them, too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1666936/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1635306] Re: After newton deployment _member_ role is missing in keystone

2017-02-22 Thread mellotron
** Also affects: puppet-keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1635306

Title:
  After newton deployment _member_ role is missing in keystone

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) newton series:
  Fix Committed
Status in puppet-keystone:
  New
Status in tripleo:
  Invalid

Bug description:
  I did a full deployment using RDO Newton, and at the end of the
  deployment I see the _member_ role is missing.

  [stack@topstrio1101 ~]$ openstack role list
  +----------------------------------+-----------------+
  | ID                               | Name            |
  +----------------------------------+-----------------+
  | 023e0f4fc56a47f7bada5fd512bab014 | swiftoperator   |
  | 48e4519e09b4469bbbf5c533830d3ad8 | heat_stack_user |
  | 52be634093e14ea7a1acdf3f5ec12066 | admin           |
  | a1f8e6636dc842d8a896a3e903298997 | ResellerAdmin   |
  +----------------------------------+-----------------+

  In Mitaka the _member_ role was created correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1635306/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518392] Re: [RFE] Enable arp_responder without l2pop

2017-02-22 Thread venkata anil
** Changed in: neutron
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1518392

Title:
  [RFE] Enable arp_responder without l2pop

Status in neutron:
  New

Bug description:
  Remove the dependency between arp_responder and l2_pop:
  arp_responder should be usable without enabling l2_pop, by setting up
  the arp_responder on the OVS integration bridge.

  To enable arp_responder, we only need the port's MAC and IP address and
  no tunnel IP (so no need for l2pop); see the flow sketch after this
  description.
  Currently agents use l2pop notifications to create ARP entries. With the
  new approach, agents can use port events (create, update and delete) to
  create ARP entries, without l2pop notifications.

  The advantages of this approach, for both the linuxbridge and OVS agents:
  1) Users can enable arp_responder without l2pop.
  2) Support ARP for distributed ports (DVR and HA).
     Currently, ARP is not added for these ports.
     This is a fix for https://bugs.launchpad.net/neutron/+bug/1661717

  This allows creating ARP entries on the OVS integration bridge.

  Advantages for the OVS agent, if ARP entries are set up on the
  integration bridge (br-int) rather than on the tunneling bridge (br-tun):
  1) It enables arp_responder for all network types (VLAN, VXLAN, etc.);
     the l2pop-based arp_responder is supported only for overlay networks.
  2) ARP can be resolved within br-int.
  3) ARP packets for local ports (ports connected to the same br-int) will
     be resolved in br-int without broadcasting to the actual ports
     connected to br-int.


  Already submitted https://review.openstack.org/#/c/248177/ for this.
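
  As a hedged illustration, a flow like the following could answer ARP on
  br-int from just the port's MAC and IP (the table number and this helper
  are assumptions for illustration; the NXM match/action syntax is standard
  OVS):

  # Build an ARP responder flow for one port from only its MAC and IP
  # (no tunnel endpoint needed, hence no l2pop dependency).
  import netaddr

  def arp_responder_flow(mac, ip, table=9):
      mac_hex = hex(int(netaddr.EUI(mac)))
      ip_hex = hex(int(netaddr.IPAddress(ip)))
      return (
          'table=%(table)d,priority=100,arp,arp_op=1,arp_tpa=%(ip)s,'
          'actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],'
          'mod_dl_src:%(mac)s,'
          'load:0x2->NXM_OF_ARP_OP[],'
          'move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],'
          'move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],'
          'load:%(mac_hex)s->NXM_NX_ARP_SHA[],'
          'load:%(ip_hex)s->NXM_OF_ARP_SPA[],'
          'in_port'
      ) % {'table': table, 'ip': ip, 'mac': mac,
           'mac_hex': mac_hex, 'ip_hex': ip_hex}

  # e.g. arp_responder_flow('fa:16:3e:00:00:01', '10.0.0.5') produces a
  # string that could be handed to ovs-ofctl add-flow br-int "...".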

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1518392/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667070] [NEW] Mapping a federated user to a local user does not return concrete role assignments

2017-02-22 Thread Ron De Rose
Public bug reported:

When mapping a federated user to a local user, only federated projects
and roles are returned; not the local user's concrete role assignments
and projects.

Will update this with a mapping example and steps to reproduce.
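
For orientation, a hedged sketch of the kind of mapping involved (the
attribute names follow keystone's mapping rule schema; the values are
illustrative):

# With "type": "local", keystone resolves the mapped user to an existing
# concrete local identity, whose direct role assignments one would expect
# the resulting token to reflect.
mapping_rules = [{
    'local': [{
        'user': {
            'name': '{0}',
            'type': 'local',
            'domain': {'name': 'Default'},
        },
    }],
    'remote': [{
        'type': 'REMOTE_USER',
    }],
}]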

** Affects: keystone
 Importance: Undecided
 Assignee: Ron De Rose (ronald-de-rose)
 Status: New


** Tags: federation

** Changed in: keystone
 Assignee: (unassigned) => Ron De Rose (ronald-de-rose)

** Tags added: federation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1667070

Title:
  Mapping a federated user to a local user does not return concrete role
  assignments

Status in OpenStack Identity (keystone):
  New

Bug description:
  When mapping a federated user to a local user, only federated projects
  and roles are returned; not the local user's concrete role assignments
  and projects.

  Will update this with a mapping example and steps to reproduce.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1667070/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1626702] Re: Instance deletes should be asynchronous with host availability

2017-02-22 Thread Craig Bookwalter
Attaching nova-compute logs for the above

** Attachment added: "nova-log-liberty.log"
   
https://bugs.launchpad.net/nova/+bug/1626702/+attachment/4824742/+files/nova-log-liberty.log

** Changed in: nova
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1626702

Title:
  Instance deletes should be asynchronous with host availability

Status in OpenStack Compute (nova):
  New

Bug description:
  Description: When a host goes down and instances on that host are
  deleted while the host is down, this request should be tracked in the
  database so that when the host comes back up it can scrub the delete
  request against the status of the active instance and delete it if
  necessary. The overall goal is for the delete to be processed so that
  any attached volumes return to an available state instead of being
  left in an in-use state (attached to a non-existent instance).

  This is being experienced while using Cloud Foundry. When the host
  goes down, Bosh attempts to re-create the instance along with
  associating any previous known volumes.

  This has been tested on the following tag:
  10.1.18

  If this were not an issue, I would expect the delete request to go
  through while the host was down, and the system itself would clean up
  the mess once the host comes back online, freeing up the volume for
  immediate use.

  (This is going to be tested in Liberty as well and further notes will
  be attached to the bug report)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1626702/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667033] Re: nova instance console log empty

2017-02-22 Thread James Page
Note that this is with OpenStack Ocata, with the libvirt from zesty,
which is version 2.5.0.

This should be using the virtlogd capabilities of libvirt/qemu, but for
some reason that's not actually working.

** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: nova (Ubuntu)
   Status: New => Triaged

** Changed in: nova (Ubuntu)
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1667033

Title:
  nova instance console log empty

Status in OpenStack Compute (nova):
  New
Status in libvirt package in Ubuntu:
  New
Status in nova package in Ubuntu:
  Triaged

Bug description:
  Nova instance console log is empty on Xenial-Ocata with libvirt
  2.5.0-3ubuntu2~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1667033/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1665366] Re: [RFE] Add --key-name option to 'nova rebuild'

2017-02-22 Thread Sylvain Bauza
Nova has a process specifying that RFEs need to be documented using
Launchpad's blueprints. We only use Launchpad bugs to track *bugs* that
exist (or existed), and we don't generally use the Wishlist status to
track the features we want to work on (we still have wishlist bugs on
our project page, but that's just for legacy reasons).

Please look at
http://docs.openstack.org/developer/nova/process.html#how-do-i-get-my-code-merged
for understanding how to help Nova.

Triaging this bug report as Invalid, then; please open a blueprint and
create a nova-specs change, given it would need an API modification.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1665366

Title:
  [RFE] Add --key-name option to 'nova rebuild'

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Currently there is no way to change the key-name associated with an
  instance. This has some justification, as the key may be downloaded
  only at build time and later changes will be ignored by the instance.

  But this is not the case for the rebuild command. If a tenant wants to
  rebuild an instance, they may want to change the key used to access it.

  The main reason for the 'rebuild' command instead of 'delete/create'
  often lies in preserving network settings - fixed IPs, MAC addresses,
  associated floating IPs. Normally the user wants to keep the same ssh
  key as at creation time, but occasionally they may want to replace it.

  Right now there is no such option.

  TL;DR: Please add a --key-name option to the nova rebuild command (and
  API).

  Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1665366/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667033] Re: nova instance console log empty

2017-02-22 Thread Ryan Beisner
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1667033

Title:
  nova instance console log empty

Status in OpenStack Compute (nova):
  New
Status in libvirt package in Ubuntu:
  New
Status in nova package in Ubuntu:
  Triaged

Bug description:
  Nova instance console log is empty on Xenial-Ocata with libvirt
  2.5.0-3ubuntu2~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1667033/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667055] [NEW] Fullstack: neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_ha_router fails

2017-02-22 Thread Jakub Libosvar
Public bug reported:

20 hits in 7 days

logstash query:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%222%20!%3D%201%3A%20HA%20router%20must%20be%20scheduled%20to%20both%20nodes%5C%22%20AND%20tags%3Aconsole%20AND%20build_branch%3Amaster

Test failure example:
http://logs.openstack.org/41/437041/1/check/gate-neutron-dsvm-fullstack-ubuntu-xenial/2fe600c/console.html#_2017-02-22_17_46_50_795749


2017-02-22 17:46:50.893183 | 2017-02-22 17:46:50.892 | Captured traceback:
2017-02-22 17:46:50.896340 | 2017-02-22 17:46:50.895 | ~~~
2017-02-22 17:46:50.899564 | 2017-02-22 17:46:50.898 | Traceback (most recent call last):
2017-02-22 17:46:50.902799 | 2017-02-22 17:46:50.902 |   File "neutron/tests/base.py", line 116, in func
2017-02-22 17:46:50.906044 | 2017-02-22 17:46:50.905 | return f(self, *args, **kwargs)
2017-02-22 17:46:50.909335 | 2017-02-22 17:46:50.908 |   File "neutron/tests/fullstack/test_l3_agent.py", line 202, in test_ha_router
2017-02-22 17:46:50.912626 | 2017-02-22 17:46:50.911 | 'HA router must be scheduled to both nodes')
2017-02-22 17:46:50.915910 | 2017-02-22 17:46:50.915 |   File "/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual
2017-02-22 17:46:50.919219 | 2017-02-22 17:46:50.918 | self.assertThat(observed, matcher, message)
2017-02-22 17:46:50.922394 | 2017-02-22 17:46:50.921 |   File "/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat
2017-02-22 17:46:50.925527 | 2017-02-22 17:46:50.924 | raise mismatch_error
2017-02-22 17:46:50.928784 | 2017-02-22 17:46:50.927 | testtools.matchers._impl.MismatchError: 2 != 1: HA router must be scheduled to both nodes

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fullstack l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667055

Title:
  Fullstack:
  neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_ha_router
  fails

Status in neutron:
  New

Bug description:
  20 hits in 7 days

  logstash query:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%222%20!%3D%201%3A%20HA%20router%20must%20be%20scheduled%20to%20both%20nodes%5C%22%20AND%20tags%3Aconsole%20AND%20build_branch%3Amaster

  Test failure example:
  http://logs.openstack.org/41/437041/1/check/gate-neutron-dsvm-fullstack-ubuntu-xenial/2fe600c/console.html#_2017-02-22_17_46_50_795749

  
  2017-02-22 17:46:50.893183 | 2017-02-22 17:46:50.892 | Captured traceback:
  2017-02-22 17:46:50.896340 | 2017-02-22 17:46:50.895 | ~~~
  2017-02-22 17:46:50.899564 | 2017-02-22 17:46:50.898 | Traceback (most recent call last):
  2017-02-22 17:46:50.902799 | 2017-02-22 17:46:50.902 |   File "neutron/tests/base.py", line 116, in func
  2017-02-22 17:46:50.906044 | 2017-02-22 17:46:50.905 | return f(self, *args, **kwargs)
  2017-02-22 17:46:50.909335 | 2017-02-22 17:46:50.908 |   File "neutron/tests/fullstack/test_l3_agent.py", line 202, in test_ha_router
  2017-02-22 17:46:50.912626 | 2017-02-22 17:46:50.911 | 'HA router must be scheduled to both nodes')
  2017-02-22 17:46:50.915910 | 2017-02-22 17:46:50.915 |   File "/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual
  2017-02-22 17:46:50.919219 | 2017-02-22 17:46:50.918 | self.assertThat(observed, matcher, message)
  2017-02-22 17:46:50.922394 | 2017-02-22 17:46:50.921 |   File "/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat
  2017-02-22 17:46:50.925527 | 2017-02-22 17:46:50.924 | raise mismatch_error
  2017-02-22 17:46:50.928784 | 2017-02-22 17:46:50.927 | testtools.matchers._impl.MismatchError: 2 != 1: HA router must be scheduled to both nodes

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1667055/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667040] [NEW] InstanceNotFound exception during check_instance_exists on instance evacuation

2017-02-22 Thread Steven Webster
Public bug reported:

Description
===
During instance evacuation to a new destination node, the InstanceNotFound 
exception is seen when check_instance_exists is called during the rebuild.

check_instance_exists is making sure the instance does _not_ exist, with
the intent that an exception will be raised if the instance does already
exist on a destination node.

The libvirt driver's instance_exists currently catches the InternalError
exception, but not the InstanceNotFound exception.

I believe this only affects the libvirt instance_exists implementation.
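
A hedged sketch of the missing guard, with stand-in exception classes (this
is not the actual nova code):

class InternalError(Exception):
    pass

class InstanceNotFound(Exception):
    pass

def instance_exists(get_guest, instance):
    """Return True only if the hypervisor already knows this instance."""
    try:
        get_guest(instance)  # raises if the libvirt domain is absent
        return True
    except (InternalError, InstanceNotFound):
        # Both errors mean "no such domain" here; letting InstanceNotFound
        # escape is what breaks the evacuation rebuild.
        return False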

Steps to reproduce
==
On a multi node libvirt/kvm system:

1. launch an instance on compute A
2. kill compute A nova-compute process
3. evacuate the instance on compute A

Environment
===
1. Openstack release: Ocata + devstack

2. Which hypervisor did you use?
   Libvirt + KVM
   What's the version of that?

3. Which storage type did you use?
   LVM

4. Which networking type did you use?
   Neutron with OpenVSwitch

** Affects: nova
 Importance: Undecided
 Assignee: Stephen Finucane (stephenfinucane)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1667040

Title:
  InstanceNotFound exception during check_instance_exists on instance
  evacuation

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  During instance evacuation to a new destination node, the InstanceNotFound 
exception is seen when check_instance_exists is called during the rebuild.

  check_instance_exists is making sure the instance does _not_ exist,
  with the intent that an exception will be raised if the instance does
  already exist on a destination node.

  The libvirt driver's instance_exists currently catches the
  InternalError exception, but not the InstanceNotFound exception.

  I believe this only affects the libvirt instance_exists
  implementation.

  Steps to reproduce
  ==
  On a multi node libvirt/kvm system:

  1. launch an instance on compute A
  2. kill compute A nova-compute process
  3. evacuate the instance on compute A

  Environment
  ===
  1. Openstack release: Ocata + devstack

  2. Which hypervisor did you use?
     Libvirt + KVM
     What's the version of that?

  3. Which storage type did you use?
     LVM

  4. Which networking type did you use?
     Neutron with OpenVSwitch

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1667040/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1666864] Re: function does not return correct create router timestamp

2017-02-22 Thread Victor Morales
The other thing I noticed is that the revision_number differs between the
initial version (3) and the latest version (8); isn't another process doing
something in parallel that modifies this object?

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1666864

Title:
  function does not return correct create router timestamp

Status in neutron:
  Invalid

Bug description:
  Version: OSP-10 Newton
  Test failed : 
neutron.tests.tempest.api.test_timestamp.TestTimeStampWithL3.test_show_router_attribute_with_timestamp

  At creation the timestamp is correct: u'created_at':
  u'2017-02-22T10:47:24Z', u'updated_at': u'2017-02-22T10:47:24Z'

  when "show" function display the timestamp it display it with ~3 sec 
difference:
  u'created_at': u'2017-02-22T10:47:24Z', u'updated_at': u'2017-02-22T10:47:26Z'

  show function display incorrect timestamp

  218 def test_show_router_attribute_with_timestamp(self):
  219 router = self.create_router(router_name='test')
  220 import ipdb;ipdb.set_trace()
  --> 221 body = self.client.show_router(router['id'])
  222 show_router = body['router']
  223 # verify the timestamp from creation and showed is same
  224 import ipdb;ipdb.set_trace()
  225 self.assertEqual(router['created_at'],
  226  show_router['created_at'])

  ipdb> router
  {u'status': u'ACTIVE', u'external_gateway_info': None, 
u'availability_zone_hints': [], u'availability_zones': [], u'description': u'', 
u'admin_state_up': False, u'tenant_id': u'8b9cd1ebd13f4172a0c63789ee9c0de2', 
u'created_at': u'2017-02-22T10:47:24Z', u'updated_at': u'2017-02-22T10:47:24Z', 
u'flavor_id': None, u'revision_number': 3, u'routes': [], u'project_id': 
u'8b9cd1ebd13f4172a0c63789ee9c0de2', u'id': 
u'545e74b0-2f3d-43b8-8678-93bbb3f1f6f3', u'name': u'test'}
  ipdb> n
  2017-02-22 10:47:55.084 6919 INFO tempest.lib.common.rest_client 
[req-eef7ded4-bb01-4401-96cd-325c01b2230b ] Request 
(TestTimeStampWithL3:test_show_router_attribute_with_timestamp): 200 GET 
http://10.0.0.104:9696/v2.0/routers/545e74b0-2f3d-43b8-8678-93bbb3f1f6f3 0.224s
  2017-02-22 10:47:55.086 6919 DEBUG tempest.lib.common.rest_client 
[req-eef7ded4-bb01-4401-96cd-325c01b2230b ] Request - Headers: {'Content-Type': 
'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'status': '200', u'content-length': '462', 
'content-location': 
'http://10.0.0.104:9696/v2.0/routers/545e74b0-2f3d-43b8-8678-93bbb3f1f6f3', 
u'date': 'Wed, 22 Feb 2017 10:47:56 GMT', u'content-type': 'application/json', 
u'connection': 'close', u'x-openstack-request-id': 
'req-eef7ded4-bb01-4401-96cd-325c01b2230b'}
  Body: {"router": {"status": "ACTIVE", "external_gateway_info": null, 
"availability_zone_hints": [], "availability_zones": ["nova"], "description": 
"", "admin_state_up": false, "tenant_id": "8b9cd1ebd13f4172a0c63789ee9c0de2", 
"created_at": "2017-02-22T10:47:24Z", "updated_at": "2017-02-22T10:47:26Z", 
"flavor_id": null, "revision_number": 8, "routes": [], "project_id": 
"8b9cd1ebd13f4172a0c63789ee9c0de2", "id": 
"545e74b0-2f3d-43b8-8678-93bbb3f1f6f3", "name": "test"}} _log_request_full 
tempest/lib/common/rest_client.py:431
  > 
/home/centos/tempest-upstream/neutron/neutron/tests/tempest/api/test_timestamp.py(222)test_show_router_attribute_with_timestamp()

  
  ipdb> router
  {u'status': u'ACTIVE', u'external_gateway_info': None, 
u'availability_zone_hints': [], u'availability_zones': [], u'description': u'', 
u'admin_state_up': False, u'tenant_id': u'8b9cd1ebd13f4172a0c63789ee9c0de2', 
u'created_at': u'2017-02-22T10:47:24Z', u'updated_at': u'2017-02-22T10:47:24Z', 
u'flavor_id': None, u'revision_number': 3, u'routes': [], u'project_id': 
u'8b9cd1ebd13f4172a0c63789ee9c0de2', u'id': 
u'545e74b0-2f3d-43b8-8678-93bbb3f1f6f3', u'name': u'test'}
  ipdb> show_router
  {u'status': u'ACTIVE', u'external_gateway_info': None, 
u'availability_zone_hints': [], u'availability_zones': [u'nova'], 
u'description': u'', u'admin_state_up': False, u'tenant_id': 
u'8b9cd1ebd13f4172a0c63789ee9c0de2', u'created_at': u'2017-02-22T10:47:24Z', 
u'updated_at': u'2017-02-22T10:47:26Z', u'flavor_id': None, u'revision_number': 
8, u'routes': [], u'project_id': u'8b9cd1ebd13f4172a0c63789ee9c0de2', u'id': 
u'545e74b0-2f3d-43b8-8678-93bbb3f1f6f3', u'name': u'test'}
  ipdb>

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1666864/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1510234] Re: Heartbeats stop when time is changed

2017-02-22 Thread Roman Podoliaka
Change to Nova: https://review.openstack.org/#/c/434327/

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1510234

Title:
  Heartbeats stop when time is changed

Status in OpenStack Compute (nova):
  In Progress
Status in oslo.service:
  Fix Released

Bug description:
  Heartbeats stop working when you mess with the system time. If a
  monotonic clock were used, they would continue to work when the system
  time was changed.

  Steps to reproduce:

  1. List the nova services ('nova-manage service list'). Note that the
  'State' for each services is a happy face ':-)'.

  2. Move the time ahead (for example 2 hours in the future), and then
  list the nova services again. Note that heartbeats continue to work
  and use the future time (see 'Updated_At').

  3. Revert back to the actual time, and list the nova services again.
  Note that all heartbeats stop, and have a 'State' of 'XXX'.

  4. The heartbeats will start again in 2 hours when the actual time
  catches up to the future time, or if you restart the services.

  5. You'll see a log message like the following when the heartbeats
  stop:

  2015-10-26 17:14:10.538 DEBUG nova.servicegroup.drivers.db [req-
  c41a2ad7-e5a5-4914-bdc8-6c1ca8b224c6 None None] Seems service is down.
  Last heartbeat was 2015-10-26 17:20:20. Elapsed time is -369.461679
  from (pid=13994) is_up
  /opt/stack/nova/nova/servicegroup/drivers/db.py:80

  Here's example output demonstrating the issue:

  http://paste.openstack.org/show/477404/

  See bug #1450438 for more context:

  https://bugs.launchpad.net/oslo.service/+bug/1450438

  Long story short: looping call is using the built-in time rather than
  a  monotonic clock for sleeps.

  
https://github.com/openstack/oslo.service/blob/3d79348dae4d36bcaf4e525153abf74ad4bd182a/oslo_service/loopingcall.py#L122
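
  A hedged sketch of the fix direction (illustrative, not the oslo.service
  patch): pace the loop with time.monotonic(), which is immune to
  settimeofday/NTP step changes, unlike time.time():

  import time

  class HeartbeatLoop:
      def __init__(self, interval):
          self.interval = interval
          self._next = time.monotonic() + interval

      def wait(self):
          # Sleeping against the monotonic clock keeps the cadence steady
          # even if the wall clock jumps forwards or backwards.
          delay = self._next - time.monotonic()
          if delay > 0:
              time.sleep(delay)
          self._next += self.interval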

  Oslo Service: version 0.11
  Nova: master (commit 2c3f9c339cae24576fefb66a91995d6612bb4ab2)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1510234/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1666949] [NEW] PCI devices cannot be detached if all the PCI devices are claimed

2017-02-22 Thread Claudiu Belu
Public bug reported:

Description
===

PCI devices are requested through flavor extra_specs, and they can be
attached / detached by resizing to a flavor having more / fewer PCI
devices in its extra_specs.

But if all the PCI devices have been claimed, they cannot be detached
anymore, as the cold resize will fail.
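
For illustration, a hedged sketch of the flavors involved (the alias name
"gpu" and the flavor names are hypothetical; PCI devices are requested via
the pci_passthrough:alias extra spec):

flavor_with_pci = {
    'name': 'pci.both',
    # claim both whitelisted devices
    'extra_specs': {'pci_passthrough:alias': 'gpu:2'},
}
flavor_without_pci = {
    'name': 'pci.none',
    # resizing to this should detach both devices, but the cold resize fails
    'extra_specs': {},
}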


Steps to reproduce
==

* Let's say we have 2 PCI devices whitelisted on a compute node.
* Create a flavor which requires both PCI devices.
* Create an instance with said flavor.
* Resize the instance to a flavor without any PCI devices.


Expected result
===

The instance should be resized, and have no PCI devices attached.


Actual result
=

The instance is in ERROR state, the cold resize failed.


Logs & Configs
==

[1] resize attempt, n-api.log, n-sch.log: 
http://paste.openstack.org/show/598884/
[2] #openstack-nova discussion: 
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-02-14.log.html#t2017-02-14T19:45:02

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: pci resize scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1666949

Title:
  PCI devices cannot be detached if all the PCI devices are claimed

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  PCI devices are requested through flavor extra_specs, and they can be
  attached / detached by resizing to a flavor having more / fewer PCI
  devices in its extra_specs.

  But if all the PCI devices have been claimed, they cannot be detached
  anymore, as the cold resize will fail.

  
  Steps to reproduce
  ==

  * Let's say we have 2 PCI devices whitelisted on a compute node.
  * Create a flavor which requires both PCI devices.
  * Create an instance with said flavor.
  * Resize the instance to a flavor without any PCI devices.

  
  Expected result
  ===

  The instance should be resized, and have no PCI devices attached.

  
  Actual result
  =

  The instance is in ERROR state, the cold resize failed.

  
  Logs & Configs
  ==

  [1] resize attempt, n-api.log, n-sch.log: 
http://paste.openstack.org/show/598884/
  [2] #openstack-nova discussion: 
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-02-14.log.html#t2017-02-14T19:45:02

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1666949/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1666936] [NEW] Placement API client doesn't honor insecure nor cafile parameters

2017-02-22 Thread György Szombathelyi
Public bug reported:

The connection to the Placement API from Nova doesn't allow specifying
an insecure or a cafile parameter. This is a problem if those are used
in the Keystone or Neutron clients, for example; then it is expected
that the Placement client uses them, too.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1666936

Title:
  Placement API client doesn't honor insecure nor cafile parameters

Status in OpenStack Compute (nova):
  New

Bug description:
  The connection to the Placement API from Nova doesn't allow specifying
  an insecure or a cafile parameter. This is a problem if those are used
  in the Keystone or Neutron clients, for example; then it is expected
  that the Placement client uses them, too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1666936/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1666915] [NEW] Config drive test has bad network_info

2017-02-22 Thread daniel.pawlik
Public bug reported:

The unit test for the config drive has bad network_info.

Normally network_info should be a list of dicts (information about the
instance's network interfaces).


In test_create_configdrive, network_info is mocked and it's not a list.
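
A hedged sketch of the list-of-dicts shape described above (the field names
are illustrative of nova's network_info, not copied from the test):

network_info = [
    {
        'id': 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee',  # port UUID
        'address': 'fa:16:3e:00:00:01',                # MAC address
        'network': {'label': 'private', 'subnets': []},
    },
]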

** Affects: nova
 Importance: Undecided
 Assignee: daniel.pawlik (daniel-pawlik)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => daniel.pawlik (daniel-pawlik)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1666915

Title:
  Config drive test has bad network_info

Status in OpenStack Compute (nova):
  New

Bug description:
  The unit test for the config drive has bad network_info.

  Normally network_info should be a list of dicts (information about the
  instance's network interfaces).


  In test_create_configdrive, network_info is mocked and it's not a list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1666915/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622460] Re: neutron-server report 'FirewallNotFound' when delete firewall under l3_ha mode

2017-02-22 Thread Cedric Brandily
*** This bug is a duplicate of bug 1658060 ***
https://bugs.launchpad.net/bugs/1658060

** This bug has been marked a duplicate of bug 1658060
   FirewallNotFound exceptions when deleting the firewall in FWaaS-DVR

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622460

Title:
  neutron-server report 'FirewallNotFound' when delete firewall under
  l3_ha mode

Status in neutron:
  Confirmed

Bug description:
  When we delete a firewall under l3-ha mode, some neutron-servers
  report 'FirewallNotFound' error logs:

  Environment:
  * Openstack Kilo version
  * Three neutron servers using ha-proxy in balance roundrobin mode, and 
provides VIP (keepalived)
  * l3_ha=True is set in neutron-servers to provide L3 HA.
  * Three l3-agents on 3 network nodes

  We found that 2 out of 3 neutron-servers printed the following error logs:

  ---
  Error logs:
  ===
  2016-09-12 14:33:34.250 22722 DEBUG oslo_messaging._drivers.amqp [-] unpacked 
context: {u'read_deleted': u'no', u'project_name': u'zhaoyi', u'user_id': 
u'5f228382dd8d4001bd079cfab624e870', u'roles': [u'_member_', u'Member', 
u'user', u'admin'], u'tenant_id': u'd3147020bd1f4a709654b7e62885bd9f', 
u'auth_token': u'***', u'request_id': 
u'req-89116778-b4fb-4232-8249-500f1db5d3f8', u'is_admin': True, u'user': 
u'5f228382dd8d4001bd079cfab624e870', u'timestamp': u'2016-09-12 
06:33:33.570294', u'tenant_name': u'zhaoyi', u'project_id': 
u'd3147020bd1f4a709654b7e62885bd9f', u'user_name': u'zhaoyi', u'tenant': 
u'd3147020bd1f4a709654b7e62885bd9f'} unpack_context 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:203
  2016-09-12 14:33:34.253 22722 DEBUG 
neutron_fwaas.services.firewall.fwaas_plugin 
[req-89116778-b4fb-4232-8249-500f1db5d3f8 ] firewall_deleted() called 
firewall_deleted 
/usr/lib/python2.7/site-packages/neutron_fwaas/services/firewall/fwaas_plugin.py:62
  2016-09-12 14:33:34.260 22722 DEBUG neutron_fwaas.db.firewall.firewall_db 
[req-89116778-b4fb-4232-8249-500f1db5d3f8 ] delete_firewall() called 
delete_firewall 
/usr/lib/python2.7/site-packages/neutron_fwaas/db/firewall/firewall_db.py:362
  2016-09-12 14:33:34.303 22759 DEBUG keystoneclient.session [-] RESP: [200] 
content-length: 3084 x-subject-token: 
{SHA1}fb0b83f9b5d3a4b459a1f2845c1a1bd4bba3c008 vary: X-Auth-Token date: Mon, 12 
Sep 2016 06:33:34 GMT content-type: application/json x-openstack-request-id: 
req-38cd15ab-f193-4d04-80bd-088638497b26
  RESP BODY: {"token": {"methods": ["password", "token"], "roles": [{"id": 
"9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}, {"id": 
"bf49f6a34f5b4ec8843efee8a840a8b3", "name": "Member"}, {"id": 
"a104bc435a5d4031b9712dd702cb8672", "name": "user"}, {"id": 
"11b9aa45b311407ba9460b95eb1534c2", "name": "admin"}], "expires_at": 
"2016-09-12T07:03:12.00Z", "project": {"domain": {"id": "default", "name": 
"Default"}, "id": "d3147020bd1f4a709654b7e62885bd9f", "name": "zhaoyi"}, 
"catalog": "", "extras": {}, "user": {"domain": {"id": "default", 
"name": "Default"}, "id": "5f228382dd8d4001bd079cfab624e870", "name": 
"zhaoyi"}, "audit_ids": ["Q9kbnQe8SzOcZ7ig6_-2ew", "guy9k1VsThmCGwfavbmvjw"], 
"issued_at": "2016-09-12T06:03:12.784462"}}
   _http_log_response 
/usr/lib/python2.7/site-packages/keystoneclient/session.py:224
  2016-09-12 14:33:34.307 22759 DEBUG neutron_fwaas.db.firewall.firewall_db 
[req-b354359e-4536-43bd-a0ca-6ffc27ef72d7 ] get_firewall_rules() called 
get_firewall_rules 
/usr/lib/python2.7/site-packages/neutron_fwaas/db/firewall/firewall_db.py:534
  2016-09-12 14:33:34.322 22759 INFO neutron.wsgi 
[req-b354359e-4536-43bd-a0ca-6ffc27ef72d7 ] 10.65.0.99,10.65.0.99 - - 
[12/Sep/2016 14:33:34] "GET /v2.0/fw/firewall_rules.json HTTP/1.1" 200 6555 
0.163946
  2016-09-12 14:33:34.447 22722 ERROR oslo_messaging.rpc.dispatcher 
[req-89116778-b4fb-4232-8249-500f1db5d3f8 ] Exception during message handling: 
Firewall 6278942e-6485-4ceb-92e5-8ddfa9fb4d25 could not be found.
  2016-09-12 14:33:34.447 22722 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2016-09-12 14:33:34.447 22722 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2016-09-12 14:33:34.447 22722 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2016-09-12 14:33:34.447 22722 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2016-09-12 14:33:34.447 22722 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2016-09-12 14:33:34.447 22722 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2016-09-12 14:33:34.447 22722 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2016-09-12 14:33:34.447 22722 TRACE 

[Yahoo-eng-team] [Bug 1666864] [NEW] function does not return correct create router timestamp

2017-02-22 Thread Eran Kuris
Public bug reported:

Version: OSP-10 Newton
Test failed : 
neutron.tests.tempest.api.test_timestamp.TestTimeStampWithL3.test_show_router_attribute_with_timestamp

At creation the timestamp is correct: u'created_at':
u'2017-02-22T10:47:24Z', u'updated_at': u'2017-02-22T10:47:24Z'

when "show" function display the timestamp it display it with ~3 sec difference:
u'created_at': u'2017-02-22T10:47:24Z', u'updated_at': u'2017-02-22T10:47:26Z'

show function display incorrect timestamp

218 def test_show_router_attribute_with_timestamp(self):
219 router = self.create_router(router_name='test')
220 import ipdb;ipdb.set_trace()
--> 221 body = self.client.show_router(router['id'])
222 show_router = body['router']
223 # verify the timestamp from creation and showed is same
224 import ipdb;ipdb.set_trace()
225 self.assertEqual(router['created_at'],
226  show_router['created_at'])

ipdb> router
{u'status': u'ACTIVE', u'external_gateway_info': None, 
u'availability_zone_hints': [], u'availability_zones': [], u'description': u'', 
u'admin_state_up': False, u'tenant_id': u'8b9cd1ebd13f4172a0c63789ee9c0de2', 
u'created_at': u'2017-02-22T10:47:24Z', u'updated_at': u'2017-02-22T10:47:24Z', 
u'flavor_id': None, u'revision_number': 3, u'routes': [], u'project_id': 
u'8b9cd1ebd13f4172a0c63789ee9c0de2', u'id': 
u'545e74b0-2f3d-43b8-8678-93bbb3f1f6f3', u'name': u'test'}
ipdb> n
2017-02-22 10:47:55.084 6919 INFO tempest.lib.common.rest_client 
[req-eef7ded4-bb01-4401-96cd-325c01b2230b ] Request 
(TestTimeStampWithL3:test_show_router_attribute_with_timestamp): 200 GET 
http://10.0.0.104:9696/v2.0/routers/545e74b0-2f3d-43b8-8678-93bbb3f1f6f3 0.224s
2017-02-22 10:47:55.086 6919 DEBUG tempest.lib.common.rest_client 
[req-eef7ded4-bb01-4401-96cd-325c01b2230b ] Request - Headers: {'Content-Type': 
'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
Body: None
Response - Headers: {'status': '200', u'content-length': '462', 
'content-location': 
'http://10.0.0.104:9696/v2.0/routers/545e74b0-2f3d-43b8-8678-93bbb3f1f6f3', 
u'date': 'Wed, 22 Feb 2017 10:47:56 GMT', u'content-type': 'application/json', 
u'connection': 'close', u'x-openstack-request-id': 
'req-eef7ded4-bb01-4401-96cd-325c01b2230b'}
Body: {"router": {"status": "ACTIVE", "external_gateway_info": null, 
"availability_zone_hints": [], "availability_zones": ["nova"], "description": 
"", "admin_state_up": false, "tenant_id": "8b9cd1ebd13f4172a0c63789ee9c0de2", 
"created_at": "2017-02-22T10:47:24Z", "updated_at": "2017-02-22T10:47:26Z", 
"flavor_id": null, "revision_number": 8, "routes": [], "project_id": 
"8b9cd1ebd13f4172a0c63789ee9c0de2", "id": 
"545e74b0-2f3d-43b8-8678-93bbb3f1f6f3", "name": "test"}} _log_request_full 
tempest/lib/common/rest_client.py:431
> /home/centos/tempest-upstream/neutron/neutron/tests/tempest/api/test_timestamp.py(222)test_show_router_attribute_with_timestamp()


ipdb> router
{u'status': u'ACTIVE', u'external_gateway_info': None, 
u'availability_zone_hints': [], u'availability_zones': [], u'description': u'', 
u'admin_state_up': False, u'tenant_id': u'8b9cd1ebd13f4172a0c63789ee9c0de2', 
u'created_at': u'2017-02-22T10:47:24Z', u'updated_at': u'2017-02-22T10:47:24Z', 
u'flavor_id': None, u'revision_number': 3, u'routes': [], u'project_id': 
u'8b9cd1ebd13f4172a0c63789ee9c0de2', u'id': 
u'545e74b0-2f3d-43b8-8678-93bbb3f1f6f3', u'name': u'test'}
ipdb> show_router
{u'status': u'ACTIVE', u'external_gateway_info': None, 
u'availability_zone_hints': [], u'availability_zones': [u'nova'], 
u'description': u'', u'admin_state_up': False, u'tenant_id': 
u'8b9cd1ebd13f4172a0c63789ee9c0de2', u'created_at': u'2017-02-22T10:47:24Z', 
u'updated_at': u'2017-02-22T10:47:26Z', u'flavor_id': None, u'revision_number': 
8, u'routes': [], u'project_id': u'8b9cd1ebd13f4172a0c63789ee9c0de2', u'id': 
u'545e74b0-2f3d-43b8-8678-93bbb3f1f6f3', u'name': u'test'}
ipdb>
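
For reference, the same check can be driven outside tempest. Below is a
minimal sketch using python-neutronclient; the credentials and keystone
endpoint are placeholders, not values taken from this report:

    from neutronclient.v2_0 import client

    # placeholder credentials/endpoint -- substitute real ones
    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://10.0.0.104:5000/v2.0')

    created = neutron.create_router({'router': {'name': 'test'}})['router']
    shown = neutron.show_router(created['id'])['router']

    # in the traces above created_at matches while updated_at drifts and
    # revision_number jumps from 3 to 8, i.e. the router was updated again
    # between the create response and the show call
    assert created['created_at'] == shown['created_at']
    print(created['updated_at'], shown['updated_at'])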

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1666864

Title:
  function does not return correct create router timestamp

Status in neutron:
  New

Bug description:
  Version: OSP-10 Newton
  Test failed:
neutron.tests.tempest.api.test_timestamp.TestTimeStampWithL3.test_show_router_attribute_with_timestamp

  At creation the timestamp is correct: u'created_at':
  u'2017-02-22T10:47:24Z', u'updated_at': u'2017-02-22T10:47:24Z'

  when "show" function display the timestamp it display it with ~3 sec 
difference:
  u'created_at': u'2017-02-22T10:47:24Z', u'updated_at': u'2017-02-22T10:47:26Z'

  The show function displays an incorrect timestamp.

  218 def test_show_router_attribute_with_timestamp(self):
  219 router = self.create_router(router_name='test')
  220 import ipdb

[Yahoo-eng-team] [Bug 1666827] Re: Backport fixes for Rename Network return 403 Error

2017-02-22 Thread Louis Bouchard
** Also affects: horizon (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Changed in: horizon (Ubuntu Yakkety)
   Status: New => Fix Released

** Changed in: horizon (Ubuntu Trusty)
   Status: New => Triaged

** Changed in: horizon (Ubuntu Xenial)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1666827

Title:
  Backport fixes for Rename Network return 403 Error

Status in Ubuntu Cloud Archive:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in horizon package in Ubuntu:
  New
Status in horizon source package in Trusty:
  Triaged
Status in horizon source package in Xenial:
  Triaged
Status in horizon source package in Yakkety:
  Fix Released

Bug description:
  [Impact]
  Non-admin users are not allowed to change the name of a network using the 
OpenStack Dashboard GUI

  [Test Case]
  1. Deploy trusty-mitaka or xenial-mitaka OpenStack Cloud
  2. Create demo project
  3. Create demo user
  4. Log into OpenStack Dashboard using demo user
  5. Go to Project -> Network and create a network
  6. Go to Project -> Network and Edit the just created network
  7. Change the name and click Save
  8. Observe that your request is denied with an error message (an API-level cross-check is sketched after this list)
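
  As a cross-check, the rename can also be attempted directly against the
  Neutron API; if it succeeds there, the denial is on the dashboard side.
  A minimal sketch with python-neutronclient (credentials and endpoint are
  placeholders):

      from neutronclient.v2_0 import client

      # placeholder credentials/endpoint for the demo user
      neutron = client.Client(username='demo', password='secret',
                              tenant_name='demo',
                              auth_url='http://<keystone-host>:5000/v2.0')

      net = neutron.create_network({'network': {'name': 'demo-net'}})['network']
      # an owner-scoped rename; horizon's policy_check is not involved here
      neutron.update_network(net['id'],
                             {'network': {'name': 'demo-net-renamed'}})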

  [Regression Potential]
  Minimal.

  We are adding a patch already merged into upstream stable/mitaka for
  the horizon call to policy_check before sending request to Neutron
  when updating networks.

  The addition of rule "update_network:shared" to horizon's copy of
  Neutron policy.json is our own, since upstream was unwilling to backport
  this required change. This rule is not referenced anywhere else
  in the code base so it will not affect other policy_check calls.
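
  For illustration only, the kind of policy_check gate described above can
  be written with horizon's policy helper; the target keys below are an
  assumption for the sketch, not the merged patch:

      from openstack_dashboard import policy

      def can_update_network(request, network):
          # the "network" scope resolves against horizon's copy of the
          # Neutron policy.json, where "update_network:shared" was added
          target = {'network:shared': network.get('shared', False)}
          return policy.check((("network", "update_network"),), request,
                              target=target)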

  Upstream bug: https://bugs.launchpad.net/horizon/+bug/1609467

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1666827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1666827] Re: Backport fixes for Rename Network return 403 Error

2017-02-22 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1666827

Title:
  Backport fixes for Rename Network return 403 Error

Status in Ubuntu Cloud Archive:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in horizon package in Ubuntu:
  New
Status in horizon source package in Trusty:
  Triaged
Status in horizon source package in Xenial:
  Triaged
Status in horizon source package in Yakkety:
  Fix Released

Bug description:
  [Impact]
  Non-admin users are not allowed to change the name of a network using the 
OpenStack Dashboard GUI

  [Test Case]
  1. Deploy trusty-mitaka or xenial-mitaka OpenStack Cloud
  2. Create demo project
  3. Create demo user
  4. Log into OpenStack Dashboard using demo user
  5. Go to Project -> Network and create a network
  6. Go to Project -> Network and Edit the just created network
  7. Change the name and click Save
  8. Observe that your request is denied with an error message

  [Regression Potential]
  Minimal.

  We are adding a patch already merged into upstream stable/mitaka for
  the horizon call to policy_check before sending request to Neutron
  when updating networks.

  The addition of rule "update_network:shared" to horizon's copy of
  Neutron policy.json is our own, since upstream was unwilling to backport
  this required change. This rule is not referenced anywhere else
  in the code base so it will not affect other policy_check calls.

  Upstream bug: https://bugs.launchpad.net/horizon/+bug/1609467

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1666827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1666827] Re: Backport fixes for Rename Network return 403 Error

2017-02-22 Thread Edward Hope-Morley
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1666827

Title:
  Backport fixes for Rename Network return 403 Error

Status in Ubuntu Cloud Archive:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in horizon package in Ubuntu:
  New
Status in horizon source package in Trusty:
  Triaged
Status in horizon source package in Xenial:
  Triaged
Status in horizon source package in Yakkety:
  Fix Released

Bug description:
  [Impact]
  Non-admin users are not allowed to change the name of a network using the 
OpenStack Dashboard GUI

  [Test Case]
  1. Deploy trusty-mitaka or xenial-mitaka OpenStack Cloud
  2. Create demo project
  3. Create demo user
  4. Log into OpenStack Dashboard using demo user
  5. Go to Project -> Network and create a network
  6. Go to Project -> Network and Edit the just created network
  7. Change the name and click Save
  8. Observe that your request is denied with an error message

  [Regression Potential]
  Minimal.

  We are adding a patch already merged into upstream stable/mitaka for
  the horizon call to policy_check before sending request to Neutron
  when updating networks.

  The addition of rule "update_network:shared" to horizon's copy of
  Neutron policy.json is our own, since upstream was unwilling to backport
  this required change. This rule is not referenced anywhere else
  in the code base so it will not affect other policy_check calls.

  Upstream bug: https://bugs.launchpad.net/horizon/+bug/1609467

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1666827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1666831] [NEW] Nova recreates instance directory after migration/resize

2017-02-22 Thread Maciej Jozefczyk
Public bug reported:

Description
===
Nova recreates the instance directory on the source host after a successful
migration/resize when using QEMU Qcow2 file-backed disks.


After migration, Nova executes driver.confirm_migration().
This method cleans up the instance directory (the one suffixed with _resize):

nova/virt/libvirt/driver.py
1115 if os.path.exists(target):
1116     # Deletion can fail over NFS, so retry the deletion as required.
1117     # Set maximum attempt as 5, most test can remove the directory
1118     # for the second time.
1119     utils.execute('rm', '-rf', target, delay_on_retry=True,
1120                   attempts=5)

After that Nova executes:
1122 root_disk = self.image_backend.by_name(instance, 'disk')

root_disk is used to remove rbd snapshots, but during the execution of
self.image_backend.by_name() Nova recreates the instance directory.

Flow:

driver.confirm_migration()->self._cleanup_resize()->self.image_backend.by_name()
-> (nova/virt/libvirt/imagebackend.py)
image_backend.by_name()->Qcow2.__init__()->Qcow2.resolve_driver_format().

Qcow2.resolve_driver_format():
 344 if self.disk_info_path is not None:
 345     fileutils.ensure_tree(os.path.dirname(self.disk_info_path))
 346     write_to_disk_info_file()
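
The side effect is easy to demonstrate in isolation. Below is a
self-contained sketch (the paths are illustrative, not nova's real layout)
of how an ensure_tree-style call made after the 'rm -rf' silently brings
the directory back:

    import os
    import shutil
    import tempfile

    base = tempfile.mkdtemp()
    inst_dir = os.path.join(base, 'instance-0001')  # illustrative path
    os.makedirs(inst_dir)

    # step 1: the cleanup, as confirm_migration() does via 'rm -rf'
    shutil.rmtree(inst_dir)
    assert not os.path.exists(inst_dir)

    # step 2: resolving the image backend "ensures" the parent dir of
    # disk.info exists, which recreates the directory just removed
    disk_info_path = os.path.join(inst_dir, 'disk.info')
    os.makedirs(os.path.dirname(disk_info_path))  # ~ fileutils.ensure_tree
    print(os.path.exists(inst_dir))  # True -- the directory is back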


Steps to reproduce
===

- spawn an instance
- migrate/resize the instance
- check that the instance dir on the old host still exists (example:
  /home/instances/<instance_uuid>/disk.info)


Expected result
===
After migration the directory /home/instances/<instance_uuid> and the file
/home/instances/<instance_uuid>/disk.info should not exist.

Actual result
===
Nova leaves the instance directory behind after migration/resize.


Environment
===
1. OpenStack Newton (it seems master is affected too).

2. Libvirt + KVM

3. Qcow2 file images on local disk.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1666831

Title:
  Nova recreates instance directory after migration/resize

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Nova recreates the instance directory on the source host after a successful
migration/resize when using QEMU Qcow2 file-backed disks.

  
  After migration, Nova executes driver.confirm_migration().
  This method cleans up the instance directory (the one suffixed with _resize):

  nova/virt/libvirt/driver.py
  1115 if os.path.exists(target):
  1116     # Deletion can fail over NFS, so retry the deletion as required.
  1117     # Set maximum attempt as 5, most test can remove the directory
  1118     # for the second time.
  1119     utils.execute('rm', '-rf', target, delay_on_retry=True,
  1120                   attempts=5)

  After that Nova executes:
  1122 root_disk = self.image_backend.by_name(instance, 'disk')

  root_disk is used to remove rbd snapshots, but during the execution of
  self.image_backend.by_name() Nova recreates the instance directory.

  Flow:

  
driver.confirm_migration()->self._cleanup_resize()->self.image_backend.by_name()
  -> (nova/virt/libvirt/imagebackend.py)
  image_backend.by_name()->Qcow2.__init__()->Qcow2.resolve_driver_format().

  Qcow2.resolve_driver_format():
   344 if self.disk_info_path is not None:
   345     fileutils.ensure_tree(os.path.dirname(self.disk_info_path))
   346     write_to_disk_info_file()

  
  Steps to reproduce
  ===

  - spawn an instance
  - migrate/resize the instance
  - check that the instance dir on the old host still exists (example:
    /home/instances/<instance_uuid>/disk.info)

  
  Expected result
  ===
  After migration the directory /home/instances/<instance_uuid> and the file
  /home/instances/<instance_uuid>/disk.info should not exist.

  Actual result
  ===
  Nova leaves the instance directory behind after migration/resize.

  
  Environment
  ===
  1. OpenStack Newton (it seems master is affected too).

  2. Libvirt + KVM

  3. Qcow2 file images on local disk.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1666831/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp