[Yahoo-eng-team] [Bug 1719069] [NEW] Can't disable gateway when using a subnet pool to create a subnet

2017-09-23 Thread wei.ying
Public bug reported:

Env: devstack master branch

Desc:
In the create subnet form, if "Allocate Network Address from a pool" is
selected in the "Network Address Source" drop-down box and "Disable
Gateway" is checked, the subnet that gets created still contains a
gateway IP.

** Affects: horizon
 Importance: Undecided
 Assignee: wei.ying (wei.yy)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => wei.ying (wei.yy)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1719069

Title:
  Can't disable gateway when using a subnet pool to create a subnet

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Env: devstack master branch

  Desc:
  In the create subnet form, if "Allocate Network Address from a pool"
  is selected in the "Network Address Source" drop-down box and
  "Disable Gateway" is checked, the subnet that gets created still
  contains a gateway IP.
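
  For reference, disabling the gateway at subnet creation maps to an
  explicit null gateway_ip in the Neutron request, which is what the
  form should end up sending; a minimal sketch, assuming
  python-neutronclient and placeholder IDs:

      from neutronclient.v2_0 import client

      # sess is an authenticated keystoneauth1 session (setup omitted);
      # NETWORK_ID and SUBNETPOOL_ID are placeholders
      neutron = client.Client(session=sess)
      neutron.create_subnet(body={'subnet': {
          'network_id': NETWORK_ID,
          'subnetpool_id': SUBNETPOOL_ID,
          'ip_version': 4,
          'prefixlen': 26,
          'gateway_ip': None,  # serialized as null: no gateway
      }})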

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1719069/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1719141] [NEW] Kick off Ansible Playbook from Keystone Actions

2017-09-23 Thread Adam Young
Public bug reported:

When a federated user logs in for the first time, many organizations
want to be able to provision resources.  This is a specific instance of
the general idea that a Keystone token operation should be able to kick
off a playbook.  Playbooks can perform both OpenStack-specific actions,
such as project creation, as well as non-OpenStack tasks, such as
creating resources in third-party systems like LDAP.

** Affects: keystone
 Importance: Wishlist
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1719141

Title:
  Kick off Ansible Playbook from Keystone Actions

Status in OpenStack Identity (keystone):
  New

Bug description:
  When a federated user logs in for the first time, many organizations
  want to be able to provision resources.  This is a specific instance
  of the general idea that a Keystone token operation should be able to
  kick off a playbook.  Playbooks can perform both OpenStack-specific
  actions, such as project creation, as well as non-OpenStack tasks,
  such as creating resources in third-party systems like LDAP.
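
  One way to prototype this, sketched under assumptions: keystone's
  CADF notifications are enabled and a playbook named provision.yml
  exists (both illustrative). A listener on the notification bus can
  then shell out to ansible-playbook on login events:

      import subprocess

      from oslo_config import cfg
      import oslo_messaging

      class PlaybookEndpoint(object):
          def info(self, ctxt, publisher_id, event_type, payload, metadata):
              # keystone emits identity.authenticate when a token is issued
              if event_type == 'identity.authenticate':
                  user_id = payload.get('initiator', {}).get('id', '')
                  subprocess.run(['ansible-playbook', 'provision.yml',
                                  '-e', 'user_id=%s' % user_id])

      transport = oslo_messaging.get_notification_transport(cfg.CONF)
      targets = [oslo_messaging.Target(topic='notifications')]
      listener = oslo_messaging.get_notification_listener(
          transport, targets, [PlaybookEndpoint()], executor='threading')
      listener.start()
      listener.wait()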

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1719141/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1698253] Re: VMware vSphere in Configuration Reference: InvalidInput: Invalid input received: vif type bridge not supported

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1698253

Title:
  VMware vSphere in Configuration Reference: InvalidInput: Invalid input
  received: vif type bridge not supported

Status in OpenStack Compute (nova):
  Expired

Bug description:
  InvalidInput: Invalid input received: vif type bridge not supported

  The error is raised while launching an instance; the code in
  OpenStack Newton has not been updated to handle Linux bridge as the
  mechanism driver instead of OVS or DVS. The code needs to be fixed;
  only minor changes in the vif.py file are required.
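
  For context, the failure comes from a dispatch on the vif type in the
  VMware driver's vif.py that only knows OVS and DVS ports. A
  standalone analogue (illustrative, not the Newton source):

      # Illustrative analogue of the vif-type dispatch; not nova's code.
      SUPPORTED_VIF_TYPES = ('ovs', 'dvs')

      def get_network_ref(vif):
          if vif['type'] not in SUPPORTED_VIF_TYPES:
              raise ValueError('vif type %s not supported' % vif['type'])
          return 'network-ref-for-%s' % vif['type']

      # A Linux bridge port falls through to the error branch:
      # get_network_ref({'type': 'bridge'}) raises, mirroring the report.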

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1698253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1687073] Re: Keystone Memory usage remains high

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1687073

Title:
  Keystone Memory usage remains high

Status in OpenStack Identity (keystone):
  Expired

Bug description:
  I found something interesting while doing a quick load test of
  keystone / newton. When I started the load test, the memory usage of
  the keystone processes (admin and public wsgi) went up – and it never
  came down, even hours after the test was stopped. I also found a few
  errors in the log (given below).

  Also, many functions in resource/backends/sql.py are not closing
  their sessions once opened. Do we need to close the sessions
  explicitly? Is that the reason for the persistent high memory usage?

  The error below is thrown during the test. I guess it may be due to
  settings in keystone.conf; not sure it has anything to do with memory
  cleanup. Attached is the script to execute the stress test; it
  launches 40 threads and hits keystone at the same time.
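
  On the session question: keystone scopes SQL sessions with context
  managers so they are returned to the pool automatically; a minimal
  sketch of that pattern, assuming keystone.common.sql (the query
  itself is illustrative):

      from keystone.common import sql

      def count_rows(model):
          # session_for_read() returns the session to the pool when the
          # block exits, even on exceptions; queries opened without this
          # scoping can pin QueuePool connections, as in the error below
          with sql.session_for_read() as session:
              return session.query(model).count()
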
  --
  Error-

  2017-04-28 14:17:20.702 653 INFO keystone.common.wsgi [req-651d1776-9e5c-405c-82d2-3efe7dbcd5f3 - - - - -] POST http://10.10.10.2:5000/v3/auth/tokens
  2017-04-28 14:17:20.878 691 INFO keystone.common.wsgi [req-8bd6baa6-976d-41b5-817a-554b3a7d6c54 - - - - -] GET http://192.168.204.2:35357/v3/
  2017-04-28 14:17:20.898 691 INFO keystone.common.wsgi [req-da74eaeb-34d9-4190-8477-fd14a16fab3f b94369832d4d41cea555a9e98c216dd7 f9f5aa29f7994730b0fc845aaba5ade5 - default default] GET http://192.168.204.2:35357/v3/projects
  2017-04-28 14:17:20.915 692 INFO keystone.common.wsgi [req-38122a48-2c97-4b41-b3a0-b2a9062c68ac - - - - -] POST http://10.10.10.2:5000/v3/auth/tokens
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi [req-2208fddc-6801-4a9c-a6fd-22cfd310427d - - - - -] QueuePool limit of size 1 overflow 10 reached, connection timed out, timeout 30
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi Traceback (most recent call last):
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 225, in __call__
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi     result = method(req, **params)
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/auth/controllers.py", line 397, in authenticate_for_token
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi     auth_info = AuthInfo.create(auth=auth)
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/auth/controllers.py", line 137, in create
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi     auth_info._validate_and_normalize_auth_data(scope_only)
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/auth/controllers.py", line 310, in _validate_and_normalize_auth_data
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi     self._validate_and_normalize_scope_data()
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/auth/controllers.py", line 252, in _validate_and_normalize_scope_data
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi     project_ref = self._lookup_project(self.auth['scope']['project'])
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/auth/controllers.py", line 215, in _lookup_project
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi     domain_ref = self._lookup_domain(project_info['domain'])
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/auth/controllers.py", line 189, in _lookup_domain
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi     domain_ref = self.resource_api.get_domain(domain_id)
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 124, in wrapped
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi     __ret_val = __f(*args, **kwargs)
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 1220, in decorate
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi     should_cache_fn)
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 825, in get_or_create
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi     async_creator) as value:
  2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/dogpile/

[Yahoo-eng-team] [Bug 1685010] Re: Able to spawn > max_instances_per_host with NumInstancesFilter

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1685010

Title:
  Able to spawn > max_instances_per_host with NumInstancesFilter

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Description of problem:
  While attempting to achieve an even distribution of tiny instances
  across 31 compute nodes (some with 64GiB and some with 128GiB of
  memory), I tried to use the NumInstancesFilter to limit the number of
  instances per compute to 2 (a total limit of 62 instances for this
  testbed, since 31 x 2 = 62). I then launched 70 guests serially
  (concurrency of 1) using a rally scenario built for this with
  persisting rally instances. 67 instances were able to be booted and
  only 3 failed.

  Two hosts ended up with 4 instances, one ended up with 3 instances and
  the rest with 2 instances as they should have.

  
  I used the following settings in nova.conf on all controllers (3):
  [filter_scheduler]
  host_subset_size = 4
  max_instances_per_host = 2
  enabled_filters = NumInstancesFilter,RetryFilter,RamFilter,ComputeFilter
  ram_weight_multiplier = 0

  After setting above I had restarted services:
  - openstack-nova-scheduler
  - openstack-nova-api
  - openstack-nova-conductor
  - openstack-nova-novncproxy
  - openstack-nova-consoleauth
  - httpd

  *httpd hosts the nova placement api


  Version-Release number of selected component (if applicable):
  OpenStack Ocata
  python-nova-15.0.2-1.el7ost.noarch
  openstack-nova-cert-15.0.2-1.el7ost.noarch
  openstack-nova-console-15.0.2-1.el7ost.noarch
  puppet-nova-10.4.0-3.el7ost.noarch
  openstack-nova-novncproxy-15.0.2-1.el7ost.noarch
  openstack-nova-placement-api-15.0.2-1.el7ost.noarch
  python-novaclient-7.1.0-1.el7ost.noarch
  openstack-nova-common-15.0.2-1.el7ost.noarch
  openstack-nova-scheduler-15.0.2-1.el7ost.noarch
  openstack-nova-conductor-15.0.2-1.el7ost.noarch
  openstack-nova-compute-15.0.2-1.el7ost.noarch
  openstack-nova-api-15.0.2-1.el7ost.noarch

  
  How reproducible:
  Produced the above result once. Unsure if it reproduces every single
  time, due to time limitations on the testbed.

  Steps to Reproduce:
  1. Set max_instances_per_host to 2 and set enabled_filters to include
     NumInstancesFilter
  2. Restart nova services, attempt to boot more than
     max_instances_per_host*$HOST_COUNT instances, and witness more
     instances than should be possible

  Actual results:
  Three hosts had > max_instances_per_host

  Expected results:
  Only 62 instances to be booted

  Additional info:

  Perhaps I configured something wrong with Nova?
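
  For reference, the filter's check is a point-in-time comparison of
  the host's tracked instance count; a paraphrased sketch (not nova's
  verbatim source). If host_state.num_instances is stale when the
  scheduler evaluates a host, a host already at the limit can still
  pass, which would be consistent with the overshoot seen here:

      # Paraphrased sketch of nova's NumInstancesFilter; not verbatim.
      class NumInstancesFilterSketch(object):
          def __init__(self, max_instances_per_host):
              self.max_instances_per_host = max_instances_per_host

          def host_passes(self, host_state):
              # Passes while the tracked count is below the limit; a
              # stale count lets an at-limit host through.
              return host_state.num_instances < self.max_instances_per_host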

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1685010/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1694591] Re: Horizon gives 401 authorization error after oidc configuration

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1694591

Title:
  Horizon gives 401 authorization error after oidc configuration

Status in OpenStack Identity (keystone):
  Expired

Bug description:
  I have configured OIDC with keystone.
  I have followed the steps mentioned in the official documentation,
  but when I try to log into horizon, I get a 401 error:
  {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}

  The OIDC configuration is as shown below:
  # Configure OIDC
  OIDCClaimPrefix "OIDC-"
  OIDCResponseType "id_token"
  OIDCScope "openid email profile"
  OIDCProviderMetadataURL https://accounts.google.com/.well-known/openid-configuration
  OIDCClientID 
  OIDCClientSecret 
  OIDCCryptoPassphrase openstack
  OIDCRedirectURI http:///identity/v3/OS-FEDERATION/identity_providers/myidp/protocols/mapped/auth
  OIDCRedirectURI http:///identity/v3/auth/OS-FEDERATION/websso
  OIDCRedirectURI http:///identity/v3/auth/OS-FEDERATION/identity_providers/myidp/protocols/mapped/websso

  # For keystone
  
AuthType openid-connect
Require valid-user
LogLevel debug
  

  # For horizon
  
AuthType openid-connect
Require valid-user
  
  
AuthType openid-connect
Require valid-user
  

  
  source accr/admin/admin
  export OS_IDENTITY_API_VERSION=3
  openstack domain create federated_domain
  openstack group create federated_users
  openstack role add --group federated_users --domain federated_domain admin
  openstack identity provider create --remote-id https://accounts.google.com 
myidp

  export remote_type=REMOTE_USER
  export remote_type=HTTP_OIDC_EMAIL
  cat > rules.json 
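
  The contents of rules.json are truncated above. For illustration
  only, a federation mapping of the usual shape can be generated as
  below; the attribute choices are assumptions, not the reporter's
  actual file:

      import json

      # Hypothetical mapping: place users identified by HTTP_OIDC_EMAIL
      # into the federated_users group created above.
      rules = [{
          "local": [
              {"user": {"name": "{0}"}},
              {"group": {"name": "federated_users",
                         "domain": {"name": "federated_domain"}}},
          ],
          "remote": [{"type": "HTTP_OIDC_EMAIL"}],
      }]
      with open("rules.json", "w") as f:
          json.dump(rules, f, indent=2)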

[Yahoo-eng-team] [Bug 1688536] Re: Backspace and enter not working in Instance Console

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1688536

Title:
  Backspace and enter not working in Instance Console

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Hi,
  The Instance Console won't accept backspace, arrow, delete or enter
  keystrokes. Latest OpenStack; tested on IE, Chrome and FF.
  PS: I am totally new to OpenStack, so just starting out learning it.
  Thanks

  OpenStack release v16.0.0
  Installed using https://docs.openstack.org/developer/devstack/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1688536/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1676737] Re: nova list error when cloud has many instances and some instances are being deleted.

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1676737

Title:
  nova list error when cloud has many instances and some instances are
  being deleted.

Status in OpenStack Compute (nova):
  Expired

Bug description:
  1. The OpenStack cloud has many instances.

  2. I delete some instances. At the same time, I execute 'nova list'.

  3. 2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack app_iter = application(self.environ, start_response)
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack     return resp(environ, start_response)
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", line 634, in __call__
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack     return self._call_app(env, start_response)
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", line 554, in _call_app
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack     return self._app(env, _fake_start_response)
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack     return resp(environ, start_response)
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack     return resp(environ, start_response)
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/routes/middleware.py", line 131, in __call__
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack     response = self.app(environ, start_response)
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack     return resp(environ, start_response)
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack     resp = self.call_func(req, *args, **self.kwargs)
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack     return self.func(req, *args, **kwargs)
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py", line 756, in __call__
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack     content_type, body, accept)
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py", line 847, in _process_stack
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack     request, action_args)
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py", line 710, in post_process_extensions
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack     **action_args)
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/contrib/extended_server_attributes.py", line 78, in detail
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack     instances.values())
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 4230, in get_instances_host_statuses
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack     host_status = self.get_instance_host_status(instance)
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 4210, in get_instance_host_status
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack     service = [service for service in instance.services if
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 72, in getter
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack     self.obj_load_attr(name)
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 1164, in obj_load_attr
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack     self._load_generic(attrname)
  2017-03-21 10:19:23.248 3713 TRACE nova.api.openstack   File "/usr/lib/py

[Yahoo-eng-team] [Bug 1609298] Re: libvirt should not require dynamic_ownership off for secure Cinder/Quobyte settings

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1609298

Title:
  libvirt should not require dynamic_ownership off for secure
  Cinder/Quobyte settings

Status in OpenStack Compute (nova):
  Expired

Bug description:
  tl;dr
  When running Quobyte Cinder storage with the nas_secure_file_*
  settings set to true, libvirt is currently required to be configured
  with dynamic_ownership=0 (off). This is not recommended with Nova.

  Expected results: secure settings in Cinder should work with Nova and
  an unmodified dynamic_ownership in the libvirt config
  Actual results: the option in libvirt is required

  
  More detailed:
  When run with dynamic_ownership=1, libvirt changes the ownership of
  guest files to root:root at some point. Running Cinder with the
  Quobyte driver with nas_secure_file_ownership /
  nas_secure_file_permissions = true conflicts with this: in secure
  mode, image files belong to the nova/cinder service users (both in a
  common group) and file permissions are 660 (instead of root:root/666
  as in the insecure mode for these cinder options). When libvirt
  changes the file ownership to root:root, nova/cinder can no longer
  access those files, hurting e.g. snapshots and the like.

  A correction proposal was made by Daniel Berrange at
  https://bugs.launchpad.net/nova/+bug/1597644/comments/22 :
  "[..]If so, a much better approach is to enhance nova so that it can
  set a <seclabel> element against *just* the quobyte backed disks,
  that tells libvirt to skip ownership changes for those disks. That
  way operation of libvirt / QEMU in general will not be affected, thus
  avoiding nasty side-effects such as this console.log problem.[..]"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1609298/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607574] Re: Calling of get_mac_by_pci_address() misses pf_interface=True

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1607574

Title:
  Calling of get_mac_by_pci_address() misses pf_interface=True

Status in OpenStack Compute (nova):
  Expired

Bug description:
  The call to get_mac_by_pci_address() in _populate_pci_mac_address()
  omits the parameter pf_interface=True even though 'pci_dev.dev_type
  == obj_fields.PciDeviceType.SRIOV_PF'; this results in an incorrect
  'dev_path' and an incorrect 'if_name' inside
  get_mac_by_pci_address().
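
  A hedged sketch of the shape of the fix: get_mac_by_pci_address lives
  in nova's pci utils and takes a pf_interface flag that selects the
  PF's own sysfs netdev path; the surrounding wiring here is
  paraphrased, not nova's verbatim patch:

      # Paraphrased sketch; not nova's verbatim patch.
      from nova.objects import fields as obj_fields
      from nova.pci import utils as pci_utils

      def populate_pci_mac_address(pci_dev, vif):
          if pci_dev.dev_type == obj_fields.PciDeviceType.SRIOV_PF:
              # pf_interface=True resolves the PF's own netdev instead
              # of treating the address as a VF
              vif['address'] = pci_utils.get_mac_by_pci_address(
                  pci_dev.address, pf_interface=True)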

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1607574/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583999] Re: BDM is not deleted if an instance booted from volume failed at the scheduling stage

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1583999

Title:
  BDM is not deleted if an instance booted from volume failed at the
  scheduling stage

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Description
  

  I did some tests on boot-from-volume instances. I found that
  sometimes an instance booted from volume fails the evacuate
  operation. After some digging, I found the evacuate operation failed
  because the conductor service returned a wrong block device mapping
  that has no connection info. After some more digging, I found some
  BDMs that should NOT exist because they belong to a deleted instance.
  After some more testing, I found a way to reproduce this problem.

  Steps to reproduce
  
  1, create a volume from an image (image-volume1)
  2, stop or disable all nova-compute services
  3, boot an instance (bfv1) from the volume (image-volume1)
  4, wait until the instance enters the ERROR state
  5, delete the instance we just created
  6, look at the block_device_mapping table of the nova database and
  find that the instance's block device mapping still exists
  7, boot another instance (bfv2) from the volume (image-volume1)
  8, execute the evacuate operation on bfv2
  9, the evacuate operation fails and bfv2 becomes ERROR.

  Environment
  
  * centos 7
  * liberty openstack

  I looked at the master branch code. This bug still exists.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1583999/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592241] Re: memory_mb_used of compute node do not consider reserved_huge_pages

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1592241

Title:
  memory_mb_used of compute node do not consider reserved_huge_pages

Status in OpenStack Compute (nova):
  Expired

Bug description:
  version: master
  question:
  memory_mb_used of a compute node only considers
  CONF.reserved_host_memory_mb. Currently memory_mb_used equals the sum
  of the memory_mb used by all instances plus
  CONF.reserved_host_memory_mb, but it does not take
  CONF.reserved_huge_pages into account.
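
  A hedged sketch of what accounting for the reservation could look
  like; CONF.reserved_huge_pages holds node/size/count mappings with
  the page size in KiB, and the aggregation below is an assumption, not
  nova's implementation:

      # Hedged sketch; not nova's implementation.
      def reserved_hugepages_mb(reserved_huge_pages):
          # each entry looks like {'node': '0', 'size': '2048', 'count': '64'}
          total_kb = sum(int(e['size']) * int(e['count'])
                         for e in reserved_huge_pages)
          return total_kb // 1024

      # memory_mb_used would then be:
      #   instances_memory_mb + CONF.reserved_host_memory_mb
      #   + reserved_hugepages_mb(CONF.reserved_huge_pages)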

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1592241/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582543] Re: Pre live-migration failure cannot roll back source connection information

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1582543

Title:
  Pre live-migration failure cannot roll back source connection
  information

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Description:
  Boot a vm from volume. During pre_live_migration the bdm
  connection_info is updated to the dest connection_info. So if
  pre_live_migration fails, _rollback_live_migration should write the
  source host connection_info back to the bdm table; otherwise the
  virtual machine cannot work properly after the failed migration.
  Steps to reproduce:
  1. Boot a vm from volume.
  2. Arrange for pre_live_migration to fail.
  3. Run nova live-migration vm.
  4. The vm looks fine, but if you hard reboot it, the vm will be
  broken.

  Expected result:
  After the vm live-migration fails, the vm should still work.

  Actual result:
  The vm's bdm connection_info was updated to the dest information, but
  the virsh process is still on the source host. So the vm's
  hard-reboot, stop and start actions do not work.
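
  A hedged sketch of the suggested rollback: re-resolve the connection
  info against the source host's connector and save it back to the BDM.
  The helper below is illustrative wiring, not nova's actual patch:

      # Hedged sketch; not nova's actual patch.
      from oslo_serialization import jsonutils

      def rollback_bdm_connection_info(context, volume_api, driver,
                                       instance, bdms):
          src_connector = driver.get_volume_connector(instance)
          for bdm in bdms:
              if not bdm.is_volume:
                  continue
              # Point the BDM row back at the source host.
              conn_info = volume_api.initialize_connection(
                  context, bdm.volume_id, src_connector)
              bdm.connection_info = jsonutils.dumps(conn_info)
              bdm.save()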

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1582543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1587702] Re: Nova doesn't check service type before deleting compute service

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1587702

Title:
  Nova doesn't check service type before deleting compute service

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Nova service-delete currently uses the same logic for the compute
  service as for the others. See
  https://github.com/openstack/nova/blame/stable/kilo/nova/db/sqlalchemy/api.py#L446

  As a result, if any other service runs on the same host as a compute
  service, then deleting that service will delete the compute service's
  records as well.
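
  A hedged sketch of the suggested guard: check the service binary
  before cascading the compute-node cleanup. The db helper names here
  are illustrative, not nova's actual API:

      # Hedged sketch; db helper names are illustrative.
      def service_destroy(context, service_id, db):
          service = db.service_get(context, service_id)
          db.service_delete(context, service_id)
          # Only a compute service should take compute_node rows with it.
          if service['binary'] == 'nova-compute':
              db.compute_node_delete_by_host(context, service['host'])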

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1587702/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1579667] Re: deleting a shelved_offloaded server causes a failure in cinder

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1579667

Title:
  deleting a shelved_offloaded server causes a failure in cinder

Status in OpenStack Compute (nova):
  Expired

Bug description:
  When deleting a VM instance in shelved_offloaded state with a volume
  attached, nova passes a fake connector dictionary:

    connector = {'ip': '127.0.0.1', 'initiator': 'iqn.fake'}

  to cinder for terminate connection; this causes a KeyError in cinder
  driver code:
  https://github.com/openstack/nova/blame/master/nova/compute/api.py#L1803

  def _local_cleanup_bdm_volumes(self, bdms, instance, context):
      """The method deletes the bdm records and, if a bdm is a volume, call
      the terminate connection and the detach volume via the Volume API.
      Note that at this point we do not have the information about the
      correct connector so we pass a fake one.
      """
      elevated = context.elevated()
      for bdm in bdms:
          if bdm.is_volume:
              # NOTE(vish): We don't have access to correct volume
              #             connector info, so just pass a fake
              #             connector. This can be improved when we
              #             expose get_volume_connector to rpc.
              connector = {'ip': '127.0.0.1', 'initiator': 'iqn.fake'}
              try:
                  self.volume_api.terminate_connection(context,
                                                       bdm.volume_id,
                                                       connector)
                  self.volume_api.detach(elevated, bdm.volume_id,
                                         instance.uuid)
                  if bdm.delete_on_termination:
                      self.volume_api.delete(context, bdm.volume_id)
              except Exception as exc:
                  err_str = _LW("Ignoring volume cleanup failure due to %s")
                  LOG.warn(err_str % exc, instance=instance)
          bdm.destroy()
  https://github.com/openstack/nova/blame/master/nova/compute/api.py#L1828

  According to my debugging, the connector info for
  terminate_connection is already there (in the bdm object), so Nova
  should build the correct connector info for terminate_connection.

  ==Steps to reproduce

  1. create a server: nova boot 
  2. shelve the server: nova shelve 
  3. delete the server: nova delete 


  Thanks
  Peter

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1579667/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1614148] Re: sanitize_hostname() fails to account for domain part

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1614148

Title:
  sanitize_hostname() fails to account for domain part

Status in OpenStack Compute (nova):
  Expired

Bug description:
  The function nova.utils.sanitize_hostname() sanitizes instance names to
  make them suitable for use as host names. Among other things it contains
  the function truncate_hostname() that truncates host names to a maximum
  length of 63 characters. Unfortunately this truncation does not take
  into account the host names' domain part (DEFAULT/dhcp_domain in
  nova.conf).

  Consequently, a 63 character host name plus a domain part (e.g.
  `.novalocal`) will yield a 73 character net host name passed to
  cloud-init inside the instance, which can cause problems with host name
  setting code (this can prevent instances from deploying properly, see
  https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1432758 ).

  The Heat project has code to handle this kind of problem, but it's more
  of a stopgap measure:

  
https://github.com/openstack/heat/commit/8ac7fa02063386a8eb73380d83261f7174781383

  I think the better place to fix this is Nova. Unlike Heat, Nova knows
  the domain name it uses and can truncate host names enough to leave room
  for the domain name.
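
  A hedged sketch of domain-aware truncation; the helper below is
  illustrative, not nova's code, and assumes the 63-character limit
  applies to the name handed to the guest:

      # Illustrative sketch; not nova's implementation.
      def truncate_hostname(name, domain=None):
          limit = 63
          if domain:
              # leave room for '.' + domain so the resulting FQDN stays
              # within the limit the guest's hostname tooling enforces
              limit = max(limit - len(domain) - 1, 1)
          return name[:limit]

      # truncate_hostname('a' * 63, domain='novalocal') -> 53 characters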

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1614148/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567843] Re: When VM create fails, ports are not unplugged

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1567843

Title:
  When VM create fails, ports are not unplugged

Status in OpenStack Compute (nova):
  Expired

Bug description:
  When VM creation fails (for example when "_create_domain" fails), the
  ports are not unplugged. This bug hits all releases/branches.

  When creating the VM, the network info is updated in the
  "update_instance_cache_with_nw_info" step, but only the data in the
  database is updated; the InstanceInfoCache on the local variable
  (instance) is not updated.

  So if the VM creation fails in "libvirt.driver._create_domain", then
  when the instance is shut down it cannot get the correct network
  information and cannot unplug the vifs correctly; some information is
  left behind.
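
  A hedged sketch of the suggested cleanup: re-fetch the network info
  instead of trusting the stale InstanceInfoCache on the local object.
  get_instance_nw_info and driver.cleanup exist in nova, but the wiring
  here is illustrative:

      # Illustrative sketch; not nova's actual fix.
      def cleanup_after_failed_spawn(self, context, instance):
          # Refresh rather than reuse the stale local cache.
          network_info = self.network_api.get_instance_nw_info(
              context, instance)
          self.driver.cleanup(context, instance, network_info,
                              destroy_vifs=True)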

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1567843/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586577] Re: rebuild instance booted from volume failed

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1586577

Title:
  rebuild instance booted from volume failed

Status in OpenStack Compute (nova):
  Expired

Bug description:
  reproduce:

  With a Ceph-backed cinder volume:

  1. boot an instance from a volume (ImageA)
  2. rebuild this instance with ImageB

  result:

  The vm stays as it was, with ImageA and all the old data in it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1586577/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596535] Re: Cannot attach a DHCP network to VM

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1596535

Title:
  Cannot attach a DHCP network to VM

Status in OpenStack Compute (nova):
  Expired

Bug description:
  When a network is created and an attempt is made to attach a network
  interface to a VM, it fails with an error:
  2016-06-27 03:26:14.857 32668 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 600, in allocate_for_instance
  2016-06-27 03:26:14.857 32668 ERROR nova.api.openstack.extensions     raise exception.SecurityGroupCannotBeApplied()

  Debugging, the code that breaks is:
  port_security_enabled = network.get('port_security_enabled', True)

  and network is:
  (Pdb) p network
  {u'status': u'ACTIVE', u'subnets': [], u'availability_zone_hints': [], u'availability_zones': [], u'name': u'DHCP', u'provider:physical_network': u'default0', u'admin_state_up': True, u'tenant_id': u'500dc4679e6f4063a47ac3c17728085f', u'created_at': u'2016-06-27T07:25:55', u'tags': [], u'updated_at': u'2016-06-27T07:25:55', u'provider:segmentation_id': 300, u'ipv6_address_scope': None, u'router:external': False, u'ipv4_address_scope': None, u'id': u'815b219c-272e-4cb6-8711-c17df5e0894e', u'shared': False, u'provider:network_type': u'vlan', u'mtu': 1500, u'description': u''}
  (Pdb)

  This code returns True in this context because port_security_enabled
  is not present in the network dictionary.

  The code needs to be:
  port_security_enabled = network.get('port_security_enabled')

  I do not see that port_security_enabled is mandatory for a network,
  so the code should handle this scenario.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1596535/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1604384] Re: objects.Migration's status is 'accepted' when vm rebuild fails

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1604384

Title:
  objects.Migration's status is 'accepted' when vm rebuild fails

Status in OpenStack Compute (nova):
  Expired

Bug description:
  If there is no valid host when rebuilding VMs, the status of the
  objects.Migration stays 'accepted'; it should be 'failed'.

  The objects.Migration is created in nova-api, and its initial status
  is 'accepted'. In nova-conductor, when a NoValidHost or
  UnsupportedPolicyException occurs, nothing updates the Migration.
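
  A hedged sketch of the missing handling; the exception and field
  names come from the report, while the wiring is an assumption:

      # Hedged sketch; 'schedule' stands in for the conductor's
      # scheduling call, 'migration' for the objects.Migration record.
      def rebuild_with_migration_status(migration, schedule,
                                        no_valid_host_exc):
          try:
              return schedule()
          except no_valid_host_exc:
              # Mark the migration failed instead of leaving 'accepted'.
              migration.status = 'failed'
              migration.save()
              raise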

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1604384/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590556] Re: race condition with resize causing old resources not to be freed

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1590556

Title:
  race condition with resize causing old resources not to be  free

Status in OpenStack Compute (nova):
  Expired

Bug description:
  While I was working on fixing resize for PCI passthrough [1] I
  noticed the following issue in resize.

  If you are using a small image and you resize-confirm it very fast,
  the old resources are not freed.

  After debugging this issue I found its root cause.


  A good run of resize is as detailed below:

  When doing a resize, _update_usage_from_migration in the resource
  tracker is called twice.

  1.   The first call returns the instance type of the new flavor and
  enters this case:
  https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L718

  2.   Then it puts the migration and the new instance_type into
  tracked_migrations:
  https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L763

  3.   The second call returns the old instance_type and enters this
  case:
  https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L725

  4.   Then it overwrites the old value in tracked_migrations with the
  migration and the old instance type:
  https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L763

  5.   When doing resize-confirm, drop_move_claim is called with the
  old instance type:
  https://github.com/openstack/nova/blob/9a05d38f48ef0f630c5e49e332075b273cee38b9/nova/compute/manager.py#L3369

  6.   drop_move_claim compares the instance_type[id] from
  tracked_migrations to the instance_type.id (which is the old one)

  7.   And because they are equal, it removes the old resource usage:
  https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L315-L328

  But with a small image like CirrOS, doing the resize-confirm fast
  means the second call of _update_usage_from_migration never gets
  executed.

  The result is that when we enter drop_move_claim it compares against
  the new instance_type, and this expression is false:
  https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L314

  This means that this code block is not executed:
  https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L315-L326
  and therefore the old resources are not freed.
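
  The comparison in question, paraphrased as a standalone sketch (not
  nova's verbatim code): usage is only freed when the tracked instance
  type still matches the one passed in.

      # Paraphrased sketch of the guard in drop_move_claim.
      def drop_move_claim(tracked_migrations, instance_uuid,
                          instance_type, free_usage):
          migration, itype = tracked_migrations.get(instance_uuid,
                                                    (None, None))
          if itype is not None and instance_type['id'] == itype['id']:
              # Skipped when the second update never ran, so the old
              # usage is never freed.
              free_usage(itype)
              tracked_migrations.pop(instance_uuid)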

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1590556/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552265] Re: can't delete an error instance (boot from volume)

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1552265

Title:
  can't delete an error instance (boot from volume)

Status in OpenStack Compute (nova):
  Expired

Bug description:
  The reproduce steps:

  I create a volume from an image (the cinder backend is HDS FC
  Storage).

  Then I boot an instance from the volume; the call path is "nova-api
  => nova-compute => cinder-api => cinder-volume". Because the HDS
  driver in cinder-volume is slow in my env, nova-compute times out and
  sets the status of the instance to "error".

  Then I try to delete the instance. Because the value of 'os-extended-
  volumes:volumes_attached' on the instance is the volume id, nova-
  compute calls the detach-volume API in cinder-api. Because the status
  of the volume in the cinder database is 'available', cinder-api
  raises a 'VolumeAttachmentNotFound' exception. Finally, I fail to
  delete the error instance.

  Solution:

  nova-compute needs to check the status of the volume before calling
  the detach-volume api in cinder-api.
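
  A hedged sketch of that check; the volume statuses are cinder's, and
  the guard follows the reporter's suggestion rather than merged nova
  code:

      # Hedged sketch of the suggested guard; not merged nova code.
      def detach_if_attached(context, volume_api, instance, bdm):
          volume = volume_api.get(context, bdm.volume_id)
          if volume['status'] in ('in-use', 'detaching'):
              volume_api.detach(context, bdm.volume_id, instance.uuid)
          # An 'available' volume was never attached from cinder's
          # point of view, so there is nothing to detach.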

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1552265/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540526] Re: Too many lazy-loads in predictable situations

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1540526

Title:
  Too many lazy-loads in predictable situations

Status in OpenStack Compute (nova):
  Expired

Bug description:
  During a normal tempest run, way (way) too many object lazy-loads are
  being triggered, which causes extra RPC and database traffic. In a
  given tempest run, we should be able to pretty much prevent any lazy-
  loads in that predictable situation. The only case where we might want
  to have some is where we are iterating objects and conditionally
  taking action that needs to load extra information.

  On a random devstack-tempest job run sampled on 1-Feb-2016, a lot of
  lazy loads were seen:

grep 'Lazy-loading' screen-n-cpu.txt.gz  -c
624

  We should be able to vastly reduce this number without much work.
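
  The usual way to avoid these lazy-loads is to request the needed
  attributes up front via expected_attrs; a minimal sketch, with an
  illustrative attribute list:

      # Minimal sketch; the expected_attrs list is illustrative.
      from nova import objects

      def list_instances_preloaded(ctxt, filters):
          return objects.InstanceList.get_by_filters(
              ctxt, filters,
              expected_attrs=['info_cache', 'system_metadata'])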

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1540526/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1551154] Re: Snapshot of VM with SRIOV port fails if host PF is down

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1551154

Title:
  Snapshot of VM with SRIOV port fails if host PF is down

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Scenario:
  
  1. Create a VM with SRIOV port --> the vNIC state should be UP
  2. On the VM host, shutdown the PF (i.e. ifconfig ens1f1 down) --> The vNIC 
state should be DOWN
  3. Create a snapshot of this suboptimal VM

  Expected behavior:
  
  The snapshot creation process begins; after a while the snapshot
  succeeds, and the VM should be up without the vNIC (if the PF is
  still down)

  Actual behavior:
  ---
  The snapshot creation process begins; after a while the snapshot
  fails and is deleted. The VM is up without the vNIC (since the PF is
  still down)

  Analysis:
  ---
  From nova-compute.log

  2016-02-29 07:04:05.205 34455 WARNING nova.virt.libvirt.driver [req-3f00299f-1c82-4bb4-9283-81dc1d11383d 6dc69ac7e48549e0a6f0577e870a096e 7e5da331cb6342fca3688ea4b4df01f4 - - -] Performing standard snapshot because direct snapshot failed: Image cd938ad4-6e0a-46f3-bc1c-d94644b3fef5 is unacceptable: direct_snapshot() is not implemented
  2016-02-29 07:07:43.834 34455 ERROR oslo_messaging.rpc.dispatcher [req-3f00299f-1c82-4bb4-9283-81dc1d11383d 6dc69ac7e48549e0a6f0577e870a096e 7e5da331cb6342fca3688ea4b4df01f4 - - -] Exception during message handling: internal error: Unable to configure VF 62 of PF 'ens1f1' because the PF is not online. Please change host network config to put the PF online.
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6917, in snapshot_instance
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher     return self.manager.snapshot_instance(ctxt, image_id, instance)
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher     payload)
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher     return f(self, context, *args, **kw)
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 341, in decorated_function
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher     LOG.warning(msg, e, instance_uuid=instance_uuid)
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 312, in decorated_function
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 369, in decorated_function
  2016-02-29 07:07:43.834 34455 TRACE oslo_messaging.rpc.dispatcher     kwargs['instance'], e, sys.exc_info())
  2016-02-29 07:07:43.834

[Yahoo-eng-team] [Bug 1535643] Re: rollback on destination after failed live migration is not called always

2017-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1535643

Title:
  rollback on destination after failed live migration is not called
  always

Status in OpenStack Compute (nova):
  Expired

Bug description:
  1. Live migrate an instance from host A to B.
  2. The live migration fails and _rollback_live_migration is called.
  Here, if remove_volume_connection throws an error,
  rollback_live_migration_at_destination is not called.
  3. This can leave resources not cleaned up on the destination.

  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L5489
  Any exception from remove_volume_connection should be handled, and
  the rollback on the destination should be called.
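
  A hedged sketch of the suggested handling; the rpcapi method names
  match the report, but the argument wiring is illustrative:

      # Hedged sketch; argument wiring is illustrative.
      import logging

      LOG = logging.getLogger(__name__)

      def rollback_live_migration(self, context, instance, bdms, dest):
          for bdm in bdms:
              if not bdm.is_volume:
                  continue
              try:
                  self.compute_rpcapi.remove_volume_connection(
                      context, instance, bdm.volume_id, dest)
              except Exception:
                  LOG.exception('remove_volume_connection failed; '
                                'continuing rollback')
          # Always clean up the destination, even if volume cleanup
          # failed above.
          self.compute_rpcapi.rollback_live_migration_at_destination(
              context, instance, dest)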

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1535643/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp