[Yahoo-eng-team] [Bug 1746217] [NEW] enabled vgpu types example is wrong

2018-01-30 Thread Naichuan Sun
Public bug reported:

The `enabled_vgpu_types` option is set in nova.conf to define the enabled vGPU type(s) of the current host. The example is wrong; it should be
 [devices]
 enabled_vgpu_types = ['GRID K100', 'Intel GVT-g', 'MxGPU.2', 'nvidia-11']
See nova/conf/devices.py for the example.
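
For reference, a minimal sketch (assumed names, not a copy of nova/conf/devices.py) of how a ListOpt in the [devices] group is typically registered and read with oslo.config:

    # Minimal sketch (assumed names; see nova/conf/devices.py for the real
    # definition): registering and reading a ListOpt in the [devices] group.
    from oslo_config import cfg

    devices_group = cfg.OptGroup(name='devices',
                                 title='physical or virtual device options')
    vgpu_opts = [
        cfg.ListOpt('enabled_vgpu_types',
                    default=[],
                    help='The vGPU types enabled in the compute node.'),
    ]

    CONF = cfg.CONF
    CONF.register_group(devices_group)
    CONF.register_opts(vgpu_opts, group=devices_group)

    if __name__ == '__main__':
        # e.g. python sketch.py --config-file /etc/nova/nova.conf
        CONF(project='nova')
        print(CONF.devices.enabled_vgpu_types)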

** Affects: nova
 Importance: Undecided
 Assignee: Naichuan Sun (naichuans)
 Status: In Progress


** Tags: vgpu

** Description changed:

- `enabled_vgpu_types` flag has been set in nova.conf to set enabled vgpu type of current host. The example is wrong, it should be
-  [devices]
-  enabled_vgpu_types = ['GRID K100', 'Intel GVT-g', 'MxGPU.2', 'nvidia-11']
+ `enabled_vgpu_types` flag has been set in nova.conf to set enabled vgpu type of current host. The example is wrong, it should be
+  [devices]
+  enabled_vgpu_types = ['GRID K100', 'Intel GVT-g', 'MxGPU.2', 'nvidia-11']
+ Check nova/conf/devices.py to see the example.

** Tags added: vgpu

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1746217

Title:
  enabled vgpu types example is wrong

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The `enabled_vgpu_types` option is set in nova.conf to define the enabled vGPU type(s) of the current host. The example is wrong; it should be
   [devices]
   enabled_vgpu_types = ['GRID K100', 'Intel GVT-g', 'MxGPU.2', 'nvidia-11']
  See nova/conf/devices.py for the example.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1746217/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709236] [NEW] Live migration failed in openstack on xenserver

2017-08-07 Thread Naichuan Sun
Public bug reported:

Live migration failed on XenServer with an InstanceActionNotFound error:
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server [None req-a06c9561-0458-43c6-b767-08bf67e38b07 admin admin] Exception during message handling: InstanceActionNotFound: Action for request_id req-a06c9561-0458-43c6-b767-08bf67e38b07 on instance 8a8ed9ad-2fb8-46d4-bee2-9d947e2d3e58 not found
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 160, in _process_incoming
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _do_dispatch
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/utils.py", line 863, in decorated_function
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server with EventReporter(context, event_name, instance_uuid):
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/utils.py", line 834, in __enter__
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server self.context, uuid, self.event_name, want_result=False)
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 184, in wrapper
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server result = fn(cls, context, *args, **kwargs)
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/objects/instance_action.py", line 169, in event_start
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server db_event = db.action_event_start(context, values)
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/db/api.py", line 1958, in action_event_start
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server return IMPL.action_event_start(context, values)
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 250, in wrapped
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server return f(context, *args, **kwargs)
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 6155, in action_event_start
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server instance_uuid=values['instance_uuid'])
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server InstanceActionNotFound: Action for request_id req-a06c9561-0458-43c6-b767-08bf67e38b07 on instance 8a8ed9ad-2fb8-46d4-bee2-9d947e2d3e58 not found
Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server
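
For context, a stripped-down sketch of the failing path in the traceback above (assumed names and an in-memory lookup, not nova's actual compute/utils.py or DB layer): the conductor handler runs inside an EventReporter-style context manager whose __enter__ records an action event, and the lookup raises InstanceActionNotFound when no parent InstanceAction row exists for that request_id/instance.

    # Simplified sketch of the failure mode (assumed names, in-memory "DB"):
    class InstanceActionNotFound(Exception):
        pass

    class EventReporter(object):
        """Record the start of an action event; fails if the parent
        InstanceAction record for this request_id/instance is missing."""

        def __init__(self, actions_db, request_id, instance_uuid, event_name):
            self.actions_db = actions_db  # e.g. dict keyed by (request_id, uuid)
            self.request_id = request_id
            self.instance_uuid = instance_uuid
            self.event_name = event_name

        def __enter__(self):
            # Rough equivalent of InstanceActionEvent.event_start() ->
            # db.action_event_start(): the parent action row must exist.
            if (self.request_id, self.instance_uuid) not in self.actions_db:
                raise InstanceActionNotFound(
                    'Action for request_id %s on instance %s not found'
                    % (self.request_id, self.instance_uuid))
            return self

        def __exit__(self, exc_type, exc_val, exc_tb):
            return False  # never swallow exceptions

    # Usage: with an empty "actions" table the live-migration handler fails
    # exactly like the conductor log above.
    # with EventReporter({}, 'req-a06c9561-...', '8a8ed9ad-...', 'live_migration'):
    #     pass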

** Affects: nova
 Importance: Undecided
 Status: New

** Project changed: linuxmint => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1709236

Title:
  Live migration failed in openstack on xenserver

Status in OpenStack Compute (nova):
  New

Bug description:
  Live migration failed on XenServer with an InstanceActionNotFound error:
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server [None req-a06c9561-0458-43c6-b767-08bf67e38b07 admin admin] Exception during message handling: InstanceActionNotFound: Action for request_id req-a06c9561-0458-43c6-b767-08bf67e38b07 on instance 8a8ed9ad-2fb8-46d4-bee2-9d947e2d3e58 not found
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server Traceback (most recent call last):
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR oslo_messaging.rpc.server   File

[Yahoo-eng-team] [Bug 1700926] Re: Exception Error logs shown in Citrix XenServer CI

2017-06-28 Thread Naichuan Sun
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1700926

Title:
  Exception Error logs shown in Citrix XenServer CI

Status in OpenStack Compute (nova):
  New
Status in os-xenapi:
  In Progress

Bug description:
  From a patch, https://review.openstack.org/#/c/459485/, which passed our XenServer CI, we can see there are some errors in the nova log:
  http://dd6b71949550285df7dc-dda4e480e005aaa13ec303551d2d8155.r49.cf1.rackcdn.com/85/459485/4/check/dsvm-tempest-neutron-network/61927a4/logs/screen-n-cpu.txt.gz
  We need to figure out whether this is a real problem or just a configuration problem.
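
  One quick way to separate the two (a hypothetical diagnostic sketch, not part of nova or os-xenapi; the endpoint and image id come from the log excerpt below, the token is a placeholder) is to ask glance directly whether the image that dom0's upload_vhd2 plugin is pushing to still exists and is reachable from the CI network:

    # Hypothetical diagnostic, not project code: probe the glance endpoint and
    # image id seen in the 404 below.  TOKEN is a placeholder.
    import requests

    GLANCE = 'http://192.168.33.1:9292'
    IMAGE_ID = '244e93c0-28f1-4951-b922-665908bef7d5'
    TOKEN = '<keystone token>'  # e.g. from `openstack token issue`

    # A 404 here could mean the image record was already deleted (e.g. by the
    # negative tempest test), i.e. expected test behaviour rather than a
    # broken deployment; a connection error would point at configuration.
    resp = requests.get('%s/v2/images/%s' % (GLANCE, IMAGE_ID),
                        headers={'X-Auth-Token': TOKEN}, timeout=10)
    print(resp.status_code, resp.reason)

  The log excerpt in question: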
  2017-04-25 03:41:04.891 5135 DEBUG nova.virt.xenapi.vm_utils [req-600bd486-ca26-4a98-bcae-9ed50cea1f6b tempest-ImagesOneServerNegativeTestJSON-218904050 tempest-ImagesOneServerNegativeTestJSON-218904050] VHD 00e1b23b-9926-477a-a766-6e792d1a435d has parent 5f75df8a-b35f-447b-a731-7b1e12297271 _get_vhd_parent_uuid /opt/stack/new/nova/nova/virt/xenapi/vm_utils.py:1979
  2017-04-25 03:41:04.895 5135 DEBUG nova.virt.xenapi.vm_utils [req-600bd486-ca26-4a98-bcae-9ed50cea1f6b tempest-ImagesOneServerNegativeTestJSON-218904050 tempest-ImagesOneServerNegativeTestJSON-218904050] VHD 5f75df8a-b35f-447b-a731-7b1e12297271 has parent 1afecff1-7ee0-405a-9f87-5e2b674657b2 _get_vhd_parent_uuid /opt/stack/new/nova/nova/virt/xenapi/vm_utils.py:1979
  2017-04-25 03:41:04.988 5135 DEBUG os_xenapi.client.session [req-600bd486-ca26-4a98-bcae-9ed50cea1f6b tempest-ImagesOneServerNegativeTestJSON-218904050 tempest-ImagesOneServerNegativeTestJSON-218904050] glance.py.upload_vhd2 attempt 1/1, callback_result: http://192.168.33.1:9292 call_plugin_serialized_with_retry /opt/stack/new/os-xenapi/os_xenapi/client/session.py:242
  2017-04-25 03:41:05.233 5135 DEBUG os_xenapi.client.session [req-600bd486-ca26-4a98-bcae-9ed50cea1f6b tempest-ImagesOneServerNegativeTestJSON-218904050 tempest-ImagesOneServerNegativeTestJSON-218904050] Got exception: ['XENAPI_PLUGIN_FAILURE', 'upload_vhd2', 'PluginError', 'Got Permanent Error response [404] while uploading image [244e93c0-28f1-4951-b922-665908bef7d5] to glance http://192.168.33.1:9292/v2/images/244e93c0-28f1-4951-b922-665908bef7d5/file'] _unwrap_plugin_exceptions /opt/stack/new/os-xenapi/os_xenapi/client/session.py:295
  2017-04-25 03:41:05.448 5135 DEBUG nova.compute.manager [req-600bd486-ca26-4a98-bcae-9ed50cea1f6b tempest-ImagesOneServerNegativeTestJSON-218904050 tempest-ImagesOneServerNegativeTestJSON-218904050] [instance: 92c76bb9-8dfb-4b35-9ee4-137ec23b82d5] Cleaning up image 244e93c0-28f1-4951-b922-665908bef7d5 decorated_function /opt/stack/new/nova/nova/compute/manager.py:235
  2017-04-25 03:41:05.448 5135 ERROR nova.compute.manager [instance: 92c76bb9-8dfb-4b35-9ee4-137ec23b82d5] Traceback (most recent call last):
  2017-04-25 03:41:05.448 5135 ERROR nova.compute.manager [instance: 92c76bb9-8dfb-4b35-9ee4-137ec23b82d5] File "/opt/stack/new/nova/nova/compute/manager.py", line 231, in decorated_function
  2017-04-25 03:41:05.448 5135 ERROR nova.compute.manager [instance: 92c76bb9-8dfb-4b35-9ee4-137ec23b82d5] *args, **kwargs)
  2017-04-25 03:41:05.448 5135 ERROR nova.compute.manager [instance: 92c76bb9-8dfb-4b35-9ee4-137ec23b82d5] File "/opt/stack/new/nova/nova/compute/manager.py", line 3128, in snapshot_instance
  2017-04-25 03:41:05.448 5135 ERROR nova.compute.manager [instance: 92c76bb9-8dfb-4b35-9ee4-137ec23b82d5] task_states.IMAGE_SNAPSHOT)
  2017-04-25 03:41:05.448 5135 ERROR nova.compute.manager [instance: 92c76bb9-8dfb-4b35-9ee4-137ec23b82d5] File "/opt/stack/new/nova/nova/compute/manager.py", line 3160, in _snapshot_instance
  2017-04-25 03:41:05.448 5135 ERROR nova.compute.manager [instance: 92c76bb9-8dfb-4b35-9ee4-137ec23b82d5] update_task_state)
  2017-04-25 03:41:05.448 5135 ERROR nova.compute.manager [instance: 92c76bb9-8dfb-4b35-9ee4-137ec23b82d5] File "/opt/stack/new/nova/nova/virt/xenapi/driver.py", line 198, in snapshot
  2017-04-25 03:41:05.448 5135 ERROR nova.compute.manager [instance: 92c76bb9-8dfb-4b35-9ee4-137ec23b82d5] self._vmops.snapshot(context, instance, image_id, update_task_state)
  2017-04-25 03:41:05.448 5135 ERROR nova.compute.manager [instance: 92c76bb9-8dfb-4b35-9ee4-137ec23b82d5] File "/opt/stack/new/nova/nova/virt/xenapi/vmops.py", line 928, in snapshot
  2017-04-25 03:41:05.448 5135 ERROR nova.compute.manager [instance: 92c76bb9-8dfb-4b35-9ee4-137ec23b82d5] vdi_uuids,
  2017-04-25 03:41:05.448 5135 ERROR nova.compute.manager [instance: 92c76bb9-8dfb-4b35-9ee4-137ec23b82d5] File "/opt/stack/new/nova/nova/virt/xenapi/image/glance.py", line 94, in upload_image
  2017-04-25 03:41:05.448 5135 ERROR nova.compute.manager [instance: 92c76bb9-8dfb-4b35-9ee4-137ec23b82d5] 'upload_vhd2', params)
  2017-04-25 03:41:05.448 5135 ERROR