[Yahoo-eng-team] [Bug 1632247] [NEW] The command "nova list --all-tenants" query is slow

2016-10-11 Thread Tina Kevin
Public bug reported:

Description
===
There are 15 instances. I executed the command "nova --debug list --all-tenants",
but it took more than 40 seconds. Reading the nova API code, the client sends a GET
request and nova reads the instance_faults table for the detail information. The
instance_faults table has tens of thousands of records, with many records per instance.

   GET /v2/433288e1244046a9bd306658b732dded/servers/detail

I think the instance_faults table needs to be optimized. A large number of records
in the instance_faults table are useless; keeping only the last three records per
instance and deleting the rest should be enough.

Are there any other ways to optimize this?
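
As a rough sketch of the clean-up idea (this is not an existing nova tool; the
connection URL and the KEEP value are assumptions, so try it on a copy of the
database first), a maintenance script could drop everything but the newest few
fault rows per instance:

    # Hypothetical clean-up sketch: keep only the newest KEEP fault rows per
    # instance and delete the rest straight from the nova database.
    from sqlalchemy import create_engine, text

    DB_URL = "mysql+pymysql://nova:secret@controller/nova"   # assumed credentials
    KEEP = 3

    engine = create_engine(DB_URL)
    with engine.begin() as conn:
        instances = conn.execute(
            text("SELECT DISTINCT instance_uuid FROM instance_faults"))
        for (uuid,) in instances:
            rows = conn.execute(
                text("SELECT id FROM instance_faults "
                     "WHERE instance_uuid = :uuid ORDER BY created_at DESC"),
                {"uuid": uuid}).fetchall()
            # Everything beyond the KEEP newest rows for this instance goes away.
            for (fault_id,) in rows[KEEP:]:
                conn.execute(
                    text("DELETE FROM instance_faults WHERE id = :id"),
                    {"id": fault_id})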

Steps to reproduce
==
A chronological list of steps which will reproduce the issue:
* Execute the command: nova --debug list --all-tenants

A list of openstack client commands (with correct argument value)
$ nova --debug list --all-tenants


Expected result
===
I expect the command to return within 10 seconds.

Actual result
=
But the query took more than 40 seconds.

Environment
===
1. version 
Mitaka

2. Which hypervisor did you use?
    Libvirt + KVM

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: all-tenants list slow

** Tags added: all-tenants list slow

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1632247

Title:
   The command "nova list --all-tenants" query is slow

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  There are 15 instances. I executed the command "nova --debug list --all-tenants",
  but it took more than 40 seconds. Reading the nova API code, the client sends a GET
  request and nova reads the instance_faults table for the detail information. The
  instance_faults table has tens of thousands of records, with many records per instance.

 GET /v2/433288e1244046a9bd306658b732dded/servers/detail

  I think the instance_faults table needs to be optimized. A large number of records
  in the instance_faults table are useless; keeping only the last three records per
  instance and deleting the rest should be enough.

  Are there any other ways to optimize this?

  Steps to reproduce
  ==
  A chronological list of steps which will reproduce the issue:
  * Execute the command: nova --debug list --all-tenants

  A list of openstack client commands (with correct argument value)
  $ nova --debug list --all-tenants

  
  Expected result
  ===
  I expect the command to return within 10 seconds.

  Actual result
  =
  But the query took more than 40 seconds.

  Environment
  ===
  1. version 
  Mitaka

  2. Which hypervisor did you use?
      Libvirt + KVM

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1632247/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619161] [NEW] flavor-list needs to return the extra-specs information directly

2016-09-01 Thread Tina Kevin
Public bug reported:

Description
===
The command "nova flavor-list --extra-specs" can show the extra-specs
information, but with --debug I can see that the client issues a separate
HTTP GET request to fetch the extra_specs of each flavor.

As the number of flavors grows, so does the number of GET requests, which
hurts the performance of the query.

I think the response that lists the flavors should contain the extra_specs
information directly.
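
For illustration, the N+1 request pattern visible in the debug output below
can be reproduced with a few lines of client code. The endpoint URL comes
from the log; the token is a placeholder:

    # Sketch of the current behaviour: one extra GET per flavor just to read
    # its extra_specs (assumes a valid token and a reachable endpoint).
    import requests

    ENDPOINT = "http://10.43.239.62:8774/v2/ed952123e0cc4ced9e581a7710bc24d5"
    HEADERS = {"X-Auth-Token": "<token>", "Accept": "application/json"}

    flavors = requests.get(ENDPOINT + "/flavors/detail",
                           headers=HEADERS).json()["flavors"]
    for flavor in flavors:
        # This per-flavor round trip is what the report complains about.
        specs = requests.get(ENDPOINT + "/flavors/%s/os-extra_specs" % flavor["id"],
                             headers=HEADERS).json()["extra_specs"]
        print(flavor["name"], specs)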

Steps to reproduce
==
A chronological list of steps which will reproduce the issue:
* I performed the command:
  $ nova --debug flavor-list --extra-specs

Environment
===
1. Exact version of OpenStack
 Mitaka


Logs & Configs
==
The debug info:
DEBUG (session:195) REQ: curl -g -i -X GET 
http://10.43.239.62:8774/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/1/os-extra_specs
 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}ae80c1ee126b1a4464c13c843706cc6a5b1bf259"
DEBUG (connectionpool:368) "GET 
/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/1/os-extra_specs HTTP/1.1" 200 66
DEBUG (session:224) RESP: [200] date: Thu, 01 Sep 2016 06:27:08 GMT connection: 
keep-alive content-type: application/json content-length: 66 
x-compute-request-id: req-15182618-4b28-4c78-87ef-d51f8da309f3 
RESP BODY: {"extra_specs": {}}

DEBUG (session:195) REQ: curl -g -i -X GET 
http://10.43.239.62:8774/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/2/os-extra_specs
 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}ae80c1ee126b1a4464c13c843706cc6a5b1bf259"
DEBUG (connectionpool:368) "GET 
/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/2/os-extra_specs HTTP/1.1" 200 19
DEBUG (session:224) RESP: [200] date: Thu, 01 Sep 2016 06:27:09 GMT connection: 
keep-alive content-type: application/json content-length: 19 
x-compute-request-id: req-b519d74e-ed98-48e9-90be-838287f7e407 
RESP BODY: {"extra_specs": {}}

DEBUG (session:195) REQ: curl -g -i -X GET 
http://10.43.239.62:8774/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/3/os-extra_specs
 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}ae80c1ee126b1a4464c13c843706cc6a5b1bf259"
DEBUG (connectionpool:368) "GET 
/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/3/os-extra_specs HTTP/1.1" 200 19
DEBUG (session:224) RESP: [200] date: Thu, 01 Sep 2016 06:27:09 GMT connection: 
keep-alive content-type: application/json content-length: 19 
x-compute-request-id: req-ad796e53-e8be-4caa-b182-219a1f3e63ca 
RESP BODY: {"extra_specs": {}}

DEBUG (session:195) REQ: curl -g -i -X GET 
http://10.43.239.62:8774/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/97/os-extra_specs
 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}ae80c1ee126b1a4464c13c843706cc6a5b1bf259"
DEBUG (connectionpool:368) "GET 
/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/97/os-extra_specs HTTP/1.1" 200 39
DEBUG (session:224) RESP: [200] date: Thu, 01 Sep 2016 06:27:09 GMT connection: 
keep-alive content-type: application/json content-length: 39 
x-compute-request-id: req-4c8d466e-d013-4549-ae74-8ea4ca578061 
RESP BODY: {"extra_specs": {"hw:numa_nodes": "1"}}

DEBUG (session:195) REQ: curl -g -i -X GET 
http://10.43.239.62:8774/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/99/os-extra_specs
 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}ae80c1ee126b1a4464c13c843706cc6a5b1bf259"
DEBUG (connectionpool:368) "GET 
/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/99/os-extra_specs HTTP/1.1" 200 39
DEBUG (session:224) RESP: [200] date: Thu, 01 Sep 2016 06:27:09 GMT connection: 
keep-alive content-type: application/json content-length: 39 
x-compute-request-id: req-9663e309-b421-45dd-9d6a-43f5a5464eab 
RESP BODY: {"extra_specs": {"hw:numa_nodes": "2"}}

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: flavor

** Tags added: flavor

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619161

Title:
  flavor-list needs to return the extra-specs information directly

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  The command "nova flavor-list --extra-specs" can show the extra-specs
  information, but with --debug I can see that the client issues a separate
  HTTP GET request to fetch the extra_specs of each flavor.

  As the number of flavors grows, so does the number of GET requests, which
  hurts the performance of the query.

  I think the response that lists the flavors should contain the extra_specs
  information directly.

  Steps to reproduce
  ==
  A chronological list of steps which will reproduce the issue:
  * I performed the command:
$ nova --debug flavor-list --extra-specs

  Environment
  ===
  1. Exact 

[Yahoo-eng-team] [Bug 1608467] [NEW] Hypervisor show does not include ram_allocation_ratio

2016-08-01 Thread Tina Kevin
Public bug reported:

Description
===
Now the nova hypervisor-show does not contain ram_allocation_ratio,
cpu_allocation_ratio and disk_allocation_ratio.

I think this is unreasonable: the operator sets the allocation ratios in the
configuration, but cannot see them through hypervisor-show.

Now nova hypervisor-show contains the following fields:
  - cpu_info: cpu_info
  - state: hypervisor_state
  - status: hypervisor_status
  - current_workload: current_workload
  - disk_available_least: disk_available_least
  - host_ip: host_ip
  - free_disk_gb: hypervisor_free_disk_gb
  - free_ram_mb: free_ram_mb
  - hypervisor_hostname: hypervisor_hostname
  - hypervisor_type: hypervisor_type_body
  - hypervisor_version: hypervisor_version
  - id: hypervisor_id_body
  - local_gb: local_gb
  - local_gb_used: local_gb_used
  - memory_mb: memory_mb
  - memory_mb_used: memory_mb_used
  - running_vms: running_vms
  - service: hypervisor_service
  - service.host: host_name_body
  - service.id: service_id_body
  - service.disable_reason: service_disable_reason
  - vcpus: hypervisor_vcpus
  - vcpus_used: hypervisor_vcpus_used
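
For clarity, a hedged sketch of what the proposal amounts to. The existing
keys come from the list above; the three *_allocation_ratio keys are the
proposed additions, and the values shown are only the nova configuration
option defaults, not real output:

    # Illustrative only: a hypervisor-show style payload with the proposed
    # ratio fields added (values mirror the nova config option defaults).
    proposed_hypervisor_show = {
        "hypervisor_hostname": "compute01",
        "memory_mb": 65536,
        "vcpus": 32,
        "local_gb": 1024,
        # ... the remaining existing fields listed above ...
        "cpu_allocation_ratio": 16.0,   # proposed, from CONF.cpu_allocation_ratio
        "ram_allocation_ratio": 1.5,    # proposed, from CONF.ram_allocation_ratio
        "disk_allocation_ratio": 1.0,   # proposed, from CONF.disk_allocation_ratio
    }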

** Affects: nova
 Importance: Undecided
 Assignee: Tina Kevin (song-ruixia)
 Status: New


** Tags: allocation hypervisor-show ratio

** Tags added: allocation hypervisor-show ratio

** Changed in: nova
 Assignee: (unassigned) => Tina Kevin (song-ruixia)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1608467

Title:
  Hypervisor show does not include ram_allocation_ratio

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Now the nova hypervisor-show does not contain ram_allocation_ratio,
  cpu_allocation_ratio and disk_allocation_ratio.

  I think this is unreasonable: the operator sets the allocation ratios in the
  configuration, but cannot see them through hypervisor-show.

  Now nova hypervisor-show contains the following fields:
- cpu_info: cpu_info
- state: hypervisor_state
- status: hypervisor_status
- current_workload: current_workload
- disk_available_least: disk_available_least
- host_ip: host_ip
- free_disk_gb: hypervisor_free_disk_gb
- free_ram_mb: free_ram_mb
- hypervisor_hostname: hypervisor_hostname
- hypervisor_type: hypervisor_type_body
- hypervisor_version: hypervisor_version
- id: hypervisor_id_body
- local_gb: local_gb
- local_gb_used: local_gb_used
- memory_mb: memory_mb
- memory_mb_used: memory_mb_used
- running_vms: running_vms
- service: hypervisor_service
- service.host: host_name_body
- service.id: service_id_body
- service.disable_reason: service_disable_reason
- vcpus: hypervisor_vcpus
- vcpus_used: hypervisor_vcpus_used

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1608467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607996] [NEW] Live migration does not update numa hugepages info in xml

2016-07-29 Thread Tina Kevin
Public bug reported:

Description
===
Live migration does not update the instance NUMA hugepages info in the
libvirt XML. If the NUMA hugepages info of the source host differs from
that of the destination host, the instance cannot start normally on the
destination host, so the live migration fails.
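
To make the XML point concrete, here is a hedged sketch of the kind of
rewrite the destination side would need. <numatune>/<memnode> (whose nodeset
attribute names host NUMA nodes) is standard libvirt domain XML, but the
helper itself is hypothetical and is not nova code:

    # Hypothetical helper, not nova code: re-pin the guest's memory (and thus
    # its hugepage allocations) to the host NUMA node chosen on the
    # destination. In libvirt domain XML the nodeset on <numatune>/<memnode>
    # and <numatune>/<memory> refers to *host* NUMA nodes, which is what
    # becomes stale after the scheduler picks a different node.
    import xml.etree.ElementTree as ET

    def retarget_host_numa_node(domain_xml, dest_host_node):
        root = ET.fromstring(domain_xml)
        numatune = root.find("numatune")
        if numatune is not None:
            for elem in numatune.findall("memnode") + numatune.findall("memory"):
                elem.set("nodeset", str(dest_host_node))
        return ET.tostring(root, encoding="unicode")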

Steps to reproduce
==
A chronological list of steps which will reproduce the issue:
* There are two compute nodes (host1 and host2). Both hosts have the same
  NUMA topology with two NUMA nodes, and each NUMA node has eight CPUs.

* I boot two instances (A and B) onto these compute nodes: instance A lands
  on host1 and instance B on host2. Both instances use the dedicated
  cpu_policy and hugepages, and each has eight CPUs. Instance A is placed on
  NUMA node1 of host1 and instance B on NUMA node1 of host2.

* Then I live migrate instance A. The scheduler selects NUMA node2 of host2,
  but because the NUMA hugepages info in the XML is not updated, the instance
  fails to start on the destination host.

Expected result
===
The live migration of the instance succeeds.

Actual result
=
The live migration of the instance fails because the NUMA hugepages info in
the XML is not updated.

Environment
===
1. Exact version of OpenStack
 Mitaka

2. Which hypervisor did you use?
 Libvirt + KVM

3. Which networking type did you use?
 Neutron with OpenVSwitch

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: hugepages live-migration numa

** Tags added: hugepages live-migration numa

** Description changed:

  Description
  ===
  Live migration is not update instance numa hugepages info in xml.
- if the numa hugepages info of source host is different from the 
+ if the numa hugepages info of source host is different from the
  numa hugepages info of destation host, then instance in destation
  host can not start normally, result in the live-migration is failed.
- 
  
  Steps to reproduce
  ==
  A chronological list of steps which will bring off the
  issue:
- * I boot an instance with dedicated cpu_policy and hugepages
- * then I live migrate the instance
- * then the live-migration action is failed
+ * There are two compute nodes(host1 and host2).
+   The two hosts have same numa topolopy and all have two numa nodes,
+   each numa node has eight cpus.
  
- * There are two compute nodes(host1 and host2). 
-   The two hosts have same numa topolopy and all have two numa nodes, 
-   each numa node has eight cpus.
+ * I boot two instances(A and B) to the compute nodes, instance A
+   is located on host1 and instance B is located on host2.
+   The two instances are all dedicated cpu_policy and use hugepages.
+   Each instance has eight cpus. Instance A is located on the numa node1
+   of host1 and instance B is located on the numa node1 of host2.
  
- * I boot two instances(A and B) to the compute nodes, instance A 
-   is located on host1 and instance B is located on host2.
-   The two instances are all dedicated cpu_policy and use hugepages. 
-   Each instance has eight cpus. Instance A is located on the numa node1 
-   of host1 and instance B is located on the numa node1 of host2.
- 
- * Then I live migrate the instance A, the scheduler selects the numa node2 of host2,
-   but because of the numa hugepages info of xml is not updated, the instance in destation
-   host starts error.
+ * Then I live migrate the instance A, the scheduler selects the numa node2 of host2,
+   but because of the numa hugepages info of xml is not updated, the instance in destation
+   host starts error.
  
  Expected result
  ===
  The live-migration of the instance is success.
  
  Actual result
  =
  The live-migration of the instance is failed.
  The reason is that the numa hugepages info of xml is not updated.
  
  Environment
  ===
- 1. Exact version of OpenStack 
-  Mitaka
+ 1. Exact version of OpenStack
+  Mitaka
  
  2. Which hypervisor did you use?
-  Libvirt + KVM
+  Libvirt + KVM
  
  3. Which networking type did you use?
-  Neutron with OpenVSwitch
+  Neutron with OpenVSwitch

** Description changed:

  Description
  ===
  Live migration is not update instance numa hugepages info in xml.
  if the numa hugepages info of source host is different from the
  numa hugepages info of destation host, then instance in destation
  host can not start normally, result in the live-migration is failed.
  
  Steps to reproduce
  ==
  A chronological list of steps which will bring off the
  issue:
  * There are two compute nodes(host1 and host2).
    The two hosts have same numa topolopy and all have two numa nodes,
    each numa node has eight cpus.
  
  * I boot two instances(A and B) to the compute nodes, instance A
    is located on host1 and instance B is located on host2.
    The two instances are all dedicated 

[Yahoo-eng-team] [Bug 1595083] Re: Instance with powering-off task-state stop failure when the nova-compute restarts

2016-06-23 Thread Tina Kevin
** Changed in: nova
 Assignee: Tina Kevin (song-ruixia) => (unassigned)

** Changed in: nova
Milestone: newton-3 => None

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1595083

Title:
  Instance with powering-off task-state stop failure when the nova-
  compute restarts

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===
  There is an active instance which I stop. While the task_state of the
  instance is powering-off, I restart the nova-compute service; the
  _init_instance function then tries to stop the instance again. The
  expected result is that the stop succeeds and the instance ends up
  SHUTOFF, but instead the retried stop raises an error.
  code:
  if instance.task_state == task_states.POWERING_OFF:
      try:
          LOG.debug("Instance in transitional state %s at start-up "
                    "retrying stop request",
                    instance.task_state, instance=instance)
          self.stop_instance(context, instance)
      except Exception:
          # we don't want that an exception blocks the init_host
          msg = _LE('Failed to stop instance')
          LOG.exception(msg, instance=instance)
      return

  The powering-on task-state also has the same issue.

  Steps to reproduce
  ==
  A chronological list of steps which will reproduce the issue:
  * I boot an instance
     nova boot --flavor 2 --image c61966bd-7969-40af-9f9b-ed282fb25bdf --nic
     net-id=ea2a9eb5-f52e-4822-aa5d-168658e9c383  test_iso8
  * then I stop the instance
     nova stop test_iso8
  * then I restart the nova-compute service while the instance is in the
    powering-off task-state

  Expected result
  ===
  When the nova-compute service restarts, the instance test_iso8 should be stopped.

  Actual result
  =
  After the nova-compute service restarts, the instance test_iso8 is still ACTIVE.
  The nova-compute.log contains the error information.

  Environment
  ===
  1. Exact version of OpenStack you are running.
     Mitaka
  2. Which hypervisor did you use?
     Libvirt + KVM
  3. Which networking type did you use?
      Neutron with OpenVSwitch

  Logs & Configs
  ==
  [root@slot4 ~(keystone_admin)]# nova boot --flavor 2 --image c61966bd-7969-40af-9f9b-ed282fb25bdf --nic net-id=ea2a9eb5-f52e-4822-aa5d-168658e9c383  test_iso8
  [root@slot4 ~(keystone_admin)]# nova stop test_iso8
  Request to stop server test_iso8 has been accepted.

  restart nova-compute service

  [root@slot4 ~(keystone_admin)]# nova list
  +--------------------------------------+-----------+--------+------------+-------------+-----------------+
  | ID                                   | Name      | Status | Task State | Power State | Networks        |
  +--------------------------------------+-----------+--------+------------+-------------+-----------------+
  | 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9 | test_iso8 | ACTIVE | -          | Running     | net01=5.5.5.249 |
  +--------------------------------------+-----------+--------+------------+-------------+-----------------+
  [root@slot4 ~(keystone_admin)]# nova instance-action-list 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9
  +--------+------------------------------------------+---------+------------------------+
  | Action | Request_ID                               | Message | Start_Time             |
  +--------+------------------------------------------+---------+------------------------+
  | create | req-25400b17-6c33-45e5-ae4e-87cba0d0de15 | -       | 2016-06-22T04:40:43.00 |
  | stop   | req-5a7d26eb-79d3-4e69-86a8-d40c02bc6a00 | -       | 2016-06-22T07:08:23.00 |
  +--------+------------------------------------------+---------+------------------------+

  nova-compute.log error information:
  2016-06-22 15:08:29.942 28875 ERROR nova.compute.manager [req-b6bbfa68-334c-4c31-9f6d-4d5523cebc4d - - - - -] [instance: 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9] Failed to stop instance
  2016-06-22 15:08:29.942 28875 TRACE nova.compute.manager [instance: 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9] Traceback (most recent call last):
  2016-06-22 15:08:29.942 28875 TRACE nova.compute.manager [instance: 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1304, in _init_instance
  2016-06-22 15:08:29.942 28875 TRACE nova.compute.manager [instance: 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9] self.stop_instance(context, instance)
  2016-06-22 15:08:29.942 28875 TRACE nova.compute.manager [instance: 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9]   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
  2016-06-22 15:08:29.942 28875 TRACE nov

[Yahoo-eng-team] [Bug 1595083] [NEW] Instance with powering-off task-state stop failure when the nova-compute restarts

2016-06-22 Thread Tina Kevin
8:29.942 28875 TRACE nova.compute.manager [instance: 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9]
2016-06-22 15:08:29.942 28875 TRACE nova.compute.manager [instance: 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9]   File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 163, in wrapper
2016-06-22 15:08:29.942 28875 TRACE nova.compute.manager [instance: 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9] result = fn(cls, context, *args, **kwargs)
2016-06-22 15:08:29.942 28875 TRACE nova.compute.manager [instance: 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9]
2016-06-22 15:08:29.942 28875 TRACE nova.compute.manager [instance: 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9]   File "/usr/lib/python2.7/site-packages/nova/objects/instance_action.py", line 170, in event_start
2016-06-22 15:08:29.942 28875 TRACE nova.compute.manager [instance: 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9] db_event = db.action_event_start(context, values)
2016-06-22 15:08:29.942 28875 TRACE nova.compute.manager [instance: 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9]
2016-06-22 15:08:29.942 28875 TRACE nova.compute.manager [instance: 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9]   File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 1858, in action_event_start
2016-06-22 15:08:29.942 28875 TRACE nova.compute.manager [instance: 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9] return IMPL.action_event_start(context, values)
2016-06-22 15:08:29.942 28875 TRACE nova.compute.manager [instance: 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9]
2016-06-22 15:08:29.942 28875 TRACE nova.compute.manager [instance: 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9]   File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 5748, in action_event_start
2016-06-22 15:08:29.942 28875 TRACE nova.compute.manager [instance: 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9] instance_uuid=values['instance_uuid'])
2016-06-22 15:08:29.942 28875 TRACE nova.compute.manager [instance: 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9]
2016-06-22 15:08:29.942 28875 TRACE nova.compute.manager [instance: 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9] InstanceActionNotFound: Action for request_id req-e8ef400f-ca1b-4252-a2aa-9d749cd57ae4 on instance 40d78f1d-38fc-4bfb-8e7e-a3dfe60398d9 not found

** Affects: nova
 Importance: Undecided
 Assignee: Tina Kevin (song-ruixia)
 Status: New


** Tags: nova-compute powering-off restart service stop

** Description changed:

  Description
  ===
  There is an active instance and stop the instance. When the task_state of the
- instance is powering-off, then I restart the nova-compute services, the _init_instance 
- function will stop the instance again. The expected result is stop instance success and 
+ instance is powering-off, then I restart the nova-compute services, the _init_instance
+ function will stop the instance again. The expected result is stop instance success and
  instance is SHUTOFF, but now stop raise error.
  code:
  if instance.task_state == task_states.POWERING_OFF:
  try:
  LOG.debug("Instance in transitional state %s at start-up "
"retrying stop request",
instance.task_state, instance=instance)
  self.stop_instance(context, instance)
  except Exception:
  # we don't want that an exception blocks the init_host
  msg = _LE('Failed to stop instance')
  LOG.exception(msg, instance=instance)
  return
  
  The powering-on task-state also has the same issue.
  
  Steps to reproduce
  ==
  A chronological list of steps which will bring off the
  issue :
- * I boot an instance 
-nova boot --flavor 2 --image c61966bd-7969-40af-9f9b-ed282fb25bdf --nic 
-net-id=ea2a9eb5-f52e-4822-aa5d-168658e9c383  test_iso8
+ * I boot an instance
+    nova boot --flavor 2 --image c61966bd-7969-40af-9f9b-ed282fb25bdf --nic
+    net-id=ea2a9eb5-f52e-4822-aa5d-168658e9c383  test_iso8
  * then I stop the instance
-nova stop test_iso8
+    nova stop test_iso8
  * then I restart the nova-compute service when instance in powering-off 
task-state
  
  Expected result
  ===
  When nova-compute service restart,the instance test_iso8 should be stopped.
  
  Actual result
  =
  After the nova-compute service restart, the instance test_iso8 is active.
  The nova-compute.log has error information.
  
  Environment
  ===
- 1. Exact version of OpenStack you are running. 
-Mitaka
+ 1. Exact version of OpenStack you are running.
+    Mitaka
  2. Which hypervisor did you use?
     Libvirt + KVM
  3. Which networking type did you use?
      Neutron with OpenVSwitch
  
  Logs & Configs
  ==
  [root@slot4 ~(keystone_admin)]# nova boot --flavor 2 --image 
c61966bd-7969-40af-9f9b-ed282fb25bdf --

[Yahoo-eng-team] [Bug 1585986] [NEW] Add CLI command about region

2016-05-26 Thread Tina Kevin
Public bug reported:

We have no region-related commands in the current version. I think we need
to create a set of commands for administrators:
"region-create/delete/list/get/update".

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: command region

** Tags added: region

** Tags added: command

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1585986

Title:
  Add CLI command about region

Status in OpenStack Identity (keystone):
  New

Bug description:
  We have no region-related commands in the current version. I think we need
  to create a set of commands for administrators:
  "region-create/delete/list/get/update".

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1585986/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp