[Yahoo-eng-team] [Bug 1769817] [NEW] Ironic serial console doesn't get disabled when instance is deleted

2018-05-07 Thread Hironori Shiina
Public bug reported:

Description
===========
Ironic serial console doesn't get disabled when instance is deleted.

I filed the bug in Nova because it seems better to request disabling the
console from the virt driver when destroying an instance.
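
A minimal sketch of that idea in the ironic driver's destroy path; the hook
point and the helper names (_validate_instance_and_node, _unprovision) mirror
nova's driver but are assumptions here, while node.set_console_mode is the
existing ironic API call:

    # Method of IronicDriver (sketch).
    def destroy(self, context, instance, network_info,
                block_device_info=None, destroy_disks=True):
        """Sketch: close the serial console before unprovisioning."""
        node = self._validate_instance_and_node(instance)
        if node.console_enabled:
            # Ask ironic to stop the socat console so clients are
            # disconnected as soon as the instance is deleted.
            self.ironicclient.call('node.set_console_mode',
                                   node.uuid, False)
        self._unprovision(instance, node)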


Steps to reproduce
==================

* Enroll an ironic node with socat console interface.
* Create an instance in nova on the ironic node.
* Connect a serial console from Horizon.
* Delete the instance from a terminal.

Expected result
===============
The console gets closed immediately.

Actual result
=============
The console remains accessible after the instance is deleted and node
cleaning starts in ironic.

Environment
===========
1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/

Devstack with master

2. Which hypervisor did you use?
   (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
   What's the version of that?

ironic

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ironic

https://bugs.launchpad.net/bugs/1769817

[Yahoo-eng-team] [Bug 1765334] [NEW] Ironic resource class may not be put into inventory in Pike

2018-04-19 Thread Hironori Shiina
Public bug reported:

Description
===========
An instance may not be created with a flavor that uses a resource class for
a bare metal node, when the resource class is set on the node with the
'openstack baremetal node set' command after the node has been enrolled.

This issue seems to have been fixed in Queens by this patch[1].

[1] https://review.openstack.org/#/c/518294/
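
For reference, the node's resource class only matches the flavor once it is
exposed to placement as a CUSTOM_* inventory key. A self-contained sketch of
the normalization rule (nova implements this in nova.objects.fields; the
function name here is illustrative):

    import re

    def normalize_rc_name(name):
        """Turn an ironic resource class into a placement custom RC name:
        upper-case, map anything outside [A-Z0-9_] to '_', prefix CUSTOM_."""
        if name is None:
            return None
        return 'CUSTOM_' + re.sub(r'[^A-Z0-9_]', '_', name.upper())

    # A node set with --resource-class baremetal should show up in the
    # provider inventory as:
    assert normalize_rc_name('baremetal') == 'CUSTOM_BAREMETAL'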

Steps to reproduce
==================

1. Enroll a node in ironic without a resource class
  $ openstack baremetal node create mynode

2. Set a resource class on the node later
  $ openstack baremetal node set mynode --resource-class baremetal

3. Associate a flavor with the resource class
  $ openstack flavor set baremetal --property resources:CUSTOM_BAREMETAL=1

4. Create an instance with the flavor
  $ openstack server create myinstance --flavor baremetal

Expected result
===============
An instance is created on a node with the specified resource class.

Actual result
=============
No valid host was found.

A custom resource class was not registered into the resource provider.
$ curl -sH "X-Auth-Token: $token" -X GET $url/resource_providers/$uuid/inventories | python -m json.tool
{
    "inventories": {
        "DISK_GB": {
            "allocation_ratio": 1.0,
            "max_unit": 4080,
            "min_unit": 1,
            "reserved": 0,
            "step_size": 1,
            "total": 4080
        },
        "MEMORY_MB": {
            "allocation_ratio": 1.0,
            "max_unit": 196608,
            "min_unit": 1,
            "reserved": 0,
            "step_size": 1,
            "total": 196608
        },
        "VCPU": {
            "allocation_ratio": 1.0,
            "max_unit": 28,
            "min_unit": 1,
            "reserved": 0,
            "step_size": 1,
            "total": 28
        }
    },
    "resource_provider_generation": 1
}

Environment
===========
1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/

Pike

2. Which hypervisor did you use?
   (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)

Ironic

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ironic placement

https://bugs.launchpad.net/bugs/1765334

[Yahoo-eng-team] [Bug 1751472] [NEW] InventoryInUse exception is periodically logged as ERROR

2018-02-24 Thread Hironori Shiina
Public bug reported:

After a bare metal instance creation starts, an InventoryInUse
exception is logged as ERROR with a stack trace in the n-cpu log.

The ironic virt driver returns an empty inventory for a node which is
allocated[1]. Because of this, the resource tracker tries to delete the
existing inventory, which causes a conflict error since the resource
provider for the ironic node still has allocations. A warning message used
to be logged for this conflict error[2]. After the recent change[3], an
InventoryInUse exception is raised[4] and logged as ERROR[5]. A possible
mitigation is sketched after the references.

[1] https://github.com/openstack/nova/blob/26c593c91f008caab92ed52156dfe2d898955d3f/nova/virt/ironic/driver.py#L780
[2] https://github.com/openstack/nova/commit/26c593c91f008caab92ed52156dfe2d898955d3f#diff-94f87e728df6465becce5241f3da53c8L994
[3] https://github.com/openstack/nova/commit/26c593c91f008caab92ed52156dfe2d898955d3f#diff-94f87e728df6465becce5241f3da53c8
[4] https://github.com/openstack/nova/blob/26c593c91f008caab92ed52156dfe2d898955d3f/nova/scheduler/client/report.py#L878
[5] https://github.com/openstack/nova/blob/26c593c91f008caab92ed52156dfe2d898955d3f/nova/compute/manager.py#L7244
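
A possible mitigation, as a sketch only (not the merged fix): treat
InventoryInUse as an expected condition in the periodic update task and log
it below ERROR. The module paths follow nova, but the handling itself is an
assumption:

    from oslo_log import log as logging

    from nova import exception

    LOG = logging.getLogger(__name__)

    # Method of ComputeManager (sketch).
    def update_available_resource_for_node(self, context, nodename):
        """Sketch: don't log expected inventory conflicts as ERROR."""
        rt = self._get_resource_tracker()
        try:
            rt.update_available_resource(context, nodename)
        except exception.InventoryInUse as e:
            # Expected while an ironic node is allocated: the driver reports
            # an empty inventory, but placement refuses to delete inventory
            # that still has allocations against it.
            LOG.warning('Ignoring inventory update conflict for node %s: %s',
                        nodename, e)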

-----
The following log is from an ironic job[6].

[6] http://logs.openstack.org/19/546919/2/check/ironic-tempest-dsvm-ipa-partition-pxe_ipmitool-tinyipa-python3/2737ab0/logs/screen-n-cpu.txt.gz?level=DEBUG#_Feb_22_11_13_08_848696

Feb 22 11:13:08.848696 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: 
DEBUG nova.virt.ironic.driver [None req-10cd394d-b1be-4541-85ed-ff2275343fb5 
service nova] Node 42ae69bd-c860-4eaa-8a36-fdee78425714 is not ready for a 
deployment, reporting an empty inventory for it. Node's provision state is 
deploying, power state is power off and maintenance is False. {{(pid=14029) 
get_inventory /opt/stack/new/nova/nova/virt/ironic/driver.py:752}}
Feb 22 11:13:08.956620 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: 
DEBUG nova.scheduler.client.report [None 
req-10cd394d-b1be-4541-85ed-ff2275343fb5 service nova] Refreshing aggregate 
associations for resource provider fdd77c1d-5b1f-4a9a-b168-0fa93362b95d, 
aggregates: None {{(pid=14029) _refresh_associations 
/opt/stack/new/nova/nova/scheduler/client/report.py:773}}
Feb 22 11:13:08.977097 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: 
DEBUG nova.virt.ironic.driver [None req-10cd394d-b1be-4541-85ed-ff2275343fb5 
service nova] The flavor extra_specs for Ironic instance 
803d864c-542e-4bb4-a89a-38da01cb8409 have been updated for custom resource 
class 'baremetal'. {{(pid=14029) _pike_flavor_migration 
/opt/stack/new/nova/nova/virt/ironic/driver.py:545}}
Feb 22 11:13:08.994233 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: 
DEBUG nova.scheduler.client.report [None 
req-10cd394d-b1be-4541-85ed-ff2275343fb5 service nova] Refreshing trait 
associations for resource provider fdd77c1d-5b1f-4a9a-b168-0fa93362b95d, 
traits: None {{(pid=14029) _refresh_associations 
/opt/stack/new/nova/nova/scheduler/client/report.py:784}}
Feb 22 11:13:09.058940 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: 
INFO nova.scheduler.client.report [None 
req-10cd394d-b1be-4541-85ed-ff2275343fb5 service nova] 
[req-55086c0b-9068-49fb-ae94-cd870ab96cab] Inventory update conflict for 
fdd77c1d-5b1f-4a9a-b168-0fa93362b95d with generation ID 2
Feb 22 11:13:09.059437 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: 
DEBUG oslo_concurrency.lockutils [None req-10cd394d-b1be-4541-85ed-ff2275343fb5 
service nova] Lock "compute_resources" released by 
"nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: 
held 0.249s {{(pid=14029) inner 
/usr/local/lib/python3.5/dist-packages/oslo_concurrency/lockutils.py:285}}
Feb 22 11:13:09.062075 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: 
ERROR nova.compute.manager [None req-10cd394d-b1be-4541-85ed-ff2275343fb5 
service nova] Error updating resources for node 
42ae69bd-c860-4eaa-8a36-fdee78425714.: nova.exception.InventoryInUse: Inventory 
for ''CUSTOM_BAREMETAL'' on resource provider 
'fdd77c1d-5b1f-4a9a-b168-0fa93362b95d' in use.
Feb 22 11:13:09.062229 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: 
ERROR nova.compute.manager Traceback (most recent call last):
Feb 22 11:13:09.062343 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: 
ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 7245, in 
update_available_resource_for_node
Feb 22 11:13:09.062454 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: 
ERROR nova.compute.manager rt.update_available_resource(context, nodename)
Feb 22 11:13:09.062562 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: 
ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/compute/resource_tracker.py", line 680, in 
update_available_resource
Feb 22 11:13:09.062669 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: 
ERROR nova.compute.manager self._update_available_resource(context, 

[Yahoo-eng-team] [Bug 1749629] [NEW] VIFs are not detached from ironic node when unprovision fails

2018-02-14 Thread Hironori Shiina
Public bug reported:

Description
===========
When a bare metal instance is deleted, network resources are deallocated
even if ironic fails to unprovision the node. The VIFs are then removed
although they are never detached from the ironic node. The instance goes to
ERROR state and can be deleted by requesting the deletion again, without any
further call to ironic. In ironic, the node goes to the 'error' provision
state after the unprovision failure. Although the node can be recovered by
undeploying it with the ironic API, the VIF UUIDs still remain on the node.
This causes an error the next time the node is deployed, because ironic
tries to bind the node to the deleted VIFs. It would be better to detach the
VIFs even if unprovisioning fails, as sketched below.
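
A minimal sketch of the suggested behavior in the ironic driver's destroy
path (the surrounding helpers are assumptions modeled on nova's driver; the
point is the try/finally):

    # Method of IronicDriver (sketch).
    def destroy(self, context, instance, network_info,
                block_device_info=None, destroy_disks=True):
        """Sketch: detach VIFs even when unprovisioning fails."""
        node = self._validate_instance_and_node(instance)
        try:
            self._unprovision(instance, node)
        finally:
            # Always remove the VIF IDs from the ironic node so a recovered
            # node is not bound to VIFs that no longer exist in neutron.
            self._unplug_vifs(node, instance, network_info)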

Steps to reproduce
==================
* Create an instance.
* Delete the instance; the deletion fails due to an ironic bug and the
  ironic node goes to the 'error' provision state.
* Recover the ironic node with:
  openstack baremetal node undeploy <node>
* Create an instance again; the previous node is chosen.

Expected result
===============
The VIFs are detached from the ironic node even though unprovisioning fails,
so the recovered node can be deployed again.

Actual result
=============
The VIF UUIDs remain on the ironic node, and the next deployment of the node
fails because ironic tries to bind the node to the already-deleted VIFs.

Environment
===========
Deploying Ironic with DevStack[1] with master branch.

[1] https://docs.openstack.org/ironic/latest/contributor/dev-quickstart.html#deploying-ironic-with-devstack

Logs & Configs
==============

n-cpu log when undeploy failed
------------------------------
Feb 14 21:24:51 compute nova-compute[17722]: INFO nova.compute.manager [None 
req-420e3a3a-3af4-4662-877e-d19fd24ea036 service nova] [instance: 
651f3a00-6074-4346-a229-97e8874eed36] Successfully reverted task state from 
deleting on failure for instance.
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server 
[None req-420e3a3a-3af4-4662-877e-d19fd24ea036 service nova] Exception during 
message handling: NovaException: Error destroying the instance on node 
8a2eea80-d53a-419e-a56c-9981f248cf91. Provision state still 'deleting'.
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server 
Traceback (most recent call last):
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server   
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", 
line 163, in _process_incoming
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server
 res = self.dispatcher.dispatch(message)
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server   
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 220, in dispatch
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server
 return self._do_dispatch(endpoint, method, ctxt, args)
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server   
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 190, in _do_dispatch 
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server
 result = func(ctxt, **new_args)
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server   
File "/opt/stack/nova/nova/exception_wrapper.py", line 76, in wrapped
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server
 function_name, call_dict, binary)
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server   
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, 
in __exit__
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server
 self.force_reraise()
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server   
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, 
in force_reraise
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server
 six.reraise(self.type_, self.value, self.tb)
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server   
File "/opt/stack/nova/nova/exception_wrapper.py", line 67, in wrapped
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server
 return f(self, context, *args, **kw)
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server   
File "/opt/stack/nova/nova/compute/manager.py", line 186, in decorated_function
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server
 "Error: %s", e, instance=instance)
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server   
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, 
in __exit__
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server
 self.force_reraise()
Feb 14 21:24:52 compute nova-compute[17722]: ERROR oslo_messaging.rpc.server   
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, 
in force_reraise
Feb 14 21:24:52 

[Yahoo-eng-team] [Bug 1715646] [NEW] Compute host for ironic is not mapped to a cell if no ironic node is created

2017-09-07 Thread Hironori Shiina
Public bug reported:

The 'nova-manage cell_v2 discover_hosts' command discovers hosts based on
compute node records.
When the ironic virt driver is used, a compute node record is created only
after a node is created in ironic. If a nova-compute host is set up before
any node is created in ironic, the discover_hosts command cannot discover
the host; the command has to be run again after a node is created in ironic.

I'm not sure if this is a bug.
If not, it would be better to add a note to ironic install guide.
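
For illustration, a simplified sketch of why discovery misses the host (the
real logic lives around nova.objects.host_mapping; the names below are
illustrative):

    def discover_hosts(compute_nodes, mapped_hosts):
        """Sketch: hosts are discovered from compute node records only."""
        new_hosts = []
        for node in compute_nodes:   # empty list for a fresh ironic host
            if node.host not in mapped_hosts:
                new_hosts.append(node.host)
        return new_hosts

    # With no ironic node enrolled yet there are no compute node records,
    # so the nova-compute host is never mapped to a cell even though its
    # service is up and running.
    assert discover_hosts([], mapped_hosts=set()) == []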

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1715646


[Yahoo-eng-team] [Bug 1714248] [NEW] Compute node HA for ironic doesn't work due to the name duplication of Resource Provider

2017-08-31 Thread Hironori Shiina
Public bug reported:

Description
===========
In an environment where there are multiple compute nodes with the ironic
driver, when a compute node goes down, another compute node cannot take over
the ironic nodes.

Steps to reproduce
==================
1. Start multiple compute nodes with the ironic driver.
2. Register one node in ironic.
3. Stop the compute node which manages the ironic node.
4. Create an instance.

Expected result
===============
The instance is created; another compute node takes over the ironic node.

Actual result
=============
The instance creation fails; the scheduler does not select the ironic node.

Environment
===========
1. Exact version of OpenStack you are running.
openstack-nova-scheduler-15.0.6-2.el7.noarch
openstack-nova-console-15.0.6-2.el7.noarch
python2-novaclient-7.1.0-1.el7.noarch
openstack-nova-common-15.0.6-2.el7.noarch
openstack-nova-serialproxy-15.0.6-2.el7.noarch
openstack-nova-placement-api-15.0.6-2.el7.noarch
python-nova-15.0.6-2.el7.noarch
openstack-nova-novncproxy-15.0.6-2.el7.noarch
openstack-nova-api-15.0.6-2.el7.noarch
openstack-nova-conductor-15.0.6-2.el7.noarch

2. Which hypervisor did you use?
ironic

Details
=======
When a nova-compute goes down, another nova-compute takes over the ironic
nodes managed by the failed nova-compute by re-balancing the hash ring. The
active nova-compute then tries to create a new resource provider with the
new ComputeNode object UUID and the hypervisor name (the ironic node
name)[1][2][3]. This creation fails with a conflict (409) because a resource
provider with the same name was already created by the failed nova-compute;
a sketch of the conflict follows the references.

When a new instance is requested, the scheduler gets only the old resource
provider for the ironic node[4]. The ironic node is therefore not selected:

WARNING nova.scheduler.filters.compute_filter [req-
a37d68b5-7ab1-4254-8698-502304607a90 7b55e61a07304f9cab1544260dcd3e41
e21242f450d948d7af2650ac9365ee36 - - -] (compute02, 8904aeeb-a35b-4ba3
-848a-73269fdde4d3) ram: 4096MB disk: 849920MB io_ops: 0 instances: 0
has not been heard from in a while

[1] https://github.com/openstack/nova/blob/stable/ocata/nova/compute/resource_tracker.py#L464
[2] https://github.com/openstack/nova/blob/stable/ocata/nova/scheduler/client/report.py#L630
[3] https://github.com/openstack/nova/blob/stable/ocata/nova/scheduler/client/report.py#L410
[4] https://github.com/openstack/nova/blob/stable/ocata/nova/scheduler/filter_scheduler.py#L183
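
A sketch of the conflict against the placement API; the endpoint, token, and
UUIDs below are made up for illustration:

    import requests

    PLACEMENT = 'http://controller/placement'   # assumed endpoint
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN'}   # assumed token

    def create_rp(name, uuid):
        """POST /resource_providers; placement accepts an explicit uuid."""
        return requests.post(PLACEMENT + '/resource_providers',
                             json={'name': name, 'uuid': uuid},
                             headers=HEADERS)

    # The failed compute already registered the ironic node under its name:
    #   create_rp('node-01', '1111...')  -> 201 Created
    # After re-balancing, the survivor retries with a new ComputeNode UUID
    # but the same hypervisor name:
    #   create_rp('node-01', '2222...')  -> 409 Conflict, so the takeover
    #   never produces a usable resource provider.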

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1714248

[Yahoo-eng-team] [Bug 1713641] [NEW] Volume is not unreserved after scheduling fails

2017-08-29 Thread Hironori Shiina
Public bug reported:

When an instance is created booting from a volume, the volume is reserved
before scheduling[1]. If scheduling fails due to a lack of valid hosts, the
instance goes to ERROR state and the volume is left in the 'attaching'
state. The volume reservation is not removed even if the instance is
deleted.

[1]
https://github.com/openstack/nova/blob/5d3a11b9c9a6a5aecd46ad7ecc635215184d930e/nova/compute/api.py#L3577
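
A sketch of the missing rollback; unreserve_volume is the existing cinder
API wrapper in nova, while the hook point and function name are assumptions:

    from nova.volume import cinder

    def _unreserve_volumes_on_schedule_failure(context,
                                               block_device_mappings):
        """Sketch: return reserved boot volumes to 'available' when
        scheduling fails with no valid host."""
        volume_api = cinder.API()
        for bdm in block_device_mappings:
            if bdm.volume_id:
                # Moves the volume out of the 'attaching' state.
                volume_api.unreserve_volume(context, bdm.volume_id)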

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1713641


[Yahoo-eng-team] [Bug 1695744] [NEW] Ironic driver doesn't check an error of power state change

2017-06-04 Thread Hironori Shiina
Public bug reported:

When starting an instance with the ironic virt driver, the instance's vm
state is set to ACTIVE even if ironic fails to power on the server.

The cause is that the driver doesn't check the node's 'last_error' field
after the power action finishes. Ironic sets this field when the power
action fails.
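
A sketch of the missing check in the power-state polling loop; the
surrounding function is illustrative, while 'target_power_state' and
'last_error' are real ironic node fields:

    from oslo_service import loopingcall

    from nova import exception

    def _wait_for_power_state(ironicclient, node_uuid):
        """Sketch: fail the power action if ironic recorded an error."""
        node = ironicclient.call('node.get', node_uuid)
        if node.target_power_state is not None:
            return  # action still in progress, keep polling
        if node.last_error:
            # Without this check the driver treats the finished action as
            # a success and the instance is set to ACTIVE.
            raise exception.NovaException(
                'Power action failed on node %s: %s'
                % (node_uuid, node.last_error))
        raise loopingcall.LoopingCallDone()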

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1695744


[Yahoo-eng-team] [Bug 1547362] [NEW] A driver API for triggering crash dump should abstract an implementation

2016-02-18 Thread Hironori Shiina
Public bug reported:

We added a new API for triggering a crash dump in Mitaka.
At first, "inject_nmi" was proposed as the new API name.
In the discussion of the spec[1], we decided to abstract how the crash dump
is triggered, because the mechanism depends on the hypervisor.

Since the function was partially implemented in Liberty, the virt driver
already has an API named "inject_nmi".
We should rename that API to abstract the implementation, following the
spec approved in Mitaka.

[1] https://review.openstack.org/#/c/229255/
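
A sketch of the rename this asks for: a hypervisor-neutral driver method
whose body hides the mechanism (nova later adopted the name
trigger_crash_dump; the libvirt body below is illustrative only):

    class ComputeDriver(object):
        def trigger_crash_dump(self, instance):
            """Trigger a crash dump in the instance (mechanism-neutral)."""
            raise NotImplementedError()

    class LibvirtDriver(ComputeDriver):
        def trigger_crash_dump(self, instance):
            # On libvirt/x86 the crash dump is triggered by injecting an
            # NMI, but that detail no longer leaks into the API name.
            guest = self._get_guest(instance)  # assumed helper
            guest.inject_nmi()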

** Affects: nova
 Importance: Undecided
     Assignee: Hironori Shiina (shiina-hironori)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Hironori Shiina (shiina-hironori)

https://bugs.launchpad.net/bugs/1547362


[Yahoo-eng-team] [Bug 1400945] [NEW] User quota could be set greater than tenant quota

2014-12-09 Thread Hironori Shiina
Public bug reported:

Reproduce:
1. Create a new tenant and user.
$ keystone tenant-create --name quota --description "Quota test"
$ keystone user-create --name quotauser --pass quotauser
$ keystone user-role-add --user quotauser --tenant quota --role _member_

2. Set the tenant quota.
$ nova quota-update $tenant_id --instances 20

3. Set the user quota to a value greater than the tenant quota.
$ nova quota-update $tenant_id --user $user_id --instances 40

Result:
The user quota update succeeded. The user quota is now greater than the
tenant quota.

$ nova quota-show --tenant $tenant_id
+-------------+-------+
| Quota       | Limit |
+-------------+-------+
| instances   | 20    |

$ nova quota-show --tenant $tenant_id --user $user_id
+-------------+-------+
| Quota       | Limit |
+-------------+-------+
| instances   | 40    |
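
A sketch of the missing validation (self-contained; nova's actual quota
update path lives in the quota_sets API extension, and the function name is
illustrative):

    def validate_user_quota(project_limit, user_limit, resource):
        """Sketch: reject a per-user limit above the per-project limit.

        -1 means unlimited, so an unlimited project quota allows any value.
        """
        if project_limit != -1 and user_limit > project_limit:
            raise ValueError(
                'user quota for %s (%d) cannot exceed project quota (%d)'
                % (resource, user_limit, project_limit))

    # validate_user_quota(20, 40, 'instances')  # would raise: 40 > 20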

** Affects: nova
 Importance: Undecided
 Assignee: Hironori Shiina (shiina-hironori)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Hironori Shiina (shiina-hironori)

https://bugs.launchpad.net/bugs/1400945


[Yahoo-eng-team] [Bug 1393329] [NEW] Trailing whitespaces pass IP address validation

2014-11-17 Thread Hironori Shiina
Public bug reported:

Trailing whitespace in an IP address is not detected by the validation.
The stray whitespace causes trouble later.

In the following case, the '\r' in the IP address is not detected.
# neutron subnet-show ----
+------------------+------------------------------------------------+
| Field            | Value                                          |
+------------------+------------------------------------------------+
| allocation_pools | {"start": "10.1.1.240\r", "end": "10.1.1.250"} |
 :
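
A sketch of a stricter validator that catches the stray whitespace up front
(netaddr is what neutron uses; the message format and function name are
illustrative):

    import netaddr

    def validate_ip_address(data):
        """Sketch: return an error message for invalid input, else None."""
        # Reject surrounding whitespace explicitly; parsers can be lenient
        # about characters such as '\r', which is how bad values slip in.
        if not isinstance(data, str) or data != data.strip():
            return '%r is not a valid IP address' % (data,)
        try:
            netaddr.IPAddress(data)
        except (netaddr.AddrFormatError, ValueError):
            return '%r is not a valid IP address' % (data,)
        return None

    assert validate_ip_address('10.1.1.240') is None
    assert validate_ip_address('10.1.1.240\r') is not None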

** Affects: neutron
 Importance: Undecided
 Assignee: Hironori Shiina (shiina-hironori)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hironori Shiina (shiina-hironori)

https://bugs.launchpad.net/bugs/1393329