[Yahoo-eng-team] [Bug 1803331] [NEW] Root disk lost when resizing instance from imagebackend to rbd backed flavor

2018-11-14 Thread Logan V
Public bug reported:

We have flavor classes using different nova disk backends that are
separated using host aggregates. For example, we have a flavor named
l1.tiny which is using imagebackend, and s1.small using rbd backend. The
hypervisors configured for imagebackend are added to the host aggregate
where l1.* instances are scheduled, and the rbd hypervisors are in an
aggregate where s1.* instances are scheduled.

When resizing an instance from l1.tiny to s1.small, the instance fails
to resize and enters error state. The root disk is also lost during the
failed resize. The host of the instance is set to one of the s1.*
aggregate HVs, and the imagebackend disk is no longer present on the
original l1.* hypervisor.

The error provided in 'instance show' is:

| fault | [errno 2] error opening image 5a8ab7a3-3e59-442c-a603-2c24652788cb_disk at snapshot None (code: 500, created: 2018-11-14T11:03:11Z) |

Fault details:

  File "/openstack/venvs/nova-untagged/local/lib/python2.7/site-packages/nova/compute/manager.py", line 204, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/openstack/venvs/nova-untagged/local/lib/python2.7/site-packages/nova/compute/manager.py", line 4062, in finish_resize
    self._set_instance_obj_error_state(context, instance)
  File "/openstack/venvs/nova-untagged/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/openstack/venvs/nova-untagged/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/openstack/venvs/nova-untagged/local/lib/python2.7/site-packages/nova/compute/manager.py", line 4050, in finish_resize
    disk_info, image_meta)
  File "/openstack/venvs/nova-untagged/local/lib/python2.7/site-packages/nova/compute/manager.py", line 4012, in _finish_resize
    old_instance_type)
  File "/openstack/venvs/nova-untagged/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/openstack/venvs/nova-untagged/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/openstack/venvs/nova-untagged/local/lib/python2.7/site-packages/nova/compute/manager.py", line 4007, in _finish_resize
    block_device_info, power_on)
  File "/openstack/venvs/nova-untagged/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7454, in finish_migration
    fallback_from_host=migration.source_compute)
  File "/openstack/venvs/nova-untagged/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3160, in _create_image
    fallback_from_host)
  File "/openstack/venvs/nova-untagged/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3264, in _create_and_inject_local_root
    backend.create_snap(libvirt_utils.RESIZE_SNAPSHOT_NAME)
  File "/openstack/venvs/nova-untagged/local/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 941, in create_snap
    return self.driver.create_snap(self.rbd_name, name)
  File "/openstack/venvs/nova-untagged/local/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 392, in create_snap
    with RBDVolumeProxy(self, str(volume), pool=pool) as vol:
  File "/openstack/venvs/nova-untagged/local/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 78, in __init__
    driver._disconnect_from_rados(client, ioctx)
  File "/openstack/venvs/nova-untagged/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/openstack/venvs/nova-untagged/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/openstack/venvs/nova-untagged/local/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 74, in __init__
    read_only=read_only))
  File "rbd.pyx", line 1392, in rbd.Image.__init__ (/build/ceph-12.2.2/obj-x86_64-linux-gnu/src/pybind/rbd/pyrex/rbd.c:13540)
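
The tail of the trace shows the destination rbd host trying to snapshot a
root disk that was never copied into ceph. As a standalone illustration,
opening the missing disk with the rbd bindings directly fails the same way;
the pool name 'vms' and the ceph.conf path below are assumptions for a
typical nova rbd deployment, not values from this report:

    # Standalone sketch (not nova code): opening a nonexistent instance
    # disk in rbd fails like the fault above. Pool name and conf path are
    # assumed values for a typical nova rbd backend.
    import rados
    import rbd

    client = rados.Rados(conffile='/etc/ceph/ceph.conf')
    client.connect()
    ioctx = client.open_ioctx('vms')  # assumed images_rbd_pool
    try:
        # nova's RBDVolumeProxy does essentially this before create_snap();
        # the disk was never imported into ceph, so the open fails.
        image = rbd.Image(ioctx, '5a8ab7a3-3e59-442c-a603-2c24652788cb_disk')
        image.close()
    except rbd.ImageNotFound as exc:
        print('disk missing on rbd backend: %s' % exc)
    finally:
        ioctx.close()
        client.shutdown()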


We are currently seeing this behavior on Ocata. I'm not certain whether
more recent nova releases are affected as well.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1803331

Title:
  Root disk lost when resizing instance from imagebackend to rbd backed
  flavor

Status in OpenStack Compute (nova):
  New

Bug description:
  We have flavor classes using different nova disk backends that are
  separated using host aggregates. For example, we have a flavor named
  l1.tiny which is using imagebackend, and s1.small using rbd backend.
  The hypervisors configured for imagebackend are added to the host
  aggregate where l1.* instances are scheduled, and the rbd hypervisors
  are in an aggregate where s1.* instances are scheduled.

[Yahoo-eng-team] [Bug 1751349] [NEW] Keystone auth parameters cannot be configured in [keystone] section

2018-02-23 Thread Logan V
Public bug reported:

I am seeing nova-api attempting to use the keystone public endpoint when
/v2.1/os-quota-sets is called on my Pike deployment. This is not valid
in my environment; the API must use the internal endpoint to reach
keystone. When the public endpoint is used, the connection sits in
SYN_SENT state in netstat until it times out after a minute or two.

Hacking the endpoint_filter at
https://github.com/openstack/nova/blob/d536bec9fc098c9db8d46f39aab30feb0783e428/nova/api/openstack/identity.py#L43-L46
to include interface=internal fixes the issue.
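
For illustration, a self-contained keystoneauth1 sketch of the same call
with the interface pinned in the endpoint_filter; the auth URL and
credentials are placeholders, not values from this deployment:

    # Sketch only: pin the keystone endpoint interface via endpoint_filter,
    # as in the workaround above. Credentials and URL are fake.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='http://keystone.internal:5000/v3',
                       username='nova', password='secret',
                       project_name='service',
                       user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)

    # Without 'interface' here, keystoneauth selects the public endpoint
    # from the service catalog, which is unreachable in this environment.
    resp = sess.get('/projects/PROJECT_ID',
                    endpoint_filter={'service_type': 'identity',
                                     'version': (3, 0),
                                     'interface': 'internal'},
                    raise_exc=False)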

Unless I am mistaken this issue still exists in master:
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/api/openstack/identity.py#L33-L35

Something similar to the [placement] section should be implemented,
allowing os_interface to be configured.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1751349

Title:
  Keystone auth parameters cannot be configured in [keystone] section

Status in OpenStack Compute (nova):
  New

Bug description:
  I am seeing nova-api attempting to use the keystone public endpoint
  when /v2.1/os-quota-sets is called on my Pike deployment. This is not
  valid in my environment; the API must use the internal endpoint to
  reach keystone. When the public endpoint is used, the connection sits
  in SYN_SENT state in netstat until it times out after a minute or two.

  Hacking the endpoint_filter at
  
https://github.com/openstack/nova/blob/d536bec9fc098c9db8d46f39aab30feb0783e428/nova/api/openstack/identity.py#L43-L46
  to include interface=internal fixes the issue.

  Unless I am mistaken this issue still exists in master:
  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/api/openstack/identity.py#L33-L35

  Something similar to the [placement] section should be implemented,
  allowing os_interface to be configured.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1751349/+subscriptions



[Yahoo-eng-team] [Bug 1740951] [NEW] Unable to dump policy

2018-01-02 Thread Logan V
Public bug reported:

I'm having issues dumping policy from Keystone in Pike

root@aio1-keystone-container-398c6a0f:~# /openstack/venvs/keystone-16.0.6/bin/oslopolicy-policy-generator --namespace keystone
WARNING:stevedore.named:Could not load keystone
Traceback (most recent call last):
  File "/openstack/venvs/keystone-16.0.6/bin/oslopolicy-policy-generator", line 11, in <module>
    sys.exit(generate_policy())
  File "/openstack/venvs/keystone-16.0.6/lib/python2.7/site-packages/oslo_policy/generator.py", line 233, in generate_policy
    _generate_policy(conf.namespace, conf.output_file)
  File "/openstack/venvs/keystone-16.0.6/lib/python2.7/site-packages/oslo_policy/generator.py", line 178, in _generate_policy
    enforcer = _get_enforcer(namespace)
  File "/openstack/venvs/keystone-16.0.6/lib/python2.7/site-packages/oslo_policy/generator.py", line 74, in _get_enforcer
    enforcer = mgr[namespace].obj
  File "/openstack/venvs/keystone-16.0.6/lib/python2.7/site-packages/stevedore/extension.py", line 314, in __getitem__
    return self._extensions_by_name[name]
KeyError: 'keystone'

Normally it works like this with Nova:
root@aio1-nova-api-os-compute-container-3589c25e:~# /openstack/venvs/nova-16.0.6/bin/oslopolicy-policy-generator --namespace nova
"os_compute_api:os-evacuate": "rule:admin_api"
"os_compute_api:servers:create": "rule:admin_or_owner"
"os_compute_api:os-extended-volumes": "rule:admin_or_owner"
"os_compute_api:servers:create:forced_host": "rule:admin_api"
"os_compute_api:os-aggregates:remove_host": "rule:admin_api"
...

IRC convo regarding this bug:
[04:00:26PM] logan- hello. I'm trying to use oslopolicy-policy-generator to dump the base RBAC so it can be combined with my policy overrides and provided to horizon. with nova i'm able to dump RBAC using "/path/to/nova/venv/bin/oslopolicy-policy-generator --namespace nova", but doing the same with keystone using "keystone" or "identity" as the namespace does not work.
[04:01:39PM] @lbragstad logan-: do you have keystone installed?
[04:01:57PM] @lbragstad let me see if i can recreate
[04:03:30PM] logan- o/ @lbragstad. yep keystone's installed. here's the venv and output for the oslopolicy command at the bottom: http://paste.openstack.org/raw/636624/
[04:03:53PM] @lbragstad huh - weird
[04:03:56PM] @lbragstad i can recreate
[04:04:48PM] ayoung @lbragstad, logan- I bet it is a dependency issue
[04:05:25PM] ayoung trying to load Keystone fails cuz some other library is missing, and I bet that is pulled in from oslopolicy polgen
[04:07:05PM] ayoung oslo.policy.policies =
[04:07:05PM] ayoung # With the move of default policy in code list_rules returns a list of
[04:07:05PM] ayoung # the default defined policies.
[04:07:05PM] ayoung keystone = keystone.common.policies:list_rules
[04:07:12PM] ayoung that is from setup.cfg
[04:07:21PM] ayoung is that what it is trying to load?
[04:07:36PM] @lbragstad well - it should be an entrypoint in oslo.policy
[04:07:47PM] @lbragstad keystone is just responsible for exposing the namespace
[04:07:59PM] @lbragstad https://github.com/openstack/keystone/blob/master/config-generator/keystone-policy-generator.conf
[04:08:26PM] @lbragstad which is the same as what nova defines
[04:08:28PM] @lbragstad https://github.com/openstack/nova/blob/master/etc/nova/nova-policy-generator.conf
[04:09:31PM] ayoung seems like it is not registered
[04:12:16PM] ayoung yep, reproduced it here, too
[04:15:32PM] @lbragstad i think we're missing this entrypoint
[04:15:33PM] @lbragstad https://docs.openstack.org/oslo.policy/latest/user/usage.html#merged-file-generation
[04:15:45PM] @lbragstad which just needs something to return the _ENFORCER
[04:15:55PM] @lbragstad so keystone.common.policy:get_enforcer
[04:15:59PM] @lbragstad or something like that
[04:16:24PM] @lbragstad logan-: certainly a bug
[04:16:35PM] @lbragstad logan-: would you be able to open up something in launchpad?
[04:16:53PM] @lbragstad we can get a patch up shortly, i think we're missing something with how we wire up the entry points
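
What lbragstad describes would look roughly like this; a sketch under the
assumptions from the conversation above (the function name and setup.cfg
wiring are the proposal, not a merged keystone patch):

    # Sketch of a keystone.common.policy:get_enforcer suitable for the
    # 'oslo.policy.enforcer' entrypoint. The setup.cfg wiring would be:
    #
    #   [entry_points]
    #   oslo.policy.enforcer =
    #       keystone = keystone.common.policy:get_enforcer
    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.CONF
    _ENFORCER = None

    def get_enforcer():
        """Return the Enforcer the oslopolicy CLI tools should introspect."""
        global _ENFORCER
        if _ENFORCER is None:
            _ENFORCER = policy.Enforcer(CONF)
            # A real implementation would also register keystone's in-code
            # defaults, e.g.:
            #   from keystone.common import policies
            #   _ENFORCER.register_defaults(policies.list_rules())
        return _ENFORCER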

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1740951

Title:
  Unable to dump policy

Status in OpenStack Identity (keystone):
  New

Bug description:
  I'm having issues dumping policy from Keystone in Pike

  root@aio1-keystone-container-398c6a0f:~# /openstack/venvs/keystone-16.0.6/bin/oslopolicy-policy-generator --namespace keystone
  WARNING:stevedore.named:Could not load keystone
  Traceback (most recent call last):
    File "/openstack/venvs/keystone-16.0.6/bin/oslopolicy-policy-generator", line 11, in <module>
      sys.exit(generate_policy())
    File "/openstack/venvs/keystone-16.0.6/lib/python2.7/site-packages/oslo_policy/generator.py", line 233, in generate_policy
      _generate_policy(conf.namespace, conf.output_file)

[Yahoo-eng-team] [Bug 1671921] [NEW] Boot from Volume invalid BDM

2017-03-10 Thread Logan V
Public bug reported:

Booting from volume in the new launch panel causes the following BDM:
{"server": {"name": "test", "imageRef": "", "availability_zone
  ": "us-dfw-1", "key_name": "test", "flavorRef": 
"9340639a-2883-48ca-811f-01519e527648", "block_device_mapping": [{"volume_id": 
"94426a92-5409-428d-8a26-81f94395d9ad", "delete_on_termination": "false", 
"device_name": "sda"}], "OS-DCF:diskConfig": "AUTO", "max_count": 1, 
"min_count": 1, "networks": [{"uuid
  ": "82ac236e-384a-420b-aa2c-3003f44cf782"}], "security_groups": [{"name": 
"4df780bf-b2c5-4c0e-acba-64b9b2f08eaa"}]}}

Nova rejects it with:
{"badRequest": {"message": "Block Device Mapping is Invalid: Boot sequence for 
the instance and image/block device mapping combin
  ation is not valid.", "code": 400}}}

It seems like it is missing a boot_index?
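
For comparison, a mapping nova does accept for boot-from-volume carries an
explicit boot_index. A hedged sketch of the equivalent v2-style entry,
reusing the volume from the request above (this illustrates the hypothesis,
not horizon's actual fix):

    # block_device_mapping_v2 entry with boot_index set; nova requires a
    # bootable device at index 0.
    bdm_v2 = [{
        "source_type": "volume",
        "destination_type": "volume",
        "uuid": "94426a92-5409-428d-8a26-81f94395d9ad",
        "boot_index": 0,
        "delete_on_termination": False,
    }]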

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1671921

Title:
  Boot from Volume invalid BDM

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Booting from volume in the new launch panel causes the following BDM:
  {"server": {"name": "test", "imageRef": "", "availability_zone
": "us-dfw-1", "key_name": "test", "flavorRef": 
"9340639a-2883-48ca-811f-01519e527648", "block_device_mapping": [{"volume_id": 
"94426a92-5409-428d-8a26-81f94395d9ad", "delete_on_termination": "false", 
"device_name": "sda"}], "OS-DCF:diskConfig": "AUTO", "max_count": 1, 
"min_count": 1, "networks": [{"uuid
": "82ac236e-384a-420b-aa2c-3003f44cf782"}], "security_groups": [{"name": 
"4df780bf-b2c5-4c0e-acba-64b9b2f08eaa"}]}}

  Nova rejects it with:
  {"badRequest": {"message": "Block Device Mapping is Invalid: Boot sequence 
for the instance and image/block device mapping combin
ation is not valid.", "code": 400}}}

  It seems like it is missing a boot_index?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1671921/+subscriptions



[Yahoo-eng-team] [Bug 1671288] [NEW] live_migration_uri to live_migration_scheme SSH settings

2017-03-08 Thread Logan V
Public bug reported:

I saw in the Ocata release notes that live_migration_uri is deprecated,
and there is mention of a new setting called live_migration_scheme.
However, the new config option live_migration_scheme does not appear in
the ocata configuration reference[1].

I am also curious how the live_migration_scheme setting could be used to
configure a migration URI similar to
"qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa"
[2] as it seems to only allow setting the scheme to qemu+ssh, but may
not offer the ability to configure the ssh settings like the key
location and verification.

[1] 
https://docs.openstack.org/ocata/config-reference/compute/config-options.html
[2] 
https://github.com/openstack/openstack-ansible-os_nova/commit/7c9a64b2ed972a605ef51b8f8af29ab2453e4b1c#diff-ca98b38be47a1d270f7d2d87697fac8fL279
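
To illustrate the gap, a sketch of what each option controls; this is an
assumption about the shape of the URI template, not nova's verbatim code:

    # live_migration_scheme only swaps the transport in a fixed template,
    # so there is no hook for ssh query parameters; the deprecated
    # live_migration_uri carried the whole string.
    def migration_uri_from_scheme(scheme, dest):
        return 'qemu+%s://%s/system' % (scheme, dest)

    legacy_uri = ('qemu+ssh://nova@%s/system'
                  '?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa')

    print(migration_uri_from_scheme('ssh', 'compute-02'))  # no keyfile hook
    print(legacy_uri % 'compute-02')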

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1671288

Title:
  live_migration_uri to live_migration_scheme SSH settings

Status in OpenStack Compute (nova):
  New

Bug description:
  I saw in the Ocata release notes that live_migration_uri is
  deprecated, and there is mention of a new setting called
  live_migration_scheme. However, the new config option
  live_migration_scheme does not appear in the ocata configuration
  reference[1].

  I am also curious how the live_migration_scheme setting could be used
  to configure a migration URI similar to
  "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa"
  [2] as it seems to only allow setting the scheme to qemu+ssh, but may
  not offer the ability to configure the ssh settings like the key
  location and verification.

  [1] 
https://docs.openstack.org/ocata/config-reference/compute/config-options.html
  [2] 
https://github.com/openstack/openstack-ansible-os_nova/commit/7c9a64b2ed972a605ef51b8f8af29ab2453e4b1c#diff-ca98b38be47a1d270f7d2d87697fac8fL279

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1671288/+subscriptions



[Yahoo-eng-team] [Bug 1601964] [NEW] Policy file distribution breaks security group panel

2016-07-11 Thread Logan V
Public bug reported:

When using POLICY_FILES_PATH and POLICY_FILES with Mitaka Horizon +
Default nova/neutron policy files, the security group CRUD options are
not showing up.

After I add "create_security_group": "" to the neutron policy file, the
create button shows up. This behavior did not occur on Liberty, I just
began seeing it as I started prepping for a Mitaka upgrade.
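
For context, a hedged sketch of the local_settings.py wiring involved; the
paths and file names are placeholders, not this deployment's values:

    # Horizon policy file distribution (sketch):
    POLICY_FILES_PATH = '/etc/openstack-dashboard'
    POLICY_FILES = {
        'identity': 'keystone_policy.json',
        'compute': 'nova_policy.json',
        'network': 'neutron_policy.json',
    }

    # Workaround from above, added to the neutron policy file so horizon's
    # policy check for the create button passes:
    #   "create_security_group": ""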

Horizon @ 9075f06d014e538e8e17af320ef323129aaa3b40
Nova @ 98b38df57bfed3802ce60ee52e4450871fccdbfa
Neutron @ cda226b9da1d4e4b1c045609e3a8352674b772df

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1601964

Title:
  Policy file distribution breaks security group panel

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When using POLICY_FILES_PATH and POLICY_FILES with Mitaka Horizon +
  Default nova/neutron policy files, the security group CRUD options are
  not showing up.

  After I add "create_security_group": "" to the neutron policy file,
  the create button shows up. This behavior did not occur on Liberty, I
  just began seeing it as I started prepping for a Mitaka upgrade.

  Horizon @ 9075f06d014e538e8e17af320ef323129aaa3b40
  Nova @ 98b38df57bfed3802ce60ee52e4450871fccdbfa
  Neutron @ cda226b9da1d4e4b1c045609e3a8352674b772df

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1601964/+subscriptions



[Yahoo-eng-team] [Bug 1489194] [NEW] hw_scsi_model from glance image is not used when booting instance from new volume

2015-08-26 Thread Logan V
Public bug reported:

When creating an instance backed by a cinder volume, the disk device /dev/vda 
is used regardless of image settings. I am using the following image metadata 
to set virtio-scsi driver on my instances:
hw_disk_bus=scsi
hw_scsi_model=virtio-scsi
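
For reference, setting those properties with the glance v2 client looks
roughly like this; the auth values and image UUID are placeholders:

    # Sketch: tag an image with the virtio-scsi properties above.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from glanceclient import Client

    auth = v3.Password(auth_url='http://keystone.internal:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default',
                       project_domain_id='default')
    glance = Client('2', session=session.Session(auth=auth))

    glance.images.update('IMAGE_UUID',
                         hw_disk_bus='scsi',
                         hw_scsi_model='virtio-scsi')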

When I boot instances using a normal root device ("boot from image"),
they are using /dev/sda and virtio-scsi as expected. When booting from
volume (either with a new volume or an existing image-based volume),
they use "", ignoring the image
metadata.

According to this spec:
http://specs.openstack.org/openstack/nova-specs/specs/juno/approved/add-virtio-scsi-bus-for-bdm.html

A "work item" was: "Nova retrieve “hw_scsi_model” property from volume’s
glance_image_metadata when booting from cinder volume"

I would expect this work is what would implement setting virtio-scsi on
volume backed instances, however none of the reviews I have looked
through for that spec appear to implement anything regarding volume
backed instances.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1489194

Title:
  hw_scsi_model from glance image is not used when booting instance from
  new volume

Status in OpenStack Compute (nova):
  New

Bug description:
  When creating an instance backed by a cinder volume, the disk device /dev/vda 
is used regardless of image settings. I am using the following image metadata 
to set virtio-scsi driver on my instances:
  hw_disk_bus=scsi
  hw_scsi_model=virtio-scsi

  When I boot instances using a normal root device ("boot from image"),
  they are using /dev/sda and virtio-scsi as expected. When booting from
  volume (either with a new volume or an existing image-based volume),
  they use "", ignoring the image
  metadata.

  According to this spec:
  http://specs.openstack.org/openstack/nova-specs/specs/juno/approved/add-virtio-scsi-bus-for-bdm.html

  A "work item" was: "Nova retrieve “hw_scsi_model” property from
  volume’s glance_image_metadata when booting from cinder volume"

  I would expect this work is what would implement setting virtio-scsi
  on volume backed instances, however none of the reviews I have looked
  through for that spec appear to implement anything regarding volume
  backed instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1489194/+subscriptions
