[Yahoo-eng-team] [Bug 1603513] [NEW] Nova boot from volume doesn't account for disk correctly
Public bug reported:

When you boot a nova instance from a volume (from Cinder) using the following command:

  nova boot --flavor 6 --block-device source=image,id=$SRCIMAGEUUID,dest=volume,size=10,shutdown=remove,bootindex=0 --key-name $KEYNAME testbfv$x

the disk accounting is not handled correctly and the scheduler errors out with this log message:

  2016-07-13 12:33:30.340 DEBUG nova.scheduler.filters.disk_filter [req-6a2c44a2-0912-4e2f-8dae-0005b19656ea demo demo] (devstack-mitaka-compute.pm.solidfire.net, devstack-mitaka-compute.pm.solidfire.net) ram: 3215MB disk: 4096MB io_ops: 2 instances: 2 does not have 20480 MB usable disk, it only has 4096.0 MB usable disk. from (pid=28366) host_passes /opt/stack/new/nova/nova/scheduler/filters/disk_filter.py:55

nova hypervisor-show before creating the instances (4 @ 10 GB each):

  | free_disk_gb | 44 |

Then, after creating 4 instances (on this hypervisor) at 10 GB each, nova hypervisor-show shows this:

  | free_disk_gb | 4 |

Since these root disks are Cinder volumes, they should not count against the hypervisor's disk space. This testing was done against a Mitaka devstack.

** Affects: nova
   Importance: Undecided
   Status: New

** Project changed: cinder => nova

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1603513

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1603513/+subscriptions
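For context, here is a minimal Python sketch (not Nova's actual DiskFilter code) of the accounting behavior the report argues for: when an instance is volume-backed, its root disk should not be charged against the hypervisor's local free disk. The flavor keys and the is_volume_backed flag below are simplified assumptions for illustration.

  # Illustrative sketch only -- not Nova's DiskFilter implementation.
  # It shows how a volume-backed root disk could be excluded from the
  # local-disk accounting that the scheduler performs per host.

  def requested_disk_mb(flavor, is_volume_backed):
      """Local disk (in MB) a request should consume on a host."""
      root_gb = flavor['root_gb']
      if is_volume_backed:
          # The root disk lives on a Cinder volume, not on the hypervisor,
          # so it should not count toward local disk usage.
          root_gb = 0
      return 1024 * (root_gb + flavor['ephemeral_gb'])

  def host_passes(free_disk_mb, flavor, is_volume_backed):
      """Does the host have enough local disk for this request?"""
      return free_disk_mb >= requested_disk_mb(flavor, is_volume_backed)

  if __name__ == '__main__':
      flavor = {'root_gb': 10, 'ephemeral_gb': 0}
      # A 10 GB boot-from-volume request fits on a host with only 4 GB of
      # free local disk, because the 10 GB actually lives in Cinder.
      print(host_passes(4096, flavor, is_volume_backed=True))   # True
      print(host_passes(4096, flavor, is_volume_backed=False))  # False

With a check like this, the four boot-from-volume instances in the report would leave free_disk_gb at 44 rather than draining it to 4.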
[Yahoo-eng-team] [Bug 1525439] [NEW] Glance V2 API is not backwards compatible and breaks Cinder solidfire driver
Public bug reported:

In stable/kilo, the Glance API V2 change of the image-metadata is_public flag to visibility = public breaks the SolidFire driver (and possibly others, e.g. NetApp) that depend on the is_public flag. Specifically, this breaks the ability to handle images efficiently by caching them in the SolidFire cluster.

Changing the API back to V1 through the cinder.conf file then breaks Ceph, which depends on V2 and the image-metadata direct_url and locations fields to determine whether it can clone an image to a volume. So this breaks Ceph's ability to handle images efficiently.

This version mismatch does not allow SolidFire and Ceph to both be used efficiently in the same OpenStack cloud.

NOTE: openstack/puppet-cinder defaults to glance_api_version = 2, which lets Ceph work efficiently but not SolidFire (and others).

Mainly opening this bug to document the problem; since no changes are allowed to Kilo, there is probably no way to fix it there.

Code locations:
  cinder/cinder/image/glance.py lines 250-256
  cinder/cinder/volume/drivers/rbd.py line 827
  cinder/cinder/volume/drivers/solidfire.py line 647
  puppet-cinder/manifests/glance.pp line 59

** Affects: cinder
   Importance: Undecided
   Assignee: John Griffith (john-griffith)
   Status: Triaged

** Affects: glance
   Importance: Undecided
   Status: New

** Affects: puppet-cinder
   Importance: Undecided
   Status: New

** Tags: cinder puppet

** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: puppet-cinder
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1525439

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1525439/+subscriptions
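For context, a minimal Python sketch (not the actual SolidFire driver code) of a version-tolerant check that would let a driver work against either Glance API version: it accepts the v1 is_public flag as well as the v2 visibility field. The image_is_public helper is hypothetical.

  # Illustrative sketch only -- not code from cinder/volume/drivers/solidfire.py.
  # A driver that inspects image metadata could accept both Glance v1 and v2
  # forms instead of depending on the v1-only 'is_public' flag.

  def image_is_public(image_meta):
      """Return True if the image metadata marks the image as public.

      Handles Glance v1 metadata ({'is_public': True}) and Glance v2
      metadata ({'visibility': 'public'}).
      """
      if 'visibility' in image_meta:                        # Glance v2
          return image_meta['visibility'] == 'public'
      return bool(image_meta.get('is_public', False))       # Glance v1

  if __name__ == '__main__':
      print(image_is_public({'is_public': True}))        # True  (v1)
      print(image_is_public({'visibility': 'public'}))   # True  (v2)
      print(image_is_public({'visibility': 'private'}))  # False (v2)

A check like this would sidestep the cinder.conf glance_api_version toggle for the public/private case, though it would not by itself give Ceph the direct_url and locations fields it needs from V2.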