Re: [Openstack-operators] custom build image is slow

2017-08-01 Thread Paras pradhan
How do we install the virtio drivers if they're missing? And how do I verify
whether the CentOS cloud image already has them?
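
A quick way to check, as a rough sketch (assuming the libguestfs tools are
installed on the host and the image file is named custom.qcow2):

  # inside a running guest: the virtio_blk/virtio_net modules should show up here
  lsinitrd /boot/initramfs-$(uname -r).img | grep -i virtio
  lsmod | grep virtio

  # offline, directly against the image file
  virt-ls -R -a custom.qcow2 /lib/modules | grep -i virtio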

On Tue, Aug 1, 2017 at 12:19 PM, Abel Lopez  wrote:

> Your custom image is likely missing the virtIO drivers that the cloud
> image has.
>
> Instead of running through the DVD installer, I'd suggest checking out
> diskimage-builder to make custom images for use on Openstack.
>
> On Tue, Aug 1, 2017 at 10:16 AM Paras pradhan 
> wrote:
>
>> Also this is what I've noticed with the centos cloud image I downloaded.
>> If I add few packages( around a GB), the sizes goes up to 8GB from 1.3 GB.
>> Running dd if=/dev/zero of=temp;sync;rm temp failed due to the size of
>> the disk on the cloud images which is 10G. zeroing failed with No space
>> left on device.  Any other options?
>>
>> Thanks
>> Paras.
>>
>> On Tue, Aug 1, 2017 at 9:43 AM, Paras pradhan 
>> wrote:
>>
>>> I ran virt-sparsify. So before running virt-sparsify it was 6.0GB and
>>> after it is 1.3GB.
>>>
>>> Thanks
>>> Paras.
>>>
>>> On Tue, Aug 1, 2017 at 2:53 AM, Tomáš Vondra 
>>> wrote:
>>>
 Hi!

 How big are the actual image files? Because qcow2 is a sparse format,
 it does not store zeroes. If the free space in one image is zeroed out, it
 will convert much faster. If that is the problem, use „dd if=/dev/zero
 of=temp;sync;rm temp“ or zerofree.

 Tomas



 *From:* Paras pradhan [mailto:pradhanpa...@gmail.com]
 *Sent:* Monday, July 31, 2017 11:54 PM
 *To:* openstack-operators@lists.openstack.org
 *Subject:* [Openstack-operators] custom build image is slow



 Hello



 I have two qcow2 images uploaded to glance. One is the CentOS 7 cloud image
 downloaded from centos.org.  The other one is custom built using the
 CentOS 7 DVD.  When I create cinder volumes from them, volume creation from
 the custom-built image is very, very slow.





 CentOS qcow2:



 2017-07-31 21:42:44.287 881609 INFO cinder.image.image_utils
 [req-ea2d7b12-ae9e-45b2-8b4b-ea8465497d5a
 e090e605170a778610438bfabad7aa7764d0a77ef520ae392e2b59074c9f88cf
 490910c1d4e1486d8e3a62d7c0ae698e - d67a18e70dd9467db25b74d33feaad6d
 default] *Converted 8192.00 MB image at 253.19 MB/s*



 Custom built qcow2:

 INFO cinder.image.image_utils [req-032292d8-1500-474d-95c7-2e8424e2b864
 e090e605170a778610438bfabad7aa7764d0a77ef520ae392e2b59074c9f88cf
 490910c1d4e1486d8e3a62d7c0ae698e - d67a18e70dd9467db25b74d33feaad6d
 default] *Converted 10240.00 MB image at 32.22 MB/s*



 I used the following command to create the qcow2 file

 qemu-img create -f qcow2 custom.qcow2 10G



 What am I missing ?



 Thanks
 Paras.





>>>
>>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread John Petrini
Thanks for the info. It might have something to do with the Ceph version then.
We're running Hammer, and apparently the rbd du option wasn't added until
Infernalis.

John Petrini

On Tue, Aug 1, 2017 at 4:32 PM, Mike Lowe  wrote:

> Two things, first info does not show how much disk is used du does.
> Second, the semantics count, copy is different than clone and flatten.
> Clone and flatten which should happen if you have things working correctly
> is much faster than copy.  If you are using copy then you may be limited by
> the number of management ops in flight, this is a setting for more recent
> versions of ceph.  I don’t know if copy skips zero byte objects but clone
> and flatten certainly do.  You need to be sure that you have the proper
> settings in nova.conf for discard/unmap as well as using
> hw_scsi_model=virtio-scsi and hw_disk_bus=scsi in the image properties.
> Once discard is working and you have the qemu guest agent running in your
> instances you can force them to do a fstrim to reclaim space as an
> additional benefit.
>
> On Aug 1, 2017, at 3:50 PM, John Petrini  wrote:
>
> Maybe I'm just not understanding but when I create a nova snapshot the
> snapshot happens at RBD in the ephemeral pool and then it's copied to the
> images pool. This results in a full sized image rather than a snapshot with
> a reference to the parent.
>
> For example below is a snapshot of an ephemeral instance from our images
> pool. It's 80GB, the size of the instance, so rather than just capturing
> the state of the parent image I end up with a brand new image of the same
> size. It takes a long time to create this copy and causes high IO during
> the snapshot.
>
> rbd --pool images info d5404709-cb86-4743-b3d5-1dc7fba836c1
> rbd image 'd5404709-cb86-4743-b3d5-1dc7fba836c1':
> size 81920 MB in 20480 objects
> order 22 (4096 kB objects)
> block_name_prefix: rbd_data.93cdd43ca5efa8
> format: 2
> features: layering, striping
> flags:
> stripe unit: 4096 kB
> stripe count: 1
>
>
> John Petrini
>
> On Tue, Aug 1, 2017 at 3:24 PM, Mike Lowe  wrote:
>
>> There is no upload if you use Ceph to back your glance (like you should),
>> the snapshot is cloned from the ephemeral pool into the the images pool,
>> then flatten is run as a background task.  Net result is that creating a
>> 120GB image vs 8GB is slightly faster on my cloud but not at all what I’d
>> call painful.
>>
>> Running nova image-create for a 8GB image:
>>
>> real 0m2.712s
>> user 0m0.761s
>> sys 0m0.225s
>>
>> Running nova image-create for a 128GB image:
>>
>> real 0m2.436s
>> user 0m0.774s
>> sys 0m0.225s
>>
>>
>>
>>
>> On Aug 1, 2017, at 3:07 PM, John Petrini  wrote:
>>
>> Yes from Mitaka onward the snapshot happens at the RBD level which is
>> fast. It's the flattening and uploading of the image to glance that's the
>> major pain point. Still it's worlds better than the qemu snapshots to the
>> local disk prior to Mitaka.
>>
>> John Petrini
>>
>> Platforms Engineer   //   CoreDial, LLC   //   coredial.com
>> 751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
>> P: 215.297.4400 x232   //   F: 215.297.4401   //   E: jpetr...@coredial.com
>>
>>
>>
>>
>> On Tue, Aug 1, 2017 at 2:53 PM, Mike Lowe  wrote:
>>
>>> Strictly speaking I don’t think this is the case anymore for Mitaka or
>>> later.  Snapping nova does take more space as the image is flattened, but
>>> the dumb download then upload back into ceph has been cut out.  With
>>> careful attention paid to discard/TRIM I believe you can maintain the thin
>>> provisioning properties of RBD.  The workflow is explained here.
>>> https://www.sebastien-han.fr/blog/2015/10/05/openstack-nova
>>> -snapshots-on-ceph-rbd/
>>>
>>> On Aug 1, 2017, at 11:14 AM, John Petrini  wrote:
>>>
>>> Just my two cents here but we started out using mostly Ephemeral storage
>>> in our builds and looking back I wish we hadn't. Note we're using Ceph as a
>>> backend so my response is tailored towards Ceph's behavior.
>>>
>>> The major pain point is snapshots. When you snapshot an nova volume an
>>> RBD 

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Chris Friesen

On 08/01/2017 02:32 PM, Mike Lowe wrote:

Two things: first, info does not show how much disk is used; du does.  Second, the
semantics count: copy is different from clone-and-flatten.  Clone and flatten,
which should happen if you have things working correctly, is much faster than
copy.  If you are using copy then you may be limited by the number of management
ops in flight; this is a setting in more recent versions of Ceph.  I don’t know
whether copy skips zero-byte objects, but clone and flatten certainly do.  You need
to be sure that you have the proper settings in nova.conf for discard/unmap, as well
as using hw_scsi_model=virtio-scsi and hw_disk_bus=scsi in the image properties.
Once discard is working and you have the qemu guest agent running in your
instances, you can force them to do an fstrim to reclaim space as an additional
benefit.



Just a heads-up...with virtio-scsi there is a bug where you cannot boot from 
volume and then attach another volume.


(The bug is 1702999, though it's possible the fix for 1686116 will address it in 
which case it'd be fixed in pike.)


Chris

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] OpenStack Operators mid-cycle meet up #1 2018 - venue selected!

2017-08-01 Thread Chris Morgan
Dear All,
  The Ops Meetups team today confirmed the NTT proposal to host the first
2018 OpenStack Operators mid-cycle meetup in Tokyo on March 7th-8th. As a
reminder, that is as proposed here:

https://etherpad.openstack.org/p/ops-meetup-venue-discuss-1st-2018

The vote was unanimous; this proposal looks great.

Here are the meeting minutes:

Meeting ended Tue Aug 1 15:00:54 2017 UTC. Information about MeetBot at
http://wiki.debian.org/MeetBot . (v 0.1.4)
11:01 AM Minutes:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2017/ops_meetup_team.2017-08-01-14.00.html
11:01 AM Minutes (text):
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2017/ops_meetup_team.2017-08-01-14.00.txt
11:01 AM Log:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2017/ops_meetup_team.2017-08-01-14.00.log.html

Chris

-- 
Chris Morgan 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Mike Lowe
Two things: first, info does not show how much disk is used; du does.  Second,
the semantics count: copy is different from clone-and-flatten.  Clone and
flatten, which should happen if you have things working correctly, is much
faster than copy.  If you are using copy then you may be limited by the number
of management ops in flight; this is a setting in more recent versions of Ceph.
I don’t know whether copy skips zero-byte objects, but clone and flatten
certainly do.  You need to be sure that you have the proper settings in
nova.conf for discard/unmap, as well as using hw_scsi_model=virtio-scsi and
hw_disk_bus=scsi in the image properties.  Once discard is working and you have
the qemu guest agent running in your instances, you can force them to do an
fstrim to reclaim space as an additional benefit.
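
For reference, a rough sketch of the two pieces mentioned above (the image ID
and pool name are placeholders, not taken from this thread):

  # tag the image so instances get a virtio-scsi disk that supports discard
  openstack image set --property hw_scsi_model=virtio-scsi \
      --property hw_disk_bus=scsi <image-id>

  # compare provisioned vs. actually used space for an RBD image
  rbd du --pool images <image-id>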

> On Aug 1, 2017, at 3:50 PM, John Petrini  wrote:
> 
> Maybe I'm just not understanding but when I create a nova snapshot the 
> snapshot happens at RBD in the ephemeral pool and then it's copied to the 
> images pool. This results in a full sized image rather than a snapshot with a 
> reference to the parent.
> 
> For example below is a snapshot of an ephemeral instance from our images 
> pool. It's 80GB, the size of the instance, so rather than just capturing the 
> state of the parent image I end up with a brand new image of the same size. 
> It takes a long time to create this copy and causes high IO during the 
> snapshot.
> 
> rbd --pool images info d5404709-cb86-4743-b3d5-1dc7fba836c1
> rbd image 'd5404709-cb86-4743-b3d5-1dc7fba836c1':
>   size 81920 MB in 20480 objects
>   order 22 (4096 kB objects)
>   block_name_prefix: rbd_data.93cdd43ca5efa8
>   format: 2
>   features: layering, striping
>   flags: 
>   stripe unit: 4096 kB
>   stripe count: 1
> 
> 
> John Petrini
> 
> 
> On Tue, Aug 1, 2017 at 3:24 PM, Mike Lowe  > wrote:
> There is no upload if you use Ceph to back your glance (like you should), the 
> snapshot is cloned from the ephemeral pool into the the images pool, then 
> flatten is run as a background task.  Net result is that creating a 120GB 
> image vs 8GB is slightly faster on my cloud but not at all what I’d call 
> painful.
> 
> Running nova image-create for a 8GB image:
> 
> real  0m2.712s
> user  0m0.761s
> sys   0m0.225s
> 
> Running nova image-create for a 128GB image:
> 
> real  0m2.436s
> user  0m0.774s
> sys   0m0.225s
> 
> 
> 
> 
>> On Aug 1, 2017, at 3:07 PM, John Petrini > > wrote:
>> 
>> Yes from Mitaka onward the snapshot happens at the RBD level which is fast. 
>> It's the flattening and uploading of the image to glance that's the major 
>> pain point. Still it's worlds better than the qemu snapshots to the local 
>> disk prior to Mitaka.
>> 
>> John Petrini
>> 
>> Platforms Engineer   //   CoreDial, LLC   //   coredial.com
>> 751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
>> P: 215.297.4400 x232   //   F: 215.297.4401   //   E: jpetr...@coredial.com
>> 
>> 
>> 
>>  
>> 
>> On Tue, Aug 1, 2017 at 2:53 PM, Mike Lowe > > wrote:
>> Strictly speaking I don’t think this is the case anymore for Mitaka or 
>> later.  Snapping nova does take more space as the image is flattened, but 
>> the dumb download then upload back into ceph has been cut out.  With careful 
>> attention paid to discard/TRIM I believe you can maintain the thin 
>> provisioning properties of RBD.  The workflow is explained here.  
>> https://www.sebastien-han.fr/blog/2015/10/05/openstack-nova-snapshots-on-ceph-rbd/
>>  
>> 
>> 
>>> On Aug 1, 2017, at 11:14 AM, John Petrini >> > wrote:
>>> 
>>> Just my two cents here but we started out using mostly Ephemeral storage in 
>>> our builds and looking back I wish we hadn't. Note we're using Ceph as a 
>>> backend so my response is tailored towards Ceph's behavior.
>>> 
>>> The major pain point is snapshots. When 

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Matt Riedemann

On 8/1/2017 10:47 AM, Sean McGinnis wrote:

Some sort of good news there. Starting with the Pike release, you will now
be able to extend an attached volume. As long as both Cinder and Nova are
at Pike or later, this should now be allowed.


And you're using the libvirt compute driver in Nova, and the volume type 
is iscsi or fibrechannel...


--

Thanks,

Matt

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread John Petrini
Maybe I'm just not understanding, but when I create a nova snapshot the
snapshot happens at the RBD level in the ephemeral pool and then it's copied to
the images pool. This results in a full-sized image rather than a snapshot with
a reference to the parent.

For example, below is a snapshot of an ephemeral instance from our images
pool. It's 80GB, the size of the instance, so rather than just capturing
the state of the parent image I end up with a brand new image of the same
size. It takes a long time to create this copy and causes high IO during
the snapshot.

rbd --pool images info d5404709-cb86-4743-b3d5-1dc7fba836c1
rbd image 'd5404709-cb86-4743-b3d5-1dc7fba836c1':
size 81920 MB in 20480 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.93cdd43ca5efa8
format: 2
features: layering, striping
flags:
stripe unit: 4096 kB
stripe count: 1


John Petrini

On Tue, Aug 1, 2017 at 3:24 PM, Mike Lowe  wrote:

> There is no upload if you use Ceph to back your glance (like you should),
> the snapshot is cloned from the ephemeral pool into the the images pool,
> then flatten is run as a background task.  Net result is that creating a
> 120GB image vs 8GB is slightly faster on my cloud but not at all what I’d
> call painful.
>
> Running nova image-create for a 8GB image:
>
> real 0m2.712s
> user 0m0.761s
> sys 0m0.225s
>
> Running nova image-create for a 128GB image:
>
> real 0m2.436s
> user 0m0.774s
> sys 0m0.225s
>
>
>
>
> On Aug 1, 2017, at 3:07 PM, John Petrini  wrote:
>
> Yes from Mitaka onward the snapshot happens at the RBD level which is
> fast. It's the flattening and uploading of the image to glance that's the
> major pain point. Still it's worlds better than the qemu snapshots to the
> local disk prior to Mitaka.
>
> John Petrini
>
> Platforms Engineer   //   CoreDial, LLC   //   coredial.com
> 751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
> P: 215.297.4400 x232   //   F: 215.297.4401   //   E: jpetr...@coredial.com
> 
>
>
>
>
> On Tue, Aug 1, 2017 at 2:53 PM, Mike Lowe  wrote:
>
>> Strictly speaking I don’t think this is the case anymore for Mitaka or
>> later.  Snapping nova does take more space as the image is flattened, but
>> the dumb download then upload back into ceph has been cut out.  With
>> careful attention paid to discard/TRIM I believe you can maintain the thin
>> provisioning properties of RBD.  The workflow is explained here.
>> https://www.sebastien-han.fr/blog/2015/10/05/openstack-nova
>> -snapshots-on-ceph-rbd/
>>
>> On Aug 1, 2017, at 11:14 AM, John Petrini  wrote:
>>
>> Just my two cents here but we started out using mostly Ephemeral storage
>> in our builds and looking back I wish we hadn't. Note we're using Ceph as a
>> backend so my response is tailored towards Ceph's behavior.
>>
>> The major pain point is snapshots. When you snapshot an nova volume an
>> RBD snapshot occurs and is very quick and uses very little additional
>> storage, however the snapshot is then copied into the images pool and in
>> the process is converted from a snapshot to a full size image. This takes a
>> long time because you have to copy a lot of data and it takes up a lot of
>> space. It also causes a great deal of IO on the storage and means you end
>> up with a bunch of "snapshot images" creating clutter. On the other hand
>> volume snapshots are near instantaneous without the other drawbacks I've
>> mentioned.
>>
>> On the plus side for ephemeral storage; resizing the root disk of images
>> works better. As long as your image is configured properly it's just a
>> matter of initiating a resize and letting the instance reboot to grow the
>> root disk. When using volumes as your root disk you instead have to
>> shutdown the instance, grow the volume and boot.
>>
>> I hope this help! If anyone on the list knows something I don't know
>> regarding these issues please chime in. I'd love to know if there's a
>> better way.
>>
>> Regards,
>>
>> John Petrini
>>
>> On Tue, Aug 1, 2017 at 10:50 AM, Kimball, Conrad <
>> conrad.kimb...@boeing.com> wrote:
>>
>>> In our process of standing up an OpenStack internal cloud we are 

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Mike Lowe
There is no upload if you use Ceph to back your glance (like you should), the 
snapshot is cloned from the ephemeral pool into the the images pool, then 
flatten is run as a background task.  Net result is that creating a 120GB image 
vs 8GB is slightly faster on my cloud but not at all what I’d call painful.

Running nova image-create for a 8GB image:

real0m2.712s
user0m0.761s
sys 0m0.225s

Running nova image-create for a 128GB image:

real0m2.436s
user0m0.774s
sys 0m0.225s




> On Aug 1, 2017, at 3:07 PM, John Petrini  wrote:
> 
> Yes from Mitaka onward the snapshot happens at the RBD level which is fast. 
> It's the flattening and uploading of the image to glance that's the major 
> pain point. Still it's worlds better than the qemu snapshots to the local 
> disk prior to Mitaka.
> 
> John Petrini
> 
> Platforms Engineer   //   CoreDial, LLC   //   coredial.com
> 751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
> P: 215.297.4400 x232   //   F: 215.297.4401   //   E: jpetr...@coredial.com
> 
> 
> 
>  
> 
> 
> On Tue, Aug 1, 2017 at 2:53 PM, Mike Lowe  > wrote:
> Strictly speaking I don’t think this is the case anymore for Mitaka or later. 
>  Snapping nova does take more space as the image is flattened, but the dumb 
> download then upload back into ceph has been cut out.  With careful attention 
> paid to discard/TRIM I believe you can maintain the thin provisioning 
> properties of RBD.  The workflow is explained here.  
> https://www.sebastien-han.fr/blog/2015/10/05/openstack-nova-snapshots-on-ceph-rbd/
>  
> 
> 
>> On Aug 1, 2017, at 11:14 AM, John Petrini > > wrote:
>> 
>> Just my two cents here but we started out using mostly Ephemeral storage in 
>> our builds and looking back I wish we hadn't. Note we're using Ceph as a 
>> backend so my response is tailored towards Ceph's behavior.
>> 
>> The major pain point is snapshots. When you snapshot an nova volume an RBD 
>> snapshot occurs and is very quick and uses very little additional storage, 
>> however the snapshot is then copied into the images pool and in the process 
>> is converted from a snapshot to a full size image. This takes a long time 
>> because you have to copy a lot of data and it takes up a lot of space. It 
>> also causes a great deal of IO on the storage and means you end up with a 
>> bunch of "snapshot images" creating clutter. On the other hand volume 
>> snapshots are near instantaneous without the other drawbacks I've mentioned.
>> 
>> On the plus side for ephemeral storage; resizing the root disk of images 
>> works better. As long as your image is configured properly it's just a 
>> matter of initiating a resize and letting the instance reboot to grow the 
>> root disk. When using volumes as your root disk you instead have to shutdown 
>> the instance, grow the volume and boot.
>> 
>> I hope this help! If anyone on the list knows something I don't know 
>> regarding these issues please chime in. I'd love to know if there's a better 
>> way.
>> 
>> Regards,
>> John Petrini
>> 
>> 
>> On Tue, Aug 1, 2017 at 10:50 AM, Kimball, Conrad > > wrote:
>> In our process of standing up an OpenStack internal cloud we are facing the 
>> question of ephemeral storage vs. Cinder volumes for instance root disks.
>> 
>>  
>> 
>> As I look at public clouds such as AWS and Azure, the norm is to use 
>> persistent volumes for the root disk.  AWS started out with images booting 
>> onto ephemeral disk, but soon after they released Elastic Block Storage and 
>> ever since the clear trend has been to EBS-backed instances, and now when I 
>> look at their quick-start list of 33 AMIs, all of them are EBS-backed.  And 
>> I’m not even sure one can have anything except persistent root disks in 
>> Azure VMs.
>> 
>>  
>> 
>> Based on this and a number of other factors I think we want our user normal 
>> / default behavior to boot onto Cinder-backed volumes instead of onto 
>> ephemeral storage.  But then I 

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread John Petrini
Yes, from Mitaka onward the snapshot happens at the RBD level, which is fast.
It's the flattening and uploading of the image to glance that's the major
pain point. Still, it's worlds better than the qemu snapshots to the local
disk prior to Mitaka.

John Petrini

Platforms Engineer   //   CoreDial, LLC   //   coredial.com
751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
P: 215.297.4400 x232   //   F: 215.297.4401   //   E: jpetr...@coredial.com





On Tue, Aug 1, 2017 at 2:53 PM, Mike Lowe  wrote:

> Strictly speaking I don’t think this is the case anymore for Mitaka or
> later.  Snapping nova does take more space as the image is flattened, but
> the dumb download then upload back into ceph has been cut out.  With
> careful attention paid to discard/TRIM I believe you can maintain the thin
> provisioning properties of RBD.  The workflow is explained here.
> https://www.sebastien-han.fr/blog/2015/10/05/openstack-
> nova-snapshots-on-ceph-rbd/
>
> On Aug 1, 2017, at 11:14 AM, John Petrini  wrote:
>
> Just my two cents here but we started out using mostly Ephemeral storage
> in our builds and looking back I wish we hadn't. Note we're using Ceph as a
> backend so my response is tailored towards Ceph's behavior.
>
> The major pain point is snapshots. When you snapshot an nova volume an RBD
> snapshot occurs and is very quick and uses very little additional storage,
> however the snapshot is then copied into the images pool and in the process
> is converted from a snapshot to a full size image. This takes a long time
> because you have to copy a lot of data and it takes up a lot of space. It
> also causes a great deal of IO on the storage and means you end up with a
> bunch of "snapshot images" creating clutter. On the other hand volume
> snapshots are near instantaneous without the other drawbacks I've mentioned.
>
> On the plus side for ephemeral storage; resizing the root disk of images
> works better. As long as your image is configured properly it's just a
> matter of initiating a resize and letting the instance reboot to grow the
> root disk. When using volumes as your root disk you instead have to
> shutdown the instance, grow the volume and boot.
>
> I hope this help! If anyone on the list knows something I don't know
> regarding these issues please chime in. I'd love to know if there's a
> better way.
>
> Regards,
>
> John Petrini
>
> On Tue, Aug 1, 2017 at 10:50 AM, Kimball, Conrad <
> conrad.kimb...@boeing.com> wrote:
>
>> In our process of standing up an OpenStack internal cloud we are facing
>> the question of ephemeral storage vs. Cinder volumes for instance root
>> disks.
>>
>>
>>
>> As I look at public clouds such as AWS and Azure, the norm is to use
>> persistent volumes for the root disk.  AWS started out with images booting
>> onto ephemeral disk, but soon after they released Elastic Block Storage and
>> ever since the clear trend has been to EBS-backed instances, and now when I
>> look at their quick-start list of 33 AMIs, all of them are EBS-backed.  And
>> I’m not even sure one can have anything except persistent root disks in
>> Azure VMs.
>>
>>
>>
>> Based on this and a number of other factors I think we want our user
>> normal / default behavior to boot onto Cinder-backed volumes instead of
>> onto ephemeral storage.  But then I look at OpenStack and its design point
>> appears to be booting images onto ephemeral storage, and while it is
>> possible to boot an image onto a new volume this is clumsy (haven’t found a
>> way to make this the default behavior) and we are experiencing performance
>> problems (that admittedly we have not yet run to ground).
>>
>>
>>
>> So …
>>
>> · Are other operators routinely booting onto Cinder volumes
>> instead of ephemeral storage?
>>
>> · What has been your experience with this; any advice?
>>
>>
>>
>> *Conrad Kimball*
>>
>> Associate Technical Fellow
>>
>> Chief Architect, Enterprise Cloud Services
>>
>> Application Infrastructure Services / Global IT Infrastructure /
>> Information Technology & Data Analytics
>>
>> conrad.kimb...@boeing.com
>>
>> P.O. Box 3707, Mail Code 7M-TE
>>
>> Seattle, WA  98124-2207
>>
>> Bellevue 33-11 bldg, office 3A6-3.9
>>
>> 

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Mike Lowe
Strictly speaking I don’t think this is the case anymore for Mitaka or later.  
Snapping nova does take more space as the image is flattened, but the dumb 
download then upload back into ceph has been cut out.  With careful attention 
paid to discard/TRIM I believe you can maintain the thin provisioning 
properties of RBD.  The workflow is explained here.  
https://www.sebastien-han.fr/blog/2015/10/05/openstack-nova-snapshots-on-ceph-rbd/
 


> On Aug 1, 2017, at 11:14 AM, John Petrini  wrote:
> 
> Just my two cents here but we started out using mostly Ephemeral storage in 
> our builds and looking back I wish we hadn't. Note we're using Ceph as a 
> backend so my response is tailored towards Ceph's behavior.
> 
> The major pain point is snapshots. When you snapshot an nova volume an RBD 
> snapshot occurs and is very quick and uses very little additional storage, 
> however the snapshot is then copied into the images pool and in the process 
> is converted from a snapshot to a full size image. This takes a long time 
> because you have to copy a lot of data and it takes up a lot of space. It 
> also causes a great deal of IO on the storage and means you end up with a 
> bunch of "snapshot images" creating clutter. On the other hand volume 
> snapshots are near instantaneous without the other drawbacks I've mentioned.
> 
> On the plus side for ephemeral storage; resizing the root disk of images 
> works better. As long as your image is configured properly it's just a matter 
> of initiating a resize and letting the instance reboot to grow the root disk. 
> When using volumes as your root disk you instead have to shutdown the 
> instance, grow the volume and boot.
> 
> I hope this help! If anyone on the list knows something I don't know 
> regarding these issues please chime in. I'd love to know if there's a better 
> way.
> 
> Regards,
> John Petrini
> 
> 
> On Tue, Aug 1, 2017 at 10:50 AM, Kimball, Conrad  > wrote:
> In our process of standing up an OpenStack internal cloud we are facing the 
> question of ephemeral storage vs. Cinder volumes for instance root disks.
> 
>  
> 
> As I look at public clouds such as AWS and Azure, the norm is to use 
> persistent volumes for the root disk.  AWS started out with images booting 
> onto ephemeral disk, but soon after they released Elastic Block Storage and 
> ever since the clear trend has been to EBS-backed instances, and now when I 
> look at their quick-start list of 33 AMIs, all of them are EBS-backed.  And 
> I’m not even sure one can have anything except persistent root disks in Azure 
> VMs.
> 
>  
> 
> Based on this and a number of other factors I think we want our user normal / 
> default behavior to boot onto Cinder-backed volumes instead of onto ephemeral 
> storage.  But then I look at OpenStack and its design point appears to be 
> booting images onto ephemeral storage, and while it is possible to boot an 
> image onto a new volume this is clumsy (haven’t found a way to make this the 
> default behavior) and we are experiencing performance problems (that 
> admittedly we have not yet run to ground).
> 
>  
> 
> So …
> 
> · Are other operators routinely booting onto Cinder volumes instead 
> of ephemeral storage?
> 
> · What has been your experience with this; any advice?
> 
>  
> 
> Conrad Kimball
> 
> Associate Technical Fellow
> 
> Chief Architect, Enterprise Cloud Services
> 
> Application Infrastructure Services / Global IT Infrastructure / Information 
> Technology & Data Analytics
> 
> conrad.kimb...@boeing.com 
> P.O. Box 3707, Mail Code 7M-TE
> 
> Seattle, WA  98124-2207
> 
> Bellevue 33-11 bldg, office 3A6-3.9
> 
> Mobile:  425-591-7802 
>  
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators 
> 
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [puppet] PTL wanted

2017-08-01 Thread Emilien Macchi
It's an unusual request but we need a new PTL for Queens.
Alex Schultz and I have been leading Puppet OpenStack modules for some
time now and it's time to rotate.
We know you're out there consuming (and contributing to) the modules -
if you want this project to survive, it's time to step up and give
some help.

Basically, we have enough reviewers for all the patches that come in -
no worries on this side.
What we need is a name to be the official PTL for Queens. The Puppet
OpenStack PTL is responsible for making sure we release on time (Alex
wrote scripts to release modules in one click, so no worries) and that we have
rooms at the PTG if needed (we didn't request one for Denver; not sure
we'll need one in the future).
We don't have a weekly meeting anymore, so really the workload isn't bad at all.

Since I've decided to not step up for PTL during Queens, what's going
to happen is that:
- option 1: Someone volunteers to learn and do it, with the full
support from me and Alex.
- option 2: Alex will step up because he's nice - but it's adding bits
in his (already too busy) buffer.
- option 3: I'll step up, but will make strict minimum to meet
requirements from TC.
- option 4: Set the project in maintenance mode.
- option 5: Remove the project from OpenStack governance.

I hope we can have volunteers in our great community, thanks in advance!
-- 
Emilien Macchi

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] custom build image is slow

2017-08-01 Thread Abel Lopez
Your custom image is likely missing the virtIO drivers that the cloud image
has.

Instead of running through the DVD installer, I'd suggest checking out
diskimage-builder to make custom images for use on Openstack.
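
A minimal invocation looks something like this (a rough sketch; element names
and the -p package list are illustrative and vary by diskimage-builder version):

  pip install diskimage-builder
  disk-image-create -o centos7-custom -p vim,tmux centos7 vm

The centos7 element starts from the upstream cloud image, so the result should
keep cloud-init and the virtio drivers intact.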

On Tue, Aug 1, 2017 at 10:16 AM Paras pradhan 
wrote:

> Also this is what I've noticed with the centos cloud image I downloaded.
> If I add few packages( around a GB), the sizes goes up to 8GB from 1.3 GB.
> Running dd if=/dev/zero of=temp;sync;rm temp failed due to the size of
> the disk on the cloud images which is 10G. zeroing failed with No space
> left on device.  Any other options?
>
> Thanks
> Paras.
>
> On Tue, Aug 1, 2017 at 9:43 AM, Paras pradhan 
> wrote:
>
>> I ran virt-sparsify. So before running virt-sparsify it was 6.0GB and
>> after it is 1.3GB.
>>
>> Thanks
>> Paras.
>>
>> On Tue, Aug 1, 2017 at 2:53 AM, Tomáš Vondra 
>> wrote:
>>
>>> Hi!
>>>
>>> How big are the actual image files? Because qcow2 is a sparse format, it
>>> does not store zeroes. If the free space in one image is zeroed out, it
>>> will convert much faster. If that is the problem, use „dd if=/dev/zero
>>> of=temp;sync;rm temp“ or zerofree.
>>>
>>> Tomas
>>>
>>>
>>>
>>> *From:* Paras pradhan [mailto:pradhanpa...@gmail.com]
>>> *Sent:* Monday, July 31, 2017 11:54 PM
>>> *To:* openstack-operators@lists.openstack.org
>>> *Subject:* [Openstack-operators] custom build image is slow
>>>
>>>
>>>
>>> Hello
>>>
>>>
>>>
>>> I have two qcow2 images uploaded to glance. One is CentOS 7 cloud image
>>> downloaded from centos.org.  The other one is custom built using CentOS
>>> 7.DVD.  When I create cinder volumes from them, volume creation from the
>>> custom built image it is very very slow.
>>>
>>>
>>>
>>>
>>>
>>> CenOS qcow2:
>>>
>>>
>>>
>>> 2017-07-31 21:42:44.287 881609 INFO cinder.image.image_utils
>>> [req-ea2d7b12-ae9e-45b2-8b4b-ea8465497d5a
>>> e090e605170a778610438bfabad7aa7764d0a77ef520ae392e2b59074c9f88cf
>>> 490910c1d4e1486d8e3a62d7c0ae698e - d67a18e70dd9467db25b74d33feaad6d
>>> default] *Converted 8192.00 MB image at 253.19 MB/s*
>>>
>>>
>>>
>>> Custom built qcow2:
>>>
>>> INFO cinder.image.image_utils [req-032292d8-1500-474d-95c7-2e8424e2b864
>>> e090e605170a778610438bfabad7aa7764d0a77ef520ae392e2b59074c9f88cf
>>> 490910c1d4e1486d8e3a62d7c0ae698e - d67a18e70dd9467db25b74d33feaad6d
>>> default] *Converted 10240.00 MB image at 32.22 MB/s*
>>>
>>>
>>>
>>> I used the following command to create the qcow2 file
>>>
>>> qemu-img create -f qcow2 custom.qcow2 10G
>>>
>>>
>>>
>>> What am I missing ?
>>>
>>>
>>>
>>> Thanks
>>> Paras.
>>>
>>>
>>>
>>>
>>>
>>
>>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] custom build image is slow

2017-08-01 Thread Paras pradhan
Also, this is what I've noticed with the CentOS cloud image I downloaded. If
I add a few packages (around a GB), the size goes up to 8GB from 1.3GB.
Running dd if=/dev/zero of=temp;sync;rm temp failed due to the size of the
disk on the cloud images, which is 10G; zeroing failed with "No space left on
device".  Any other options?

Thanks
Paras.

On Tue, Aug 1, 2017 at 9:43 AM, Paras pradhan 
wrote:

> I ran virt-sparsify. So before running virt-sparsify it was 6.0GB and
> after it is 1.3GB.
>
> Thanks
> Paras.
>
> On Tue, Aug 1, 2017 at 2:53 AM, Tomáš Vondra 
> wrote:
>
>> Hi!
>>
>> How big are the actual image files? Because qcow2 is a sparse format, it
>> does not store zeroes. If the free space in one image is zeroed out, it
>> will convert much faster. If that is the problem, use „dd if=/dev/zero
>> of=temp;sync;rm temp“ or zerofree.
>>
>> Tomas
>>
>>
>>
>> *From:* Paras pradhan [mailto:pradhanpa...@gmail.com]
>> *Sent:* Monday, July 31, 2017 11:54 PM
>> *To:* openstack-operators@lists.openstack.org
>> *Subject:* [Openstack-operators] custom build image is slow
>>
>>
>>
>> Hello
>>
>>
>>
>> I have two qcow2 images uploaded to glance. One is CentOS 7 cloud image
>> downloaded from centos.org.  The other one is custom built using CentOS
>> 7.DVD.  When I create cinder volumes from them, volume creation from the
>> custom built image it is very very slow.
>>
>>
>>
>>
>>
>> CenOS qcow2:
>>
>>
>>
>> 2017-07-31 21:42:44.287 881609 INFO cinder.image.image_utils
>> [req-ea2d7b12-ae9e-45b2-8b4b-ea8465497d5a e090e605170a778610438bfabad7aa
>> 7764d0a77ef520ae392e2b59074c9f88cf 490910c1d4e1486d8e3a62d7c0ae698e -
>> d67a18e70dd9467db25b74d33feaad6d default] *Converted 8192.00 MB image at
>> 253.19 MB/s*
>>
>>
>>
>> Custom built qcow2:
>>
>> INFO cinder.image.image_utils [req-032292d8-1500-474d-95c7-2e8424e2b864
>> e090e605170a778610438bfabad7aa7764d0a77ef520ae392e2b59074c9f88cf
>> 490910c1d4e1486d8e3a62d7c0ae698e - d67a18e70dd9467db25b74d33feaad6d
>> default] *Converted 10240.00 MB image at 32.22 MB/s*
>>
>>
>>
>> I used the following command to create the qcow2 file
>>
>> qemu-img create -f qcow2 custom.qcow2 10G
>>
>>
>>
>> What am I missing ?
>>
>>
>>
>> Thanks
>> Paras.
>>
>>
>>
>>
>>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Sean McGinnis
> 
> >·What has been your experience with this; any advice?
> 
> It works fine.  With Horizon you can do it in one step (select the image but
> tell it to boot from volume) but with the CLI I think you need two steps
> (make the volume from the image, then boot from the volume).  The extra
> steps are a moot point if you are booting programmatically (from a custom
> script or something like heat).
> 

One thing to keep in mind when using Horizon for this - there's currently
no way in Horizon to specify the volume type you would like to use for
creating this boot volume. So it will always only use the default volume
type.

That may be fine if you only have one, but if you have multiple backends,
or multiple settings controlled by volume types, then you will probably
want to use the CLI method for creating your boot volumes.
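
As a sketch of that CLI flow (image, type, and size are placeholders; --type /
--volume-type is what selects the non-default volume type):

  cinder create --image-id <image-uuid> --volume-type <type-name> --name boot-vol 20
  # or, with the unified client:
  openstack volume create --image <image> --type <type-name> --size 20 boot-vol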

There has been some discussion about creating a Nova driver to just use
Cinder for ephemeral storage. There are some design challenges with how
to best implement that, but if operators are interested, it would be
great to hear that at the Forum and elsewhere so we can help raise the
priority of that between teams.

Sean

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Jay Pipes

On 08/01/2017 11:14 AM, John Petrini wrote:
Just my two cents here but we started out using mostly Ephemeral storage 
in our builds and looking back I wish we hadn't. Note we're using Ceph 
as a backend so my response is tailored towards Ceph's behavior.


The major pain point is snapshots. When you snapshot a nova volume, an
RBD snapshot occurs and is very quick and uses very little additional
storage, however the snapshot is then copied into the images pool and in 
the process is converted from a snapshot to a full size image. This 
takes a long time because you have to copy a lot of data and it takes up 
a lot of space. It also causes a great deal of IO on the storage and 
means you end up with a bunch of "snapshot images" creating clutter. On 
the other hand volume snapshots are near instantaneous without the other 
drawbacks I've mentioned.


On the plus side for ephemeral storage; resizing the root disk of images 
works better. As long as your image is configured properly it's just a 
matter of initiating a resize and letting the instance reboot to grow 
the root disk. When using volumes as your root disk you instead have to 
shutdown the instance, grow the volume and boot.


I hope this helps! If anyone on the list knows something I don't know
regarding these issues please chime in. I'd love to know if there's a 
better way.


I'd just like to point out that the above is exactly the right way to 
think about things.


Don't boot from volume (i.e. don't use a volume as your root disk).

Instead, separate the operating system from your application data. Put 
the operating system on a small disk image (small == fast boot times), 
use a config drive for injectable configuration and create Cinder 
volumes for your application data.


Detach and attach the application data Cinder volume as needed to your 
server instance. Make your life easier by not coupling application data 
and the operating system together.
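
For the attach/detach flow, something like this works with the unified CLI
(server and volume names are placeholders):

  openstack volume create --size 100 appdata
  openstack server add volume myserver appdata
  # ...later, before rebuilding or replacing the server:
  openstack server remove volume myserver appdata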


Best,
-jay

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Sean McGinnis
One other thing to think about - I think at least starting with the Mitaka
release, we added a feature called image volume cache. So if you create a
boot volume, the first time you do so it takes some time as the image is
pulled down and written to the backend volume.

With image volume cache enabled, that still happens on the first volume
creation of the image. But then any subsequent volume creations on that
backend for that image will be much, much faster.

This is something that needs to be configured. Details can be found here:

https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html
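
Roughly, the relevant cinder.conf pieces look like this (the backend section
name and limits are illustrative; the internal tenant IDs must point at a real
project and user):

  [DEFAULT]
  cinder_internal_tenant_project_id = <project-uuid>
  cinder_internal_tenant_user_id = <user-uuid>

  [your-backend]
  image_volume_cache_enabled = True
  image_volume_cache_max_size_gb = 200
  image_volume_cache_max_count = 50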

Sean

On Tue, Aug 01, 2017 at 10:47:26AM -0500, Sean McGinnis wrote:
> On Tue, Aug 01, 2017 at 11:14:03AM -0400, John Petrini wrote:
> > 
> > On the plus side for ephemeral storage; resizing the root disk of images
> > works better. As long as your image is configured properly it's just a
> > matter of initiating a resize and letting the instance reboot to grow the
> > root disk. When using volumes as your root disk you instead have to
> > shutdown the instance, grow the volume and boot.
> > 
> 
> Some sort of good news there. Starting with the Pike release, you will now
> be able to extend an attached volume. As long as both Cinder and Nova are
> at Pike or later, this should now be allowed.
> 
> Sean
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Sean McGinnis
On Tue, Aug 01, 2017 at 11:14:03AM -0400, John Petrini wrote:
> 
> On the plus side for ephemeral storage; resizing the root disk of images
> works better. As long as your image is configured properly it's just a
> matter of initiating a resize and letting the instance reboot to grow the
> root disk. When using volumes as your root disk you instead have to
> shutdown the instance, grow the volume and boot.
> 

Some sort of good news there. Starting with the Pike release, you will now
be able to extend an attached volume. As long as both Cinder and Nova are
at Pike or later, this should now be allowed.
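
Once everything is at Pike, the resize itself is a one-liner (volume name and
size are placeholders; extending an in-use volume needs a recent volume API
microversion, e.g.):

  cinder --os-volume-api-version 3.42 extend boot-vol 40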

Sean

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Mike Smith
At Overstock we do both, in different clouds.  Our preferred option is a Ceph 
backend for Nova ephemeral storage.  We like it because it is fast to boot and 
makes resize easy.  Our use case doesn’t require snapshots nor do we have a 
need for keeping the data around if a server needs to be rebuilt.   It may not 
work for other people, but it works well for us.

In some of our other clouds, where we don’t have Ceph available, we do use
Cinder volumes for booting VMs off of backend SAN services.  It works OK, but
there are a few pain points in regard to disk resizing - it’s a bit of a
cumbersome process compared to the experience with Nova ephemeral.  Depending on
the solution used, creating the volume for boot can take much, much longer, and
that can be annoying.  On the plus side, Cinder does allow you to do QoS to
limit I/O, whereas I do not believe that’s an option with Nova ephemeral.  And,
again depending on the Cinder solution employed, the disk I/O for this kind of
setup can be significantly better than some other options, including Nova
ephemeral with a Ceph backend.
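
The QoS limits mentioned above hang off a volume type; a rough sketch with the
unified client (names and numbers are placeholders):

  openstack volume qos create --consumer front-end --property total_iops_sec=1000 limited-iops
  openstack volume qos associate limited-iops <volume-type>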

Bottom line:  it depends what you need, as both options work well and there are 
people doing both out there in the wild.

Good luck!


On Aug 1, 2017, at 9:14 AM, John Petrini 
> wrote:

Just my two cents here but we started out using mostly Ephemeral storage in our 
builds and looking back I wish we hadn't. Note we're using Ceph as a backend so 
my response is tailored towards Ceph's behavior.

The major pain point is snapshots. When you snapshot a nova volume, an RBD
snapshot occurs and is very quick and uses very little additional storage,
however the snapshot is then copied into the images pool and in the process is 
converted from a snapshot to a full size image. This takes a long time because 
you have to copy a lot of data and it takes up a lot of space. It also causes a 
great deal of IO on the storage and means you end up with a bunch of "snapshot 
images" creating clutter. On the other hand volume snapshots are near 
instantaneous without the other drawbacks I've mentioned.

On the plus side for ephemeral storage; resizing the root disk of images works 
better. As long as your image is configured properly it's just a matter of 
initiating a resize and letting the instance reboot to grow the root disk. When 
using volumes as your root disk you instead have to shutdown the instance, grow 
the volume and boot.

I hope this helps! If anyone on the list knows something I don't know regarding
these issues please chime in. I'd love to know if there's a better way.

Regards,

John Petrini



On Tue, Aug 1, 2017 at 10:50 AM, Kimball, Conrad 
> wrote:
In our process of standing up an OpenStack internal cloud we are facing the 
question of ephemeral storage vs. Cinder volumes for instance root disks.

As I look at public clouds such as AWS and Azure, the norm is to use persistent 
volumes for the root disk.  AWS started out with images booting onto ephemeral 
disk, but soon after they released Elastic Block Storage and ever since the 
clear trend has been to EBS-backed instances, and now when I look at their 
quick-start list of 33 AMIs, all of them are EBS-backed.  And I’m not even sure 
one can have anything except persistent root disks in Azure VMs.

Based on this and a number of other factors I think we want our user normal / 
default behavior to boot onto Cinder-backed volumes instead of onto ephemeral 
storage.  But then I look at OpenStack and its design point appears to be 
booting images onto ephemeral storage, and while it is possible to boot an 
image onto a new volume this is clumsy (haven’t found a way to make this the 
default behavior) and we are experiencing performance problems (that admittedly 
we have not yet run to ground).

So …

• Are other operators routinely booting onto Cinder volumes instead of 
ephemeral storage?

• What has been your experience with this; any advice?

Conrad Kimball
Associate Technical Fellow
Chief Architect, Enterprise Cloud Services
Application Infrastructure Services / Global IT Infrastructure / Information 
Technology & Data Analytics
conrad.kimb...@boeing.com
P.O. Box 3707, Mail Code 7M-TE
Seattle, WA  98124-2207
Bellevue 33-11 bldg, office 3A6-3.9
Mobile:  425-591-7802


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators





Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Jonathan Proulx
Hi Conrad,

We boot to ephemeral disk by default, but our ephemeral disk is Ceph
RBD just like our cinder volumes.

Using Ceph for Cinder volume and Glance image storage, it is possible
to very quickly create new persistent volumes from Glance images
because on the backend it's just a CoW snapshot operation (even though
we use separate pools for ephemeral disks, persistent volumes, and
images). This is also what happens for ephemeral booting, which is much
faster than copying the image to local disk on the hypervisor first, so we get
quick starts and relatively easy live migrations (which we use for
maintenance like hypervisor reboots and reinstalls).

I don't know how to make it the "default", but Ceph definitely makes
it faster. Other backends I've used basically mount the raw storage
volume, download the image, then 'dd' it into place, which is
painfully slow.
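
You can see the copy-on-write relationship directly in RBD; a quick check
(pool and volume names are placeholders):

  rbd -p volumes info volume-<uuid> | grep parent
  # a volume cloned from a Glance image lists that image's snapshot as its parent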

As to why ephemeral rather than volume-backed by default: it's much
easier to boot many copies of the same thing and be sure they're the
same using ephemeral storage and images or snapshots.  Volume-backed
instances tend to drift.

That said, working in a research lab, many of my users go for the more
"pet"-like persistent VM workflow.  We just manage it with docs and
education, though there is always someone who misses the red flashing
"ephemeral means it gets deleted when you turn it off" sign and is
sad.

-Jon

On Tue, Aug 01, 2017 at 02:50:45PM +, Kimball, Conrad wrote:
:In our process of standing up an OpenStack internal cloud we are facing the 
question of ephemeral storage vs. Cinder volumes for instance root disks.
:
:As I look at public clouds such as AWS and Azure, the norm is to use 
persistent volumes for the root disk.  AWS started out with images booting onto 
ephemeral disk, but soon after they released Elastic Block Storage and ever 
since the clear trend has been to EBS-backed instances, and now when I look at 
their quick-start list of 33 AMIs, all of them are EBS-backed.  And I'm not 
even sure one can have anything except persistent root disks in Azure VMs.
:
:Based on this and a number of other factors I think we want our user normal / 
default behavior to boot onto Cinder-backed volumes instead of onto ephemeral 
storage.  But then I look at OpenStack and its design point appears to be 
booting images onto ephemeral storage, and while it is possible to boot an 
image onto a new volume this is clumsy (haven't found a way to make this the 
default behavior) and we are experiencing performance problems (that admittedly 
we have not yet run to ground).
:
:So ...
:
:* Are other operators routinely booting onto Cinder volumes instead of 
ephemeral storage?
:
:* What has been your experience with this; any advice?
:
:Conrad Kimball
:Associate Technical Fellow
:Chief Architect, Enterprise Cloud Services
:Application Infrastructure Services / Global IT Infrastructure / Information 
Technology & Data Analytics
:conrad.kimb...@boeing.com
:P.O. Box 3707, Mail Code 7M-TE
:Seattle, WA  98124-2207
:Bellevue 33-11 bldg, office 3A6-3.9
:Mobile:  425-591-7802
:

:___
:OpenStack-operators mailing list
:OpenStack-operators@lists.openstack.org
:http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


-- 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Chris Friesen

On 08/01/2017 08:50 AM, Kimball, Conrad wrote:


·Are other operators routinely booting onto Cinder volumes instead of ephemeral
storage?


It's up to the end-user, but yes.


·What has been your experience with this; any advice?


It works fine.  With Horizon you can do it in one step (select the image but 
tell it to boot from volume) but with the CLI I think you need two steps (make 
the volume from the image, then boot from the volume).  The extra steps are a 
moot point if you are booting programmatically (from a custom script or 
something like heat).
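
The two-step CLI version looks roughly like this (assuming a reasonably recent
python-openstackclient; image, flavor, and network names are placeholders):

  openstack volume create --image centos7 --size 20 rootvol
  openstack server create --flavor m1.small --volume rootvol --network mynet myserver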


I think that generally speaking the default is to use ephemeral storage because 
it's:


a) cheaper
b) "cloudy" in that if anything goes wrong you just spin up another instance

On the other hand, booting from volume does allow for faster migrations since it 
avoids the need to transfer the boot disk contents as part of the migration.


Chris

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Kimball, Conrad
In our process of standing up an OpenStack internal cloud we are facing the 
question of ephemeral storage vs. Cinder volumes for instance root disks.

As I look at public clouds such as AWS and Azure, the norm is to use persistent 
volumes for the root disk.  AWS started out with images booting onto ephemeral 
disk, but soon after they released Elastic Block Storage and ever since the 
clear trend has been to EBS-backed instances, and now when I look at their 
quick-start list of 33 AMIs, all of them are EBS-backed.  And I'm not even sure 
one can have anything except persistent root disks in Azure VMs.

Based on this and a number of other factors I think we want our user normal / 
default behavior to boot onto Cinder-backed volumes instead of onto ephemeral 
storage.  But then I look at OpenStack and its design point appears to be 
booting images onto ephemeral storage, and while it is possible to boot an 
image onto a new volume this is clumsy (haven't found a way to make this the 
default behavior) and we are experiencing performance problems (that admittedly 
we have not yet run to ground).

So ...

* Are other operators routinely booting onto Cinder volumes instead of 
ephemeral storage?

* What has been your experience with this; any advice?

Conrad Kimball
Associate Technical Fellow
Chief Architect, Enterprise Cloud Services
Application Infrastructure Services / Global IT Infrastructure / Information 
Technology & Data Analytics
conrad.kimb...@boeing.com
P.O. Box 3707, Mail Code 7M-TE
Seattle, WA  98124-2207
Bellevue 33-11 bldg, office 3A6-3.9
Mobile:  425-591-7802

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] custom build image is slow

2017-08-01 Thread Paras pradhan
I ran virt-sparsify. So before running virt-sparsify it was 6.0GB and after
it is 1.3GB.
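
For reference, the usual invocation is something like this (file names are
placeholders; --in-place needs a reasonably recent libguestfs):

  virt-sparsify custom.qcow2 custom-sparse.qcow2
  # or, without writing a second file:
  virt-sparsify --in-place custom.qcow2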

Thanks
Paras.

On Tue, Aug 1, 2017 at 2:53 AM, Tomáš Vondra  wrote:

> Hi!
>
> How big are the actual image files? Because qcow2 is a sparse format, it
> does not store zeroes. If the free space in one image is zeroed out, it
> will convert much faster. If that is the problem, use „dd if=/dev/zero
> of=temp;sync;rm temp“ or zerofree.
>
> Tomas
>
>
>
> *From:* Paras pradhan [mailto:pradhanpa...@gmail.com]
> *Sent:* Monday, July 31, 2017 11:54 PM
> *To:* openstack-operators@lists.openstack.org
> *Subject:* [Openstack-operators] custom build image is slow
>
>
>
> Hello
>
>
>
> I have two qcow2 images uploaded to glance. One is CentOS 7 cloud image
> downloaded from centos.org.  The other one is custom built using CentOS
> 7.DVD.  When I create cinder volumes from them, volume creation from the
> custom built image it is very very slow.
>
>
>
>
>
> CenOS qcow2:
>
>
>
> 2017-07-31 21:42:44.287 881609 INFO cinder.image.image_utils
> [req-ea2d7b12-ae9e-45b2-8b4b-ea8465497d5a e090e605170a778610438bfabad7aa
> 7764d0a77ef520ae392e2b59074c9f88cf 490910c1d4e1486d8e3a62d7c0ae698e -
> d67a18e70dd9467db25b74d33feaad6d default] *Converted 8192.00 MB image at
> 253.19 MB/s*
>
>
>
> Custom built qcow2:
>
> INFO cinder.image.image_utils [req-032292d8-1500-474d-95c7-2e8424e2b864
> e090e605170a778610438bfabad7aa7764d0a77ef520ae392e2b59074c9f88cf
> 490910c1d4e1486d8e3a62d7c0ae698e - d67a18e70dd9467db25b74d33feaad6d
> default] *Converted 10240.00 MB image at 32.22 MB/s*
>
>
>
> I used the following command to create the qcow2 file
>
> qemu-img create -f qcow2 custom.qcow2 10G
>
>
>
> What am I missing ?
>
>
>
> Thanks
> Paras.
>
>
>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [FEMDC] IRC meeting Tomorrow 15:00 UTC

2017-08-01 Thread lebre . adrien
Dear all, 

As usual, the agenda is available at: 
https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2017 (line 
991)
Please feel free to add items.

Best,
Ad_rien_
PS: Paul-André will chair the meeting (I'm taking some holidays ;))

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [publiccloud-wg] Reminder meeting PublicCloudWorkingGroup

2017-08-01 Thread Tobias Rydberg

Hi everyone,

Don't forget tomorrow's meeting of the PublicCloudWorkingGroup. A lot of
important stuff to chat about =)

1400 UTC  in IRC channel #openstack-meeting-3

Etherpad: https://etherpad.openstack.org/p/publiccloud-wg

Regards,
Tobias Rydberg


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] custom build image is slow

2017-08-01 Thread Tomáš Vondra
Hi!

How big are the actual image files? Because qcow2 is a sparse format, it does 
not store zeroes. If the free space in one image is zeroed out, it will convert 
much faster. If that is the problem, use „dd if=/dev/zero of=temp;sync;rm temp“ 
or zerofree.
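
As a rough sketch of that workflow (file names are placeholders):

  # inside the guest: fill the free space with zeroes, then remove the filler
  dd if=/dev/zero of=/tmp/zerofill bs=1M; sync; rm -f /tmp/zerofill

  # on the host: check actual vs. virtual size, then rewrite without the zeroed clusters
  qemu-img info custom.qcow2
  qemu-img convert -O qcow2 custom.qcow2 custom-compacted.qcow2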

Tomas

 

From: Paras pradhan [mailto:pradhanpa...@gmail.com] 
Sent: Monday, July 31, 2017 11:54 PM
To: openstack-operators@lists.openstack.org
Subject: [Openstack-operators] custom build image is slow

 

Hello

 

I have two qcow2 images uploaded to glance. One is the CentOS 7 cloud image
downloaded from centos.org.  The other one is custom built using the CentOS 7 DVD.
When I create cinder volumes from them, volume creation from the custom-built
image is very, very slow.

 

 

CentOS qcow2:

 

2017-07-31 21:42:44.287 881609 INFO cinder.image.image_utils 
[req-ea2d7b12-ae9e-45b2-8b4b-ea8465497d5a 
e090e605170a778610438bfabad7aa7764d0a77ef520ae392e2b59074c9f88cf 
490910c1d4e1486d8e3a62d7c0ae698e - d67a18e70dd9467db25b74d33feaad6d default] 
Converted 8192.00 MB image at 253.19 MB/s

 

Custom built qcow2:

INFO cinder.image.image_utils [req-032292d8-1500-474d-95c7-2e8424e2b864 
e090e605170a778610438bfabad7aa7764d0a77ef520ae392e2b59074c9f88cf 
490910c1d4e1486d8e3a62d7c0ae698e - d67a18e70dd9467db25b74d33feaad6d default] 
Converted 10240.00 MB image at 32.22 MB/s

 

I used the following command to create the qcow2 file

qemu-img create -f qcow2 custom.qcow2 10G

 

What am I missing ?

 

Thanks
Paras.

 

 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators