>>> Mike Smith
>>On the plus side, Cinder does allow you to do QOS to limit I/O, whereas I do
>>not believe that’s an option with Nova ephemeral.
You can specify the IOPS limits in the flavor.
Drawbacks:
* You might end up with a lot of different flavors because of IOPS requirements
* Modifying
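Setting per-flavor I/O limits might look like the following sketch. The flavor name and limit values are hypothetical; the quota:* properties are the standard Nova/libvirt disk-tuning extra specs:

```shell
# Create a flavor and cap its disk IOPS via libvirt tuning properties.
openstack flavor create --ram 4096 --vcpus 2 --disk 40 m1.limited-io
openstack flavor set m1.limited-io \
  --property quota:disk_read_iops_sec=500 \
  --property quota:disk_write_iops_sec=500
```

Every distinct IOPS tier then needs its own flavor, which is the proliferation drawback mentioned above.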
How do we install the virtio drivers if they're missing? And how do I verify
whether they're present in the CentOS cloud image?
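One way to check, sketched here under the assumption that you can log into a running instance of the image (paths follow the usual CentOS/dracut layout):

```shell
# Check whether virtio modules are present in the initramfs and kernel config.
lsinitrd /boot/initramfs-$(uname -r).img | grep -i virtio
grep -i virtio /boot/config-$(uname -r)
# On an instance already booted with virtio devices, the modules should be loaded:
lsmod | grep virtio
```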
On Tue, Aug 1, 2017 at 12:19 PM, Abel Lopez wrote:
> Your custom image is likely missing the virtIO drivers that the cloud
> image has.
>
> Instead of running through the DVD installer, I'
Thanks for the info. Might have something to do with the Ceph version then.
We're running Hammer, and apparently the du option wasn't added until
Infernalis.
John Petrini
On Tue, Aug 1, 2017 at 4:32 PM, Mike Lowe wrote:
> Two things: first, info does not show how much disk is used; du does.
> S
Dear All,
The ops meetups team today confirmed the NTT proposal to host the first
2018 OpenStack Operators mid-cycle meetup in Tokyo on March 7th and 8th. As a
reminder, that is as proposed here:
https://etherpad.openstack.org/p/ops-meetup-venue-discuss-1st-2018
The vote was unanimous, this proposal l
Two things: first, info does not show how much disk is used; du does. Second,
the semantics count: copy is different from clone and flatten. Clone and
flatten, which is what should happen if you have things working correctly, is much
faster than copy. If you are using copy then you may be limited by the
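The difference between the two commands can be sketched like this (pool and image names are hypothetical; rbd du requires Infernalis or later):

```shell
# 'rbd du' reports provisioned vs. actually used space for an image.
rbd du images/$IMAGE_ID
# 'rbd info' shows the provisioned size and metadata, not actual usage.
rbd info images/$IMAGE_ID
```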
On 8/1/2017 10:47 AM, Sean McGinnis wrote:
Some sort of good news there. Starting with the Pike release, you will now
be able to extend an attached volume. As long as both Cinder and Nova are
at Pike or later, this should now be allowed.
And you're using the libvirt compute driver in Nova, and
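With both services at Pike or later, extending an in-use volume might look like this sketch (volume name and size are hypothetical; older clients expose the same operation as cinder extend):

```shell
# Grow an attached volume to 20 GB; Nova is notified so the guest
# sees the new size without detaching.
openstack volume set --size 20 my-attached-volume
```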
Maybe I'm just not understanding but when I create a nova snapshot the
snapshot happens at RBD in the ephemeral pool and then it's copied to the
images pool. This results in a full sized image rather than a snapshot with
a reference to the parent.
For example below is a snapshot of an ephemeral in
There is no upload if you use Ceph to back your Glance (like you should): the
snapshot is cloned from the ephemeral pool into the images pool, then
flatten is run as a background task. Net result is that creating a 120GB image
vs 8GB is slightly faster on my cloud but not at all what I’d ca
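The clone-and-flatten flow described here can be sketched at the rbd level as follows (pool, disk, and image names are hypothetical placeholders):

```shell
# Snapshot the ephemeral disk, clone it into the images pool,
# then flatten so the clone no longer references its parent.
rbd snap create ephemeral/$DISK@snap
rbd snap protect ephemeral/$DISK@snap
rbd clone ephemeral/$DISK@snap images/$IMAGE_ID
rbd flatten images/$IMAGE_ID
```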
Yes from Mitaka onward the snapshot happens at the RBD level which is fast.
It's the flattening and uploading of the image to glance that's the major
pain point. Still it's worlds better than the qemu snapshots to the local
disk prior to Mitaka.
John Petrini
Platforms Engineer // *CoreDial, L
Strictly speaking I don’t think this is the case anymore for Mitaka or later.
Snapping nova does take more space as the image is flattened, but the dumb
download then upload back into ceph has been cut out. With careful attention
paid to discard/TRIM I believe you can maintain the thin provisi
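The discard/TRIM attention mentioned above might look like this sketch: the image properties below are the usual way to get a disk bus that supports discard under libvirt, and fstrim then releases unused blocks back to Ceph (image and command targets are assumptions, not the poster's exact setup):

```shell
# Attach the disk on a virtio-scsi bus so the guest can issue discards.
openstack image set --property hw_scsi_model=virtio-scsi \
                    --property hw_disk_bus=scsi my-image
# Inside the guest, periodically trim all mounted filesystems.
sudo fstrim -av
```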
It's an unusual request but we need a new PTL for Queens.
Alex Schultz and I have been leading Puppet OpenStack modules for some
time now and it's time to rotate.
We know you're out there consuming (and contributing) to the modules -
if you want this project to survive, it's time to step-up and giv
Your custom image is likely missing the virtIO drivers that the cloud image
has.
Instead of running through the DVD installer, I'd suggest checking out
diskimage-builder to make custom images for use on OpenStack.
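A minimal diskimage-builder invocation might look like this sketch (assumes a pip install and the centos7 element that existed around this time; output name is hypothetical):

```shell
# Build a CentOS 7 cloud image with the virtio-capable kernel and
# cloud-init already in place.
pip install diskimage-builder
disk-image-create -o centos-custom centos7 vm
```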
On Tue, Aug 1, 2017 at 10:16 AM Paras pradhan
wrote:
> Also this is what I've not
Also this is what I've noticed with the CentOS cloud image I downloaded. If
I add a few packages (around a GB), the size goes up to 8GB from 1.3GB.
Running dd if=/dev/zero of=temp;sync;rm temp failed due to the size of the
disk on the cloud images, which is 10G. Zeroing failed with No space left on
>
> > * What has been your experience with this; any advice?
>
> It works fine. With Horizon you can do it in one step (select the image but
> tell it to boot from volume) but with the CLI I think you need two steps
> (make the volume from the image, then boot from the volume). The extra
> steps
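The two-step CLI flow described in the quote above might be sketched like this (image, flavor, network, and volume names are hypothetical):

```shell
# Step 1: create a bootable volume from a Glance image.
openstack volume create --image centos7 --size 20 boot-vol
# Step 2: boot an instance from that volume.
openstack server create --volume boot-vol --flavor m1.small \
  --network private vm1
```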
On 08/01/2017 11:14 AM, John Petrini wrote:
Just my two cents here but we started out using mostly Ephemeral storage
in our builds and looking back I wish we hadn't. Note we're using Ceph
as a backend so my response is tailored towards Ceph's behavior.
The major pain point is snapshots. When y
One other thing to think about - I think at least starting with the Mitaka
release, we added a feature called image volume cache. So if you create a
boot volume, the first time you do so it takes some time as the image is
pulled down and written to the backend volume.
With image volume cache enabl
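Enabling the image volume cache is a cinder.conf change along these lines (a sketch: the backend section name, tenant IDs, and size limits are deployment-specific placeholders):

```ini
[DEFAULT]
cinder_internal_tenant_project_id = <project-uuid>
cinder_internal_tenant_user_id = <user-uuid>

[ceph]
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 200
image_volume_cache_max_count = 50
```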
On Tue, Aug 01, 2017 at 11:14:03AM -0400, John Petrini wrote:
>
> On the plus side for ephemeral storage, resizing the root disk of images
> works better. As long as your image is configured properly it's just a
> matter of initiating a resize and letting the instance reboot to grow the
> root dis
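The resize flow in the quote above might be sketched as follows (flavor and server names are hypothetical; newer clients spell the confirmation as `openstack server resize confirm`):

```shell
# Move the instance to a flavor with a larger root disk.
openstack server resize --flavor m1.large vm1
# Once the server reaches VERIFY_RESIZE, confirm the change.
openstack server resize --confirm vm1
```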
At Overstock we do both, in different clouds. Our preferred option is a Ceph
backend for Nova ephemeral storage. We like it because it is fast to boot and
makes resize easy. Our use case doesn’t require snapshots nor do we have a
need for keeping the data around if a server needs to be rebuil
Hi Conrad,
We boot to ephemeral disk by default but our ephemeral disk is Ceph
RBD, just like our Cinder volumes.
Using Ceph for Cinder volume and Glance image storage, it is possible
to very quickly create new persistent volumes from Glance images
because on the backend it's just a CoW snapshot
Just my two cents here but we started out using mostly Ephemeral storage in
our builds and looking back I wish we hadn't. Note we're using Ceph as a
backend so my response is tailored towards Ceph's behavior.
The major pain point is snapshots. When you snapshot a nova volume an RBD
snapshot occur
On 08/01/2017 08:50 AM, Kimball, Conrad wrote:
* Are other operators routinely booting onto Cinder volumes instead of ephemeral
storage?
It's up to the end-user, but yes.
* What has been your experience with this; any advice?
It works fine. With Horizon you can do it in one step (select the
In our process of standing up an OpenStack internal cloud we are facing the
question of ephemeral storage vs. Cinder volumes for instance root disks.
As I look at public clouds such as AWS and Azure, the norm is to use persistent
volumes for the root disk. AWS started out with images booting on
I ran virt-sparsify. So before running virt-sparsify it was 6.0GB and after
it is 1.3GB.
Thanks
Paras.
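For reference, the virt-sparsify run described above might look like this sketch (filenames are hypothetical):

```shell
# Discard unused blocks and recompress the qcow2 image.
virt-sparsify --compress centos-custom.qcow2 centos-sparse.qcow2
# Compare provisioned vs. actual size before and after.
qemu-img info centos-sparse.qcow2
```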
On Tue, Aug 1, 2017 at 2:53 AM, Tomáš Vondra wrote:
> Hi!
>
> How big are the actual image files? Because qcow2 is a sparse format, it
> does not store zeroes. If the free space in one image i
Dear all,
As usual, the agenda is available at:
https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2017 (line
991)
Please feel free to add items.
Best,
Ad_rien_
PS: Paul-André will chair the meeting (I'm taking some holidays ;))
Hi everyone,
Don't forget tomorrow's meeting of the PublicCloudWorkingGroup. A lot of
important stuff to chat about =)
1400 UTC in IRC channel #openstack-meeting-3
Etherpad: https://etherpad.openstack.org/p/publiccloud-wg
Regards,
Tobias Rydberg
Hi!
How big are the actual image files? Because qcow2 is a sparse format, it does
not store zeroes. If the free space in one image is zeroed out, it will convert
much faster. If that is the problem, use "dd if=/dev/zero of=temp; sync; rm temp"
or zerofree.
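Expanded slightly, the zero-fill trick above might look like this sketch (run inside the guest; paths and image names are hypothetical, and the dd is expected to fail with "No space left on device" once the disk is full):

```shell
# Fill free space with zeroes so a qcow2 conversion can drop it.
dd if=/dev/zero of=/tmp/zerofill bs=1M || true
sync
rm -f /tmp/zerofill
# Re-convert; qcow2 does not store zeroed clusters.
qemu-img convert -O qcow2 original.qcow2 sparse.qcow2
```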
Tomas
From: Paras pradhan [mailto:p