Hello Conrad,
I'm jumping into this conversation late because I was away from the
mailing lists last week.
We run OpenStack with both Nova ephemeral root disks and Cinder volume
boot disks, both backed by Ceph RBD. It is the user who flags
"boot from volume" in Horizon when starting an instance.
I totally agree with Jay; this is the best, cheapest, and most scalable way to
build a cloud environment with OpenStack.
We use local storage as the primary root disk source, which lets us make good
use of the slots available in each compute node (6), and coupled with the
RAID 10 gives good I/O performance.
>>> Mike Smith
>>On the plus side, Cinder does allow you to do QoS to limit I/O, whereas I do
>>not believe that's an option with Nova ephemeral.
You can specify the IOPS limits in the flavor.
Drawbacks:
* You might end up with a lot of different flavors because of IOPS requirements
* Modifying the limits later means resizing instances to a new flavor
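To make the flavor approach concrete, here is a minimal sketch of setting front-end I/O limits via flavor extra specs (the flavor name and limit values are illustrative; the quota:* properties are enforced by the libvirt driver):

```shell
# Create a flavor and attach I/O throttles via extra specs.
# Name and limit values are illustrative, not recommendations.
openstack flavor create m1.medium.500iops --vcpus 2 --ram 4096 --disk 40
openstack flavor set m1.medium.500iops \
  --property quota:disk_total_iops_sec=500 \
  --property quota:disk_total_bytes_sec=104857600   # ~100 MB/s
```

Because the limits live on the flavor, changing them for a running instance means resizing to a different flavor.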
Thanks for the info. Might have something to do with the Ceph version then.
We're running Hammer and apparently the du option wasn't added until
Infernalis.
John Petrini
On Tue, Aug 1, 2017 at 4:32 PM, Mike Lowe wrote:
> Two things: first, info does not show how much disk is used; du does.
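For reference, the difference on the Ceph side looks roughly like this (the image name is hypothetical; rbd du requires Infernalis or later):

```shell
# "rbd info" reports the provisioned size and metadata only:
rbd info images/1f3a-example-image
# "rbd du" reports provisioned vs. actually used space per image/snapshot:
rbd du images/1f3a-example-image
```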
On 08/01/2017 02:32 PM, Mike Lowe wrote:
Two things: first, info does not show how much disk is used; du does. Second,
the semantics count: copy is different than clone and flatten. Clone and
flatten, which is what should happen if you have things working correctly, is
much faster than copy. If you are using copy then you may be limited by the
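A sketch of the two code paths at the RBD level (pool and image names are hypothetical):

```shell
# Clone-and-flatten path: the snapshot and clone are instant CoW
# operations, and flatten copies only allocated extents in the background.
rbd snap create ephemeral/disk-1234@snap
rbd snap protect ephemeral/disk-1234@snap
rbd clone ephemeral/disk-1234@snap images/image-5678
rbd flatten images/image-5678

# Full-copy path: reads and writes the entire provisioned image up front.
rbd cp ephemeral/disk-1234 images/image-5678
```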
On 8/1/2017 10:47 AM, Sean McGinnis wrote:
Some sort of good news there. Starting with the Pike release, you will now
be able to extend an attached volume. As long as both Cinder and Nova are
at Pike or later, and you're using the libvirt compute driver in Nova, this
should now be allowed.
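As a sketch, extending an in-use volume requires volume API microversion 3.42 (the volume name and size are illustrative):

```shell
# Grow an attached volume to 80GB (Cinder and Nova at Pike or later)
cinder --os-volume-api-version 3.42 extend my-boot-volume 80
# The guest still has to grow the partition/filesystem afterwards, e.g.:
# growpart /dev/vda 1 && resize2fs /dev/vda1
```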
Maybe I'm just not understanding, but when I create a Nova snapshot the
snapshot happens at the RBD level in the ephemeral pool and then it's copied
to the images pool. This results in a full-sized image rather than a snapshot
with a reference to the parent.
For example, below is a snapshot of an ephemeral in
There is no upload if you use Ceph to back your Glance (like you should); the
snapshot is cloned from the ephemeral pool into the images pool, then
flatten is run as a background task. Net result is that creating a 120GB image
vs 8GB is slightly faster on my cloud but not at all what I'd ca
Yes from Mitaka onward the snapshot happens at the RBD level which is fast.
It's the flattening and uploading of the image to glance that's the major
pain point. Still it's worlds better than the qemu snapshots to the local
disk prior to Mitaka.
John Petrini
Platforms Engineer // CoreDial
Strictly speaking I don't think this is the case anymore for Mitaka or later.
Snapshotting in Nova does take more space as the image is flattened, but the
dumb download then upload back into Ceph has been cut out. With careful
attention paid to discard/TRIM I believe you can maintain the thin provisioning.
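One way to wire that up, as a sketch (the image name is hypothetical; the guest must also issue trims, e.g. via fstrim):

```shell
# nova.conf on the compute nodes, [libvirt] section:
#   hw_disk_discard = unmap
# The disk bus must pass discard through, e.g. virtio-scsi:
openstack image set my-image \
  --property hw_scsi_model=virtio-scsi \
  --property hw_disk_bus=scsi
# Inside the guest, "fstrim -a" (or a periodic fstrim timer) releases
# deleted blocks back to the RBD pool.
```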
>
> > What has been your experience with this; any advice?
>
> It works fine. With Horizon you can do it in one step (select the image but
> tell it to boot from volume) but with the CLI I think you need two steps
> (make the volume from the image, then boot from the volume). The extra
> steps
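The two CLI steps look roughly like this (names and sizes are illustrative):

```shell
# Step 1: create a bootable volume from a Glance image
openstack volume create --image ubuntu-16.04 --size 40 --bootable boot-vol-01
# Step 2: boot the instance from that volume
openstack server create --flavor m1.medium --volume boot-vol-01 \
  --network private my-instance
```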
On 08/01/2017 11:14 AM, John Petrini wrote:
Just my two cents here, but we started out using mostly ephemeral storage
in our builds and looking back I wish we hadn't. Note we're using Ceph
as a backend, so my response is tailored towards Ceph's behavior.
The major pain point is snapshots. When you snapshot a Nova volume an RBD
snapshot occurs
One other thing to think about: I think at least starting with the Mitaka
release, we added a feature called image volume cache. So if you create a
boot volume, the first time you do so it takes some time as the image is
pulled down and written to the backend volume.
With image volume cache enabled, subsequent volumes created from the same
image are much faster.
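Enabling it is a cinder.conf setting on the backend section; a sketch (the section name and size limits are illustrative):

```ini
[ceph-rbd]
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 200
image_volume_cache_max_count = 50
```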
On Tue, Aug 01, 2017 at 11:14:03AM -0400, John Petrini wrote:
>
> On the plus side for ephemeral storage, resizing the root disk of images
> works better. As long as your image is configured properly it's just a
> matter of initiating a resize and letting the instance reboot to grow the
> root disk.
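The "configured properly" part usually means cloud-init's growpart/resize modules are enabled in the image; a sketch of the relevant config (file path is illustrative):

```yaml
# /etc/cloud/cloud.cfg.d/99-growpart.cfg
growpart:
  mode: auto
  devices: ['/']
resize_rootfs: true
```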
At Overstock we do both, in different clouds. Our preferred option is a Ceph
backend for Nova ephemeral storage. We like it because it is fast to boot and
makes resize easy. Our use case doesn't require snapshots, nor do we have a
need for keeping the data around if a server needs to be rebuilt.
Hi Conrad,
We boot to ephemeral disk by default, but our ephemeral disk is Ceph
RBD just like our Cinder volumes.
Using Ceph for Cinder volume and Glance image storage, it is possible
to very quickly create new persistent volumes from Glance images
because on the backend it's just a CoW snapshot.
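You can see the CoW relationship from the Ceph side; the volume's parent points back at the Glance image's snapshot (pool and image names are hypothetical):

```shell
rbd info volumes/volume-9abc | grep parent
# e.g.  parent: images/image-5678@snap
```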
On 08/01/2017 08:50 AM, Kimball, Conrad wrote:
Are other operators routinely booting onto Cinder volumes instead of ephemeral
storage?
It's up to the end-user, but yes.
What has been your experience with this; any advice?
It works fine. With Horizon you can do it in one step (select the image but
tell it to boot from volume).
In our process of standing up an OpenStack internal cloud we are facing the
question of ephemeral storage vs. Cinder volumes for instance root disks.
As I look at public clouds such as AWS and Azure, the norm is to use persistent
volumes for the root disk. AWS started out with images booting on