On Mon, Mar 7, 2016 at 12:33 AM, Tim Bell wrote:
> From: Joe <j...@topjian.net>
> Date: Monday 7 March 2016 at 07:53
> To: openstack-operators <openstack-operators@lists.openstack.org>
> Subject: Re: [Openstack-operators] RAID / stripe block storage volumes
>
> We ($work) have been researching this topic for the past few weeks and I
> [...] break the fourth wall on users knowing what hypervisor is hosting
> their instance.
From: ziopr...@gmail.com
Subject: Re: [Openstack-operators] RAID / stripe block storage volumes

> In our environments, we offer two types of storage. Tenants can either use
> Ceph/RBD and trade speed/latency for [...]
R

On Mon, Feb 8, 2016 at 3:51 PM, Fox, Kevin M wrote:

> We've used Ceph to address the storage requirement in small clouds
> pretty well.
> In our environments, we offer two types of storage. Tenants can either use
> Ceph/RBD and trade speed/latency for reliability and protection against
> physical disk failures, or they can launch instances that are realized as
> LVs on an LVM VG that we create on top of a RAID 0 spanning all but the [...]
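A setup like the one described above can be sketched roughly as follows. This is illustrative only: the device names, array layout, and VG/LV names are hypothetical, not taken from the poster's environment.

```shell
# Sketch only: assumes four hypothetical data disks /dev/sdb../dev/sde,
# with the OS disk left out of the array. Run as root on the host.

# Build a RAID 0 (stripe) across the data disks.
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put an LVM volume group on top of the array; instance disks are then
# carved out of it as logical volumes.
pvcreate /dev/md0
vgcreate instance-vg /dev/md0

# Example: a 50 GB LV handed to a single instance as its disk.
lvcreate -L 50G -n instance-0001-disk instance-vg
```

The trade-off this thread keeps returning to is visible here: RAID 0 gives speed, but one failed disk takes out every LV in the group.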
> [...] nodes with replication set to 2, and because of the radosgw, you can
> share your small amount of storage between the object store and the block
> store, avoiding the need to overprovision swift-only or cinder-only to
> manage unknowns. It's just one pool of storage.
>
> You're right: using LVM is like telling your users "don't do pets," but
> then having pets at the heart of your system. When you lose one, you lose
> a lot. With a small Ceph cluster, you can take out one of the nodes, burn
> it to the ground and put it back, and it just works. No pets.
>
> Do consider Ceph for the small use case.
>
> Thanks,
> Kevin
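On the Ceph side, the replication-of-2 setup Kevin describes looks roughly like the sketch below. The pool name and placement-group count are illustrative, not from his deployment.

```shell
# Sketch only: pool name and PG count are hypothetical; run against a
# working Ceph cluster with admin credentials.

# Create a pool for cinder/RBD block storage.
ceph osd pool create volumes 128

# Replicate objects twice instead of the default three, stretching a
# small cluster's raw capacity at the cost of less failure tolerance.
ceph osd pool set volumes size 2
```

Because radosgw pools live on the same OSDs, object and block workloads draw from the same raw capacity, which is the "one pool of storage" point above.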
> --
> From: Robert Starmer [rob...@kumul.us]
> Sent: Monday, February 08, 2016 1:30 PM
> To: Ned Rhudy
> Cc: OpenStack Operators
> Subject: Re: [Openstack-operators] RAID / stripe block storage volumes
>
> Ned's model is the model I meant by "multiple underlying storage
> services". Most of the systems I've built are LV/LVM only, a few added
> Ceph as an alternative/live-migration option, and one where we used
> Gluster [...]
[...] further changes to our approach that we would like to make down the
road, but in general our users seem to like the current system and being
able to forgo reliability or speed as their circumstances demand.
From: j...@topjian.net
Subject: Re: [Openstack-operators] RAID / stripe block storage volumes
Hi Robert,
Can you elaborate on "multiple underlying storage services"?
The reason I asked the initial question is because historically we've made
our block storage service resilient to failure. We made our compute
environment resilient to failure, too, but over time we've seen [...]
I've always recommended providing multiple underlying storage services to
provide this rather than adding the overhead to the VM. So, not in any of
my systems or any I've worked with.
R
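One common way the "multiple underlying storage services" idea is surfaced to tenants is through cinder volume types mapped to distinct backends. The sketch below uses hypothetical backend names; they would have to match `volume_backend_name` values configured in cinder.conf.

```shell
# Sketch only: backend names (LVM_RAID0, CEPH) are hypothetical.

# A fast-but-unprotected type backed by local LVM on striped disks.
openstack volume type create fast-local
openstack volume type set --property volume_backend_name=LVM_RAID0 fast-local

# A reliable type backed by replicated Ceph/RBD.
openstack volume type create reliable
openstack volume type set --property volume_backend_name=CEPH reliable

# Tenants then pick the trade-off at volume-creation time:
openstack volume create --type fast-local --size 50 scratch-vol
```

This keeps the speed-versus-reliability choice in the tenant's hands without any RAID machinery inside the guest.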
On Fri, Feb 5, 2016 at 5:56 PM, Joe Topjian wrote:

> Hello,
>
> Does anyone have users RAID'ing or striping multiple block storage
> volumes from within an instance?
>
> If so, what was the experience? Good, bad, possible but with caveats?
>
> Thanks,
> Joe
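Mechanically, striping block storage volumes from within an instance looks roughly like this. Device paths are hypothetical (virtio volumes typically appear as /dev/vdb, /dev/vdc), and the usual caveat applies: if both volumes land on the same backend disks, striping may gain little.

```shell
# Sketch only: run inside the guest after attaching two cinder volumes.

# Stripe the two attached volumes into one RAID 0 block device.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/vdb /dev/vdc

# Filesystem and mount as usual.
mkfs.ext4 /dev/md0
mkdir -p /mnt/data
mount /dev/md0 /mnt/data
```

Losing either volume loses the whole array, so this only makes sense when the data is disposable or protected elsewhere.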
_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org