Here is the process I went through:
1) I created an EC pool, which created ruleset 1.
2) I edited the crushmap into approximately its current form.
3) I discovered my previous EC pool wasn't doing what I intended it to do,
so I deleted it.
4) I created a new EC pool with the parameters I wanted and told it to use
ruleset 3.
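
For reference, a rough sketch of those steps as CLI commands (pool names, the
EC profile name, PG counts, and k/m values below are illustrative placeholders,
not taken from my actual cluster):

```shell
# Sketch only -- names, PG counts, and profile parameters are placeholders.

# 1) Create an EC pool (this auto-created a crush ruleset, ruleset 1 in my case)
ceph osd erasure-code-profile set myprofile k=2 m=1
ceph osd pool create ecpool 128 128 erasure myprofile

# 2) Edit the crushmap by hand: export, decompile, edit, recompile, inject
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# ... edit crushmap.txt in a text editor, then:
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new

# 3) Delete the pool that wasn't doing what I intended
ceph osd pool delete ecpool ecpool --yes-i-really-really-mean-it

# 4) Re-create the pool and point it at ruleset 3
ceph osd pool create ecpool 128 128 erasure myprofile
ceph osd pool set ecpool crush_ruleset 3
```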

On Fri, Feb 20, 2015 at 10:55 AM, Luis Periquito <periqu...@gmail.com>
wrote:

> The process of creating an erasure-coded pool and a replicated one is
> slightly different. You can use Sebastien's guide to create/manage the OSD
> tree, but you should follow this guide
> http://ceph.com/docs/giant/dev/erasure-coded-pool/ to create the EC pool.
>
> I'm not sure about (i.e. I never tried) creating an EC pool the way you
> did. The normal replicated pools do work like this.
>
> On Fri, Feb 20, 2015 at 4:49 PM, Kyle Hutson <kylehut...@ksu.edu> wrote:
>
>> I manually edited my crushmap, basing my changes on
>> http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
>> I have SSDs and HDDs in the same box and wanted to separate them by
>> ruleset. My current crushmap can be seen at http://pastie.org/9966238
>>
>> I had it installed and everything looked good... until I created a new
>> pool. All of the new pgs are stuck in "creating". I first tried creating an
>> erasure-coded pool using ruleset 3, then created another pool using ruleset
>> 0. Same result.
>>
>> I'm not opposed to an 'RTFM' answer, so long as you can point me to the
>> right one. I've seen very little documentation on crushmap rules, in
>> particular.
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>