Re: [ceph-users] Fixing a crushmap
Here was the process I went through:

1) I created an EC pool, which created ruleset 1.
2) I edited the crushmap to approximately its current form.
3) I discovered my previous EC pool wasn't doing what I meant for it to do, so I deleted it.
4) I created a new EC pool with the parameters I wanted and told it to use ruleset 3.

On Fri, Feb 20, 2015 at 10:55 AM, Luis Periquito periqu...@gmail.com wrote:

The process of creating an erasure-coded pool and a replicated one is slightly different. You can use Sébastien's guide to create/manage the osd tree, but you should follow this guide to create the EC pool: http://ceph.com/docs/giant/dev/erasure-coded-pool/ I'm not sure (i.e., I never tried) about creating an EC pool the way you did. The normal replicated ones do work like this.

On Fri, Feb 20, 2015 at 4:49 PM, Kyle Hutson kylehut...@ksu.edu wrote:

I manually edited my crushmap, basing my changes on http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/ I have SSDs and HDDs in the same box and wanted to separate them by ruleset. My current crushmap can be seen at http://pastie.org/9966238

I had it installed and everything looked good until I created a new pool. All of the new pgs are stuck in "creating". I first tried creating an erasure-coded pool using ruleset 3, then created another pool using ruleset 0. Same result.

I'm not opposed to an 'RTFM' answer, so long as you can point me to the right one. I've seen very little documentation on crushmap rules in particular.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
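For reference, steps 3 and 4 above roughly correspond to the following command sequence on a giant-era cluster. This is a sketch, not the poster's actual commands: the pool name "ecpool", the profile name "myprofile", and the k/m values are assumptions; only ruleset 3 comes from the thread.

```shell
# Step 3: delete the EC pool that wasn't doing what was intended.
# The pool name must be given twice plus the safety flag.
ceph osd pool delete ecpool ecpool --yes-i-really-really-mean-it

# Step 4: recreate it with the desired parameters. In giant, EC
# parameters live in an erasure-code profile; creating a pool from a
# profile normally generates (or reuses) a matching CRUSH ruleset.
ceph osd erasure-code-profile set myprofile k=2 m=1 ruleset-failure-domain=host
ceph osd pool create ecpool 128 128 erasure myprofile

# Pointing a pool at a hand-edited ruleset afterwards:
ceph osd pool set ecpool crush_ruleset 3
```

One caveat: a ruleset assigned to an EC pool must itself be an erasure rule (`type erasure`, `step ... indep ...` in the decompiled map); pointing an EC pool at a replicated rule, or vice versa, can leave PGs unable to map, which matches the "stuck in creating" symptom.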
Re: [ceph-users] Fixing a crushmap
Oh, and I don't yet have any important data here, so I'm not worried about losing anything at this point. I just need to get my cluster happy again so I can play with it some more.

On Fri, Feb 20, 2015 at 11:00 AM, Kyle Hutson kylehut...@ksu.edu wrote:

Here was the process I went through:

1) I created an EC pool, which created ruleset 1.
2) I edited the crushmap to approximately its current form.
3) I discovered my previous EC pool wasn't doing what I meant for it to do, so I deleted it.
4) I created a new EC pool with the parameters I wanted and told it to use ruleset 3.

On Fri, Feb 20, 2015 at 10:55 AM, Luis Periquito periqu...@gmail.com wrote:

The process of creating an erasure-coded pool and a replicated one is slightly different. You can use Sébastien's guide to create/manage the osd tree, but you should follow this guide to create the EC pool: http://ceph.com/docs/giant/dev/erasure-coded-pool/ I'm not sure (i.e., I never tried) about creating an EC pool the way you did. The normal replicated ones do work like this.

On Fri, Feb 20, 2015 at 4:49 PM, Kyle Hutson kylehut...@ksu.edu wrote:

I manually edited my crushmap, basing my changes on http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/ I have SSDs and HDDs in the same box and wanted to separate them by ruleset. My current crushmap can be seen at http://pastie.org/9966238

I had it installed and everything looked good until I created a new pool. All of the new pgs are stuck in "creating". I first tried creating an erasure-coded pool using ruleset 3, then created another pool using ruleset 0. Same result.

I'm not opposed to an 'RTFM' answer, so long as you can point me to the right one. I've seen very little documentation on crushmap rules in particular.
[ceph-users] Fixing a crushmap
I manually edited my crushmap, basing my changes on http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/ I have SSDs and HDDs in the same box and wanted to separate them by ruleset. My current crushmap can be seen at http://pastie.org/9966238

I had it installed and everything looked good until I created a new pool. All of the new pgs are stuck in "creating". I first tried creating an erasure-coded pool using ruleset 3, then created another pool using ruleset 0. Same result.

I'm not opposed to an 'RTFM' answer, so long as you can point me to the right one. I've seen very little documentation on crushmap rules in particular.
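The manual edit cycle described here is the standard getcrushmap/crushtool round trip. The commands below are a sketch of that workflow (filenames are arbitrary); the `crushtool --test` step is worth adding because it checks whether a rule can actually produce placements, which is a common cause of PGs stuck in "creating":

```shell
# Export and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# ... edit crushmap.txt by hand ...

# Recompile and inject the edited map
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin

# Before injecting, dry-run a rule offline. For the EC pool on
# ruleset 3, --num-rep should be k+m for that pool's profile
# (5 here is an assumed example value):
crushtool --test -i crushmap-new.bin --rule 3 --num-rep 5 --show-bad-mappings
```

If `--show-bad-mappings` reports lines, the rule cannot map enough OSDs (wrong bucket, wrong failure domain, or too few hosts), and any pool using it will sit in "creating".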
Re: [ceph-users] Fixing a crushmap
The process of creating an erasure-coded pool and a replicated one is slightly different. You can use Sébastien's guide to create/manage the osd tree, but you should follow this guide to create the EC pool: http://ceph.com/docs/giant/dev/erasure-coded-pool/ I'm not sure (i.e., I never tried) about creating an EC pool the way you did. The normal replicated ones do work like this.

On Fri, Feb 20, 2015 at 4:49 PM, Kyle Hutson kylehut...@ksu.edu wrote:

I manually edited my crushmap, basing my changes on http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/ I have SSDs and HDDs in the same box and wanted to separate them by ruleset. My current crushmap can be seen at http://pastie.org/9966238

I had it installed and everything looked good until I created a new pool. All of the new pgs are stuck in "creating". I first tried creating an erasure-coded pool using ruleset 3, then created another pool using ruleset 0. Same result.

I'm not opposed to an 'RTFM' answer, so long as you can point me to the right one. I've seen very little documentation on crushmap rules in particular.
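The linked giant documentation drives EC pool creation through an erasure-code profile rather than a hand-written rule. A minimal sketch of that flow (profile name, pool name, PG count, and failure domain are illustrative, not from the thread):

```shell
# Define a profile; Ceph derives the CRUSH rule for the pool from it.
# ruleset-failure-domain is the giant-era parameter name
# (renamed crush-failure-domain in later releases).
ceph osd erasure-code-profile set myprofile \
    k=2 m=1 ruleset-failure-domain=host

# Creating the pool from the profile also creates its erasure ruleset.
ceph osd pool create ecpool 128 128 erasure myprofile

# Inspect what was generated:
ceph osd erasure-code-profile get myprofile
ceph osd crush rule dump
```

This is why Luis distinguishes the two cases: for replicated pools a hand-edited ruleset works as in Sébastien's guide, while for EC pools the supported path is to let the profile generate the erasure rule and then adjust that.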