Cristian and everyone else have expertly covered the SSD capabilities, pros,
and cons, so I'll skip that. I believe you were saying that it is risky to
swap your existing journals out to a new journal device. That is actually a
very simple operation that can be scripted to take only minutes per node,
with no risk to data.

You just stop the OSD, flush the journal, delete the old journal partition,
create the new partition with the same GUID, initialize the new journal, and
start the OSD.
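
For reference, here's roughly what that looks like for one OSD. This is a
minimal sketch, assuming FileStore OSDs managed by systemd whose journal
symlink points at /dev/disk/by-partuuid/<guid>; OSD_ID, the device paths,
partition numbers, and the 10G size are placeholders you'd adapt to your own
layout:

#!/bin/bash
# Swap the journal of one FileStore OSD onto a new device. Reusing the
# old partition GUID on the new device keeps the OSD's journal symlink
# (/var/lib/ceph/osd/ceph-$OSD_ID/journal -> /dev/disk/by-partuuid/...)
# valid, so nothing in the OSD itself has to change.
OSD_ID=12                 # placeholder OSD number
OLD_DEV=/dev/nvme0n1      # placeholder: device holding the old journal
OLD_PART=3                # placeholder: old journal partition number
NEW_DEV=/dev/nvme1n1      # placeholder: new journal device
NEW_PART=3                # placeholder: new journal partition number

# Record the existing journal partition GUID before touching anything.
GUID=$(basename "$(readlink /var/lib/ceph/osd/ceph-$OSD_ID/journal)")

ceph osd set noout                    # don't rebalance while the OSD is down
systemctl stop ceph-osd@$OSD_ID
ceph-osd -i $OSD_ID --flush-journal   # drain pending writes to the data disk

sgdisk --delete=$OLD_PART $OLD_DEV    # remove the old journal partition

# Recreate a 10G journal on the new device with the SAME partition GUID;
# 45b0969e-... is the standard ceph journal partition type code.
sgdisk --new=$NEW_PART:0:+10G \
       --partition-guid=$NEW_PART:$GUID \
       --typecode=$NEW_PART:45b0969e-9b03-4f30-b4c6-b4b80ceff106 \
       $NEW_DEV
partprobe $NEW_DEV                    # re-read the partition table

ceph-osd -i $OSD_ID --mkjournal       # initialize the new journal
systemctl start ceph-osd@$OSD_ID
ceph osd unset noout

Loop that over the 12 OSDs behind each NVMe card and you're done in minutes
per node; only one journal is ever offline at a time, and the flush
guarantees nothing in flight is lost.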

On Wed, Jun 21, 2017, 8:44 PM Brady Deetz <bde...@gmail.com> wrote:

> Hello,
> I'm expanding my 288-OSD, primarily CephFS, cluster by about 16%. I have
> 12 OSD nodes with 24 OSDs each. Each OSD node has 2 P3700 400GB NVMe PCIe
> drives providing 10GB journals for groups of 12 6TB spinning-rust drives,
> plus 2x LACP-bonded 40Gbps Ethernet.
>
> Our hardware provider is recommending that we start deploying P4600 drives
> in place of our P3700s due to availability.
>
> I've seen some talk on here regarding this, but wanted to throw an idea
> around. I was okay throwing away 280GB of fast capacity for the purpose of
> providing reliable journals. But with as much free capacity as we'd have
> with a 4600, maybe I could use that extra capacity as a cache tier for
> writes on an RBD EC pool. If I wanted to go that route, I'd probably
> replace several existing P3700s with P4600s to get additional cache capacity.
> But that sounds risky...
>
> What do you guys think?
>
>
>
