On Wed, Aug 22, 2018 at 1:28 PM Kevin Olbrich wrote:
>
> Hi!
>
> I am in the process of moving a local ("large", 24x1TB) ZFS RAIDZ2 to CephFS.
> This storage is used for backup images (large sequential reads and writes).
>
> To save space and have a RAIDZ2 (RAID6) like setup, I am planning the
> following profile: [...]

An EC pool on HDDs is not going to give you the same performance as the
RAID, but you can compensate for that by running your backups in parallel.
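For example (a hypothetical sketch; the paths and job count are made up),
several images can be copied at once so the aggregate throughput of the
pool is used rather than a single sequential stream:

# copy up to 8 backup images concurrently
find /srv/backups -name '*.img' -print0 | xargs -0 -P 8 -I{} cp {} /mnt/cephfs/backups/
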
-----Original Message-----
From: Kevin Olbrich [mailto:k...@sv01.de]
Sent: Wednesday, 22 August 2018 14:28
To: ceph-users
Subject: [ceph-users] HDD-only CephFS cluster with EC
I would suggest putting some flash media in as their own OSDs and placing
the cephfs metadata pool on them. That was a pretty significant boost for
me when I moved the metadata pool onto flash media. My home setup is only
3 nodes and is running EC 2+1 on pure HDD OSDs with metadata on SSDs. It's
been working well.
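A minimal sketch of that kind of layout, assuming a Luminous-or-later
cluster where the SSDs carry the device class "ssd"; the rule and pool
names are placeholders of my own:

# replicated CRUSH rule that only selects OSDs with device class "ssd"
ceph osd crush rule create-replicated ssd-only default host ssd
# pin the CephFS metadata pool to that rule; data stays on the HDDs
ceph osd pool set cephfs_metadata crush_rule ssd-only
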
Not 3+2, but we run 4+2, 6+2, 6+3, 5+3, and 8+3 with cephfs in
production. Most of them are HDDs without separate DB devices.
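As a rough sketch, one of those setups could be created like this (the
profile, pool, and filesystem names are placeholders of my choosing):

ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
ceph osd pool create cephfs_data_ec 128 128 erasure ec-4-2
# CephFS needs partial overwrites enabled on an EC data pool
ceph osd pool set cephfs_data_ec allow_ec_overwrites true
ceph fs add_data_pool cephfs cephfs_data_ec
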
Paul
2018-08-22 14:27 GMT+02:00 Kevin Olbrich:
> Hi!
>
> I am in the process of moving a local ("large", 24x1TB) ZFS RAIDZ2 to
> CephFS.
> This storage is used for backup images (large sequential reads and
> writes). [...]
Hi!
I am in the process of moving a local ("large", 24x1TB) ZFS RAIDZ2 to
CephFS.
This storage is used for backup images (large sequential reads and writes).
To save space and have a RAIDZ2 (RAID6) like setup, I am planning the
following profile:
ceph osd erasure-code-profile set myprofile \
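The command is cut off here; going by the "3+2" Paul refers to above, the
profile being described is presumably something like the following (the
failure domain is my assumption):

ceph osd erasure-code-profile set myprofile \
    k=3 \
    m=2 \
    crush-failure-domain=host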