[ceph-users] HDD-only CephFS cluster with EC and without SSD/NVMe

2018-08-22 Thread Kevin Olbrich
Hi!

I am in the process of moving a local ("large", 24x1TB) ZFS RAIDZ2 to
CephFS.
This storage is used for backup images (large sequential reads and writes).

To save space and have a RAIDZ2 (RAID6)-like setup, I am planning the
following profile:

ceph osd erasure-code-profile set myprofile \
    k=3 \
    m=2 \
    ruleset-failure-domain=rack
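(On Luminous or later, where BlueStore is the default, that last option is
spelled crush-failure-domain rather than ruleset-failure-domain.)

For context, a rough sketch of how the resulting pool could be wired into
CephFS; the pool name, filesystem name and PG count below are only
placeholders:

# EC data pool for CephFS, using the profile above (PG count is an example)
ceph osd pool create cephfs_data_ec 128 128 erasure myprofile

# CephFS requires partial overwrites to be enabled on EC pools
ceph osd pool set cephfs_data_ec allow_ec_overwrites true

# attach it as an additional data pool to an existing filesystem
ceph fs add_data_pool cephfs cephfs_data_ec

With failure domain "rack" and k+m=5, CRUSH also needs at least five racks
to place all shards of a PG.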

Performance is not the first priority, which is why I do not plan to
offload the WAL/DB to separate devices (a broken NVMe taking down several
OSDs is more administrative overhead than a single failed OSD).
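(For what it's worth, a single-device BlueStore OSD with WAL and DB
colocated on the data disk is simply what ceph-volume creates when no
separate --block.db or --block.wal device is given; the device path below
is just an example.)

# all-in-one BlueStore OSD, WAL/DB colocated on the data disk
ceph-volume lvm create --bluestore --data /dev/sdX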
Disks are attached via SAS multipath; throughput in general is no problem,
but I have not tested with Ceph yet.

Is anyone using CephFS + BlueStore + EC 3+2 without a separate WAL/DB
device, and is it working well?

Thank you.

Kevin
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] HDD-only CephFS cluster with EC and without SSD/NVMe

2018-08-22 Thread Paul Emmerich
Not 3+2, but we run 4+2, 6+2, 6+3, 5+3, and 8+3 with CephFS in
production. Most of them are on HDDs without separate DB devices.



Paul




-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


Re: [ceph-users] HDD-only CephFS cluster with EC and without SSD/NVMe

2018-08-22 Thread David Turner
I would suggest dedicating some flash media to its own OSDs and putting
the CephFS metadata pool on them.  That was a pretty significant boost for
me when I moved the metadata pool onto flash media.  My home setup is only
3 nodes and runs EC 2+1 on pure HDD OSDs with metadata on SSDs.  It's been
running stable and fine for a couple of years now.  I wouldn't suggest
running EC 2+1 for any data you can't afford to lose, but I can replace
anything in there given some time.
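A rough sketch of one way to do that with device classes on Luminous or
later; the rule and pool names are just examples and your metadata pool
may be named differently:

# replicated CRUSH rule restricted to OSDs with the "ssd" device class
ceph osd crush rule create-replicated metadata-ssd default host ssd

# move the CephFS metadata pool onto that rule
ceph osd pool set cephfs_metadata crush_rule metadata-ssd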



Re: [ceph-users] HDD-only CephFS cluster with EC and without SSD/NVMe

2018-08-22 Thread Marc Roos
 

I also have 2+1 (still only 3 nodes) plus 3x replication, and I also moved 
the metadata pool to SSDs.
What is nice with CephFS is that you can put folders in your filesystem on 
the ec21 pool for less important data, while the rest stays 3x 
replicated. 

I think single-session performance is not going to match the RAID, but 
you can compensate for that by running your backups in parallel.
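Roughly how that per-directory placement looks with file layouts; the 
mount point and pool name are illustrative, the client needs the "p" flag 
in its MDS caps to change layouts, and the layout only applies to files 
created after it is set:

# pin new files under this directory to the EC data pool
setfattr -n ceph.dir.layout.pool -v ec21 /mnt/cephfs/backups

# check the layout; other directories keep the default replicated pool
getfattr -n ceph.dir.layout /mnt/cephfs/backups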







Re: [ceph-users] HDD-only CephFS cluster with EC and without SSD/NVMe

2018-08-22 Thread John Spray
On Wed, Aug 22, 2018 at 1:28 PM Kevin Olbrich  wrote:
>
> Is anyone using CephFS + BlueStore + EC 3+2 without a separate WAL/DB
> device, and is it working well?

I have a very small home cluster that's 6x OSDs over 3 nodes, using EC
on bluestore on spinning disks.  I don't have benchmarks, but it was
usable for a few TB of backups.

John
