First problem here is that you are using crush-failure-domain=osd when you
should use crush-failure-domain=host. With three hosts, you would be limited
to k=2, m=1; this is not recommended in a production environment.
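As a rough sketch (the profile and pool names below are just placeholders),
creating a profile with the host failure domain and a pool that uses it
would look something like this:

    # EC profile that places each chunk on a different host
    ceph osd erasure-code-profile set ec-k2m1 k=2 m=1 crush-failure-domain=host
    # pool using that profile (PG count is only an example)
    ceph osd pool create ecpool 32 32 erasure ec-k2m1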
On Mon, Dec 4, 2023, 23:26 duluxoz wrote:
> Hi All,
>
> Looking for some help/explanation aro
Thanks David, I knew I had something wrong :-)
Just for my own edification: Why is k=2, m=1 not recommended for
production? Considered too "fragile", or something else?
Cheers
Dulux-Oz
On 05/12/2023 19:53, David Rivera wrote:
First problem here is you are using crush-failure-domain=osd when
And the second issue is that with k=4, m=2 you'll have min_size = 5, which
means that if one host is down your PGs become inactive; that is most
likely what you experienced.
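For illustration (assuming a pool named ecpool), the value can be checked
and, if you accept the extra risk, lowered so I/O continues while a host is
down:

    ceph osd pool get ecpool min_size    # defaults to k+1, i.e. 5 for k=4 m=2
    ceph osd pool set ecpool min_size 4  # keeps PGs active with one host down,
                                         # at the cost of a reduced safety margin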
Quoting David Rivera:
First problem here is that you are using crush-failure-domain=osd when you
should use crush-failure-domain=host ...
On 12/5/23 10:01, duluxoz wrote:
Thanks David, I knew I had something wrong :-)
Just for my own edification: Why is k=2, m=1 not recommended for
production? Considered too "fragile", or something else?
It is the same as a replicated pool with size=2. Only one host can go
down; after that you risk losing data.
On 12/5/23 10:06, duluxoz wrote:
I'm confused - doesn't k=4, m=2 mean that you can lose any 2 out of the 6
OSDs?
Yes, but OSDs are not a good failure zone.
The host is the smallest failure zone that is practicable and safe
against data loss.
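To see what a profile (and therefore the pools built on it) is actually
using, assuming a profile named ec-k4m2:

    ceph osd erasure-code-profile ls
    ceph osd erasure-code-profile get ec-k4m2   # shows k, m and crush-failure-domain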
Regards
--
Robert Sander
Heinlein Consulting GmbH
Hi Robert,
On 05/12/2023 at 10:05, Robert Sander wrote:
On 12/5/23 10:01, duluxoz wrote:
Thanks David, I knew I had something wrong :-)
Just for my own edification: Why is k=2, m=1 not recommended for
production? Considered too "fragile", or something else?
It is the same as a replicated pool with size=2. ...
On Tue, Dec 5, 2023 at 5:16 AM Patrick Begou
wrote:
>
> On my side, I'm working on building my first (small) Ceph cluster using
> E.C. and I was thinking about 5 nodes and k=4, m=2. With a failure domain
> of host and several OSDs per node, in my mind this setup may run degraded
> with 3 nodes using
Hi Matthew,
To make a simplistic comparison, it is generally not recommended to use RAID 5
with large disks (>1 TB) due to the probability (low but not zero) of
losing another disk during the rebuild.
So imagine losing a host full of disks.
Additionally, min_size=1 means you can no longer maintain yo
Hi,
To return to my comparison with SANs: on a SAN you have spare disks to
rebuild a failed disk.
On Ceph, you therefore need at least one more host than chunks (k+m+1) so
that the cluster can re-create the missing chunks after a host failure.
If we take into consideration the formalities/delivery times of a new
server, k+m+2 is not a luxury (depending on the growth of your data volume).
Ok, so I've misunderstood the meaning of failure domain. If there is no
way to request using 2 OSDs/node with node as the failure domain, then with
5 nodes k=3+m=1 is not secure enough and I will have to use k=2+m=2, so
like a RAID 1 setup. A little bit better than replication from the point of
view of glob
Hi Patrick,
If your hardware is new, you are confident in your hardware support, and
you can consider future expansion, you can possibly start with k=3 and
m=2.
It is true that we generally prefer a k that is a power of two for
splitting the data, but k=3 does the job.
Be careful, it is difficult/painful to change an EC profile once a pool is
using it; you end up creating a new pool and migrating the data.
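For what it's worth, k=3, m=2 on 5 hosts with a host failure domain puts
one chunk per host and gives k/(k+m) = 3/5 = 60% usable capacity, versus
about 33% for 3x replication. A sketch of the profile (the name is a
placeholder):

    ceph osd erasure-code-profile set ec-k3m2 k=3 m=2 crush-failure-domain=host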
On Tue, Dec 5, 2023 at 6:35 AM Patrick Begou
wrote:
>
> Ok, so I've misunderstood the meaning of failure domain. If there is no
> way to request using 2 osd/node and node as failure domain, with 5 nodes
> k=3+m=1 is not secure enough and I will have to use k=2+m=2, so like a
> raid1 setup. A litt
You can structure your crush map so that you get multiple EC chunks per
host in a way that you can still survive a host outage even though
you have fewer hosts than k+1.
For example, if you run an EC 4+2 profile on 3 hosts you can structure your
crushmap so that you have 2 chunks per host. This way a single host failure
only takes out 2 of the 6 chunks.
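A rough sketch of such a rule (the rule name and the "default" root are
assumptions here; you would normally decompile the CRUSH map with crushtool,
add the rule, recompile and inject it). It picks 3 hosts and then 2 OSDs in
each, so the 6 chunks of a 4+2 pool end up as 2 per host:

    rule ec42_two_per_host {
            id 2
            type erasure
            step set_chooseleaf_tries 5
            step set_choose_tries 100
            step take default
            step choose indep 3 type host
            step chooseleaf indep 2 type osd
            step emit
    }

The pool is then pointed at the rule with
    ceph osd pool set <pool> crush_rule ec42_two_per_host
Note that a host failure still removes 2 chunks at once, so with the
default min_size = k+1 = 5 the PGs pause until recovery unless min_size is
lowered to 4.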
On 06/12/2023 at 00:11, Rich Freeman wrote:
On Tue, Dec 5, 2023 at 6:35 AM Patrick Begou
wrote:
Ok, so I've misunderstood the meaning of failure domain. If there is no
way to request using 2 osd/node and node as failure domain, with 5 nodes
k=3+m=1 is not secure enough and I will have to use
Hi Patrick,
Yes K and M are chunks, but the default crush map is a chunk per host,
which is probably the best way to do it, but I'm no expert. I'm not sure
why you would want to do a crush map with 2 chunks per host and min_size 4,
as it's just asking for trouble at some point, in my opinion. Any
... is no way around it. I was happy when I got the
extra hosts.
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
On Wed, Dec 6, 2023 at 9:25 AM Patrick Begou
wrote:
>
> My understanding was that k and m were for EC chunks, not hosts. 🙁 Of
> course if k and m are hosts the best choice would be k=2 and m=2.
A few others have already replied - as they said, if the failure domain
is set to host then it will put only one chunk on each host.
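If you want to verify where the chunks of a PG actually land (the PG id
below is only an example), you can map a PG to its OSDs and match those
against the host layout:

    ceph pg map 2.1a        # shows the up/acting OSD set for that PG
    ceph osd tree           # shows which host each of those OSDs belongs to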