> so size 4 / min_size 2 would be a lot better (of course)
More copies (or parity) are always more reliable, but one quickly gets into
diminishing returns.
In your scenario you might look into stretch mode, which currently requires
4 replicas. In the future it might also support EC.
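For reference, bumping an existing replicated pool and (roughly) enabling
stretch mode look like this; the pool, CRUSH rule, and monitor names below are
placeholders, so check the stretch mode documentation for your release first:

  # raise replication on an existing replicated pool
  ceph osd pool set mypool size 4
  ceph osd pool set mypool min_size 2

  # stretch mode over two data centers plus a tiebreaker monitor,
  # assuming a CRUSH rule named stretch_rule already exists
  # (the docs recommend two monitors per data center plus the tiebreaker)
  ceph mon set election_strategy connectivity
  ceph mon set_location a datacenter=dc1
  ceph mon set_location b datacenter=dc2
  ceph mon set_location c datacenter=dc3
  ceph mon enable_stretch_mode c stretch_rule datacenter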
Hi,
I ended up recovering the whole set of OSDs to get the original Ceph cluster back.
I managed to get the cluster running again. However, its status is
as shown below:
bash-4.4$ ceph -s
  cluster:
    id:     3f271841-6188-47c1-b3fd-90fd4f978c76
    health: HEALTH_WARN
            7 daemons
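To get the full text behind that warning (and, if the "7 daemons" line refers
to crashed daemons, the crash details), the usual starting points are:

  ceph health detail           # full text of every active health warning
  ceph crash ls                # recent daemon crashes, if any
  ceph crash info <crash-id>   # backtrace and metadata for one crash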
Hi,
The DIRTY field was removed in Octopus v15.2.15 when cache tiering is not
in use.
Check [1] for the PR and [2] for the release which includes this PR.
[1] https://github.com/ceph/ceph/pull/42862
[2] https://docs.ceph.com/en/latest/releases/octopus/
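Assuming the column you are missing is the one from "ceph df detail", you can
confirm this on your own cluster with:

  ceph versions     # check that the mons/mgr are on v15.2.15 or later
  ceph df detail    # per-pool stats; DIRTY is omitted when cache tiering is unused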
On Fri, Mar 3, 2023 at 5:39 AM wrote:
> Hi,
Hi, can you paste this output:
ceph orch ls osd --export
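The export prints the OSD service specs as YAML; for a simple all-devices setup
it looks roughly like this (the service_id and device filter below are just an
example):

  $ ceph orch ls osd --export
  service_type: osd
  service_id: all-available-devices
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      all: true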
Quoting claas.go...@contact-software.com:
Hi Community,
currently I'm installing an NVMe-only storage cluster from scratch with
cephadm (v17.2.5). Everything works fine. Each of my nodes (6)
has 3 enterprise NVMes with 7 TB
Hi,
I have some orchestrator issues on our cluster running 16.2.9 with RGW-only
services.
We first noticed these issues a few weeks ago when adding new hosts to
the cluster - the orchestrator was not detecting the new drives to build the
OSD containers for them. While debugging the mgr logs, I noticed
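The checks that usually narrow this kind of thing down (hostnames below are
placeholders):

  ceph orch device ls --refresh    # force a re-scan of devices on all hosts
  ceph cephadm check-host node01   # verify cephadm can reach and inspect the host
  ceph log last cephadm            # recent cephadm/orchestrator log entries
  ceph mgr fail                    # fail over the active mgr to restart the orchestrator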
On Sat, Mar 4, 2023 at 08:08 wrote:
> ceph 16.2.11,
> is it safe to enable scrub and deep scrub during backfilling?
> I have long recovery/backfilling due to a new crushmap; backfilling is going
> slow and the deep-scrub interval has expired, so I have many PGs not deep-scrubbed
> in time.
It is safe.
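If the scrub load does become a problem while the backfill runs, the
cluster-wide flags can pause scrubbing temporarily, and ceph health detail
shows what is overdue:

  ceph osd set noscrub
  ceph osd set nodeep-scrub
  # ... wait for the backfill to finish ...
  ceph osd unset noscrub
  ceph osd unset nodeep-scrub

  # list the PGs that are overdue for a deep scrub
  ceph health detail | grep 'not deep-scrubbed'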