Hello ceph-users,
Recently I started preparing a 3-node Ceph cluster (on bare metal hardware).
We have the HW configuration ready: 3 HPE Synergy 480 Gen10
Compute Module servers, each with 2x Intel Xeon Gold 6252 CPUs
(2.1GHz/24-core), 192GB RAM, and 2x300GB HDD for the OS, RHEL 8.6 (already
installed
Hi,
no pool is EC.
Primary affinity works in Octopus on a replicated pool.
A Nautilus EC pool works.
Hi,
I found this request [1] targeting version 18; it seems that is not
easily possible at the moment.
[1] https://tracker.ceph.com/issues/54308
Quoting Vladimir Brik:
Hello
Is it possible to increase the retention period of the
Prometheus service deployed with cephadm?
Hi,
no pool is EC.
20. 5. 2022 18:19:22 Dan van der Ster :
> Hi,
>
> Just a curiosity... It looks like you're comparing an EC pool in octopus to a
> replicated pool in nautilus. Does primary affinity work for you in octopus on
> a replicated pool? And does a nautilus EC pool work?
>
> .. Dan
Hi,
Just a curiosity... It looks like you're comparing an EC pool in octopus to
a replicated pool in nautilus. Does primary affinity work for you in
octopus on a replicated pool? And does a nautilus EC pool work?
.. Dan
On Fri., May 20, 2022, 13:53 Denis Polom wrote:
> Hi
>
> I observed hig
Hi,
yes, I had to change the procedure as well:
1. Stop the OSD daemon
2. Mark the OSD out in the CRUSH map
But as you write, that makes PGs degraded.
However, it still looks like a bug to me.
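For reference, the two steps above correspond to commands roughly like these (a sketch only; osd.12 is a placeholder ID, and the systemd unit name depends on how the cluster was deployed):

```shell
# 1. Stop the OSD daemon (non-cephadm, systemd-managed deployment assumed)
systemctl stop ceph-osd@12

# 2. Mark the OSD out so its PGs get remapped to other OSDs
ceph osd out osd.12
```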
20. 5. 2022 17:25:47 Wesley Dillingham :
> This sounds similar to an inquiry I submitted a couple years ago [1] wh
This sounds similar to an inquiry I submitted a couple of years ago [1],
whereby I discovered that the choose_acting function does not consider
primary affinity when choosing the primary OSD. I had assumed
it would when developing my procedure for replacing failing disks. After
that discove
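For anyone following along, primary affinity itself is set per OSD; a value of 0 means the OSD is avoided as primary wherever another replica can serve (osd.12 is a placeholder):

```shell
# Make osd.12 least-preferred as a primary for the PGs it holds
ceph osd primary-affinity osd.12 0.0

# Verify: ceph osd tree shows the value in the PRI-AFF column
ceph osd tree
```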
To clarify a bit, "ceph orch host rm --force" won't actually
touch any of the daemons on the host. It just stops cephadm from managing
the host. I.e. it won't add/remove daemons on the host. If you remove the
host then re-add it with the new host name nothing should actually happen
to the daemons
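So the rename-without-touching-daemons flow described above would look roughly like this (hostnames and the IP are placeholders):

```shell
# Stop cephadm from managing the host; running daemons are left untouched
ceph orch host rm old-hostname --force

# Re-add the host under its new name (address given explicitly to avoid DNS surprises)
ceph orch host add new-hostname 192.0.2.10
```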
Hi
I have observed high latencies and hanging mount points since the Octopus
release, and it is still observed on the latest Pacific, while draining an OSD.
Cluster setup:
Ceph Pacific 16.2.7
Cephfs with EC data pool
EC profile setup:
crush-device-class=
crush-failure-domain=host
crush-root=default
jerasure-
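The profile listing above is cut off; for context, an EC profile of that shape is typically created with something along these lines (the profile name and the k/m values here are illustrative, not the poster's actual settings):

```shell
# Create a jerasure EC profile with host failure domain (example values)
ceph osd erasure-code-profile set cephfs_ec \
    plugin=jerasure k=4 m=2 \
    crush-failure-domain=host crush-root=default

# Inspect the resulting profile
ceph osd erasure-code-profile get cephfs_ec
```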
Hey Adam,
thanks for your fast reply.
That's a bit more invasive and risky than I was hoping for.
But if this is the only way, I guess we need to do this.
Would it be advisable to set some maintenance flags like noout, nobackfill,
and norebalance?
And maybe stop the ceph target on the host I'm re-adding?
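Setting and later clearing those flags would look like this (whether they are advisable for this particular operation is exactly the open question above):

```shell
# Set cluster-wide maintenance flags before the host operation
ceph osd set noout
ceph osd set nobackfill
ceph osd set norebalance

# ...perform the host removal / re-add...

# Clear the flags afterwards
ceph osd unset noout
ceph osd unset nobackfill
ceph osd unset norebalance
```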