Hi,
thanks for looking into this: our system disks also wear out too quickly!
Here are the numbers on our small cluster.
Best,
1) iotop results:
TID PRIO USER DISK READ DISK WRITE SWAPIN IO COMMAND
6426
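For reference, accumulated per-process write totals like the (truncated) listing above can be captured non-interactively with iotop; the exact flags below are assumptions about a reasonably recent iotop build:

```shell
# -b batch mode (no curses UI), -o only show processes actually doing I/O,
# -a accumulated totals instead of per-interval bandwidth,
# -n 10 iterations, -d 1 one-second delay between samples.
iotop -b -o -a -n 10 -d 1 > iotop-writes.log
```

Running this as root over a longer window makes it easier to see which daemons dominate the write load on the system disks.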
On 13/06/2022 at 18:37, Stefan Kooman wrote:
CAUTION: This email originated from outside the organization. Do not
click links or open attachments unless you recognize the sender and
know the content is safe.
On 6/13/22 18:21, Eric Le Lay wrote:
Those objects are deleted but have
On 13/06/2022 at 17:54, Eric Le Lay wrote:
On 10/06/2022 at 11:58, Stefan Kooman wrote:
On 6/10/22 11:41, Eric Le Lay wrote:
Hello list,
my ceph cluster was upgraded from nautilus to octopus last October,
causing snaptrims to overload the OSDs, so I had to disable them
(neither bluefs_buffered_io=false nor =true helped).
Now I've copied the data elsewhere, removed all clients, and am trying
to fix the cluster.
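For context, disabling snaptrims and toggling the buffered-I/O option can be done from the ceph CLI roughly as follows; treat this as a sketch, since the exact behaviour (in particular whether bluefs_buffered_io applies at runtime or only after an OSD restart) depends on the release:

```shell
# Pause snapshot trimming cluster-wide via the osdmap flag.
ceph osd set nosnaptrim

# Toggle BlueFS buffered I/O for all OSDs; on some releases this only
# takes effect after the OSDs are restarted.
ceph config set osd bluefs_buffered_io false

# Re-enable trimming later with:
#   ceph osd unset nosnaptrim
```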
Scrapping it and starting
Dear list,
we run a 7-node Proxmox cluster with Ceph Nautilus (14.2.18), with two
Ceph filesystems, mounted in Debian buster VMs using the cephfs kernel
module.
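A minimal sketch of such a kernel mount, assuming hypothetical monitor address, filesystem name, and credentials; on Nautilus-era kernels the filesystem is typically selected with the mds_namespace mount option:

```shell
# Mount one of the two filesystems from inside a guest VM.
# 10.0.0.1, fs1, and the guest keyring are placeholders, not real values.
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs1 \
    -o name=guest,secretfile=/etc/ceph/guest.secret,mds_namespace=fs1
```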
Four times in the last six months, all MDS servers failed one after
the other with an assert, either in rename_prepare or unlink