Do you do things like [1] inside the VMs?

[1]
echo 120 > /sys/block/sda/device/timeout
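For what it's worth, a sketch of applying the same setting to every SCSI disk in a guest rather than just sda (the sd* glob and the 120-second value are assumptions here; to make it persist across reboots people typically use a udev rule or an rc.local entry):

```shell
# Sketch (assumptions: guest disks appear as sd*, 120 s is an acceptable timeout).
# Raises the kernel's SCSI command timeout so the guest rides out short RBD
# stalls instead of crashing or remounting its root filesystem read-only.
# Must be run as root inside the VM.
for t in /sys/block/sd*/device/timeout; do
    [ -e "$t" ] && echo 120 > "$t"
done
```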



-----Original Message-----
From: Francois Legrand [mailto:f...@lpnhe.in2p3.fr] 
Sent: donderdag 25 juni 2020 19:25
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Removing pool in nautilus is incredibly slow

I also had this kind of symptom with Nautilus.
Replacing a failed disk (starting from a healthy cluster) generated degraded objects.
We also have a Proxmox cluster accessing VM images stored in our Ceph 
cluster via RBD. 
Each time I performed an operation on the Ceph cluster, such as adding 
or removing a pool, most of our Proxmox VMs lost contact with their 
system disks in Ceph and crashed (or remounted their system storage 
read-only). 
At first I thought it was a network problem, but now I am sure it's 
related to Ceph becoming unresponsive during background operations.
For now, Proxmox cannot even access the Ceph storage via RBD (it fails 
with a timeout).
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
