[ceph-users] Re: octopus garbage collector makes slow ops

2021-07-30 Thread mahnoosh shahidi
Hi Mark, thanks for your response. I did manual compaction on all OSDs using ceph-kvstore-tool. It reduced the number of slow ops, but it didn't solve the problem completely. On Mon, Jul 26, 2021 at 8:06 PM Mark Nelson wrote: > Yeah, I suspect that regular manual compaction might be the necessary
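
For reference, a minimal sketch of what a manual compaction run can look like (the OSD id, data path, and the online variant are placeholders, not details from this thread):

    # Offline compaction with ceph-kvstore-tool: stop the OSD first.
    systemctl stop ceph-osd@12
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-12 compact
    systemctl start ceph-osd@12

    # Online alternative (if your release supports it): ask the running OSD
    # to compact its RocksDB directly.
    ceph tell osd.12 compact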

[ceph-users] [cinder-backup][ceph] replicate volume between sites

2021-07-30 Thread Tony Liu
Hi, I have two sites with OpenStack Victoria deployed by Kolla and Ceph Octopus deployed by cephadm. As far as I know, either Swift (implemented by RADOSGW) or RBD can be used as the backend of cinder-backup. My intention is to use one of those options to replicate Cinder volumes from one site to
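
For context, the cinder-backup RBD driver can in principle point at a remote cluster's pool; a hypothetical sketch (the driver name is Cinder's, but the override path, ceph.conf, user and pool names are assumptions, not from this deployment):

    # Illustrative kolla-ansible config override for cinder-backup,
    # targeting the second site's cluster; values are placeholders.
    cat >> /etc/kolla/config/cinder/cinder-backup.conf <<'EOF'
    [DEFAULT]
    backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
    backup_ceph_conf = /etc/ceph/site-b.conf
    backup_ceph_user = cinder-backup
    backup_ceph_pool = backups
    EOF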

[ceph-users] Maturity of Cephadm vs ceph-ansible for new Pacific deployments

2021-07-30 Thread Alex Petty
Hello, I'm seeking some community opinions about the stability of Cephadm on a recent Ceph release, like v16.2.5. Cephadm looks like a more streamlined and quicker initial deployment process, but I'd like to hear thoughts from someone who has lived with it for some time. Additionally, I see less
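
For reference, the "quicker initial deployment" usually boils down to something like the following (host name, IPs and the all-available-devices OSD spec are placeholder choices, not a recommendation):

    # Minimal cephadm bootstrap sketch; values are placeholders.
    cephadm bootstrap --mon-ip 192.168.1.10
    ceph orch host add host2 192.168.1.11
    ceph orch apply osd --all-available-devices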

[ceph-users] create a Multi-zone-group sync setup

2021-07-30 Thread Boris Behrens
Hi people, I'm trying to create a multi-zone-group setup (as described here: https://docs.ceph.com/en/latest/radosgw/multisite/), but I simply fail. I just created a test cluster to mess with it, and it fails no matter what I try. Is there a howto available? I don't want to get a multi-zone setup,
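
For anyone attempting the same thing, a rough sketch of the commands involved on the master side (realm, zonegroup, zone names and endpoints are placeholders; a second cluster would additionally pull the realm before creating its own zone):

    # Placeholder names throughout; see the multisite docs linked above.
    radosgw-admin realm create --rgw-realm=myrealm --default
    radosgw-admin zonegroup create --rgw-realm=myrealm --rgw-zonegroup=zg-a \
        --endpoints=http://rgw-a:8080 --master --default
    radosgw-admin zone create --rgw-zonegroup=zg-a --rgw-zone=zone-a \
        --endpoints=http://rgw-a:8080 --master --default
    radosgw-admin period update --commit

    # Second zonegroup in the same realm: metadata is synced realm-wide,
    # while object data stays within each zonegroup unless a sync policy
    # says otherwise.
    radosgw-admin zonegroup create --rgw-realm=myrealm --rgw-zonegroup=zg-b \
        --endpoints=http://rgw-b:8080
    radosgw-admin zone create --rgw-zonegroup=zg-b --rgw-zone=zone-b \
        --endpoints=http://rgw-b:8080
    radosgw-admin period update --commit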

[ceph-users] Re: Rogue osd / CephFS / Adding osd

2021-07-30 Thread Janne Johansson
On Fri, 30 July 2021 at 15:22, Thierry MARTIN wrote: > Hi all! > We are facing strange behaviors on two clusters we have at work (both > v15.2.9 / CentOS 7.9): > * In the 1st cluster we are getting errors about multiple degraded PGs > and all of them are linked with a "rogue" OSD whose ID

[ceph-users] Rogue osd / CephFS / Adding osd

2021-07-30 Thread Thierry MARTIN
Hi all! We are facing strange behaviors on two clusters we have at work (both v15.2.9 / CentOS 7.9): * In the 1st cluster we are getting errors about multiple degraded PGs, and all of them are linked with a "rogue" OSD whose ID is very big ("osd.2147483647"). This OSD doesn't show wi
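
Not from this thread, but a few generic commands that can help narrow down which PGs reference that id (the pg id below is hypothetical):

    ceph osd tree                 # real OSD ids in the cluster
    ceph pg dump_stuck            # degraded / undersized PGs
    ceph pg 2.1f query | less     # up/acting sets of one affected PG (placeholder pgid)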

[ceph-users] Re: Octopus dashboard displaying the wrong OSD version

2021-07-30 Thread Ernesto Puerta
Hi Shain, Thanks for the update. I didn't find any screenshot in your previous email (maybe the list server removed that). Just for tracking purposes and for other users hitting this very same issue, would you mind creating a tracker here (https://tracker.ceph.com/projects/dashboard/issues/new) an

[ceph-users] Re: Dashboard Monitoring: really suppress messages

2021-07-30 Thread Eugen Block
Hi, you can disable or modify the configured alerts in /var/lib/ceph/<fsid>/etc/prometheus/alerting/ceph_alerts.yml. After restarting the container, those changes should be applied. Regards, Eugen. Quoting E Taka <0eta...@gmail.com>: Hi, we have enabled Cluster → Monitoring in the Dashboard. S
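
A sketch of that workflow on a cephadm deployment (the fsid path segment and the prometheus service name are assumptions; adjust to whatever ceph orch ls shows):

    # Edit or drop the unwanted rule, then restart the container so it is re-read.
    vi /var/lib/ceph/<fsid>/etc/prometheus/alerting/ceph_alerts.yml
    ceph orch restart prometheus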

[ceph-users] Dashboard Monitoring: really suppress messages

2021-07-30 Thread E Taka
Hi, we have enabled Cluster → Monitoring in the Dashboard. Some of the regularly shown messages are not really useful for us (packet drops in OVS) and we want to suppress them. Creating a silence does not help, because the messages still appear, just in blue instead of red. Is there a way t

[ceph-users] Re: iSCSI HA (ALUA): Single disk image shared by multiple iSCSI gateways

2021-07-30 Thread Paulo Carvalho
Hi, I'm sorry, I was searching the wrong way, but now things are working well. Thank you. Best regards, Paulo Carvalho -Original Message- From: Paulo Carvalho To: ceph-users@ceph.io Subject: iSCSI HA (ALUA): Single disk image shared by multiple iSCSI gateways Date: Thu, 29 Jul 202