[ceph-users] Moving from ceph-ansible to cephadm and upgrading from pacific to quincy

2023-12-07 Thread wodel youchi
Hi, I have an Openstack platform deployed with Yoga and ceph-ansible pacific on Rocky 8. Now I need to upgrade to Openstack zed with quincy on Rocky 9. This is the upgrade path I have traced: - upgrade my nodes to Rocky 9, keeping Openstack yoga with ceph-ansible pacific. - convert c
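
For reference, the cephadm adoption flow converts each legacy (ceph-ansible) daemon in place before any version upgrade; a minimal sketch, assuming placeholder host names, daemon IDs, and target version (none of these are from the thread):

    # list the legacy daemons cephadm can see on this host
    cephadm ls
    # adopt monitors and managers first, then OSDs, on every host
    cephadm adopt --style legacy --name mon.host1
    cephadm adopt --style legacy --name mgr.host1
    cephadm adopt --style legacy --name osd.0
    # once the whole cluster is managed by cephadm, run the upgrade
    ceph orch upgrade start --ceph-version 17.2.7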

[ceph-users] MDS recovery with existing pools

2023-12-07 Thread Eugen Block
Hi, following up on the previous thread (After hardware failure tried to recover ceph and followed instructions for recovery using OSDS), we were able to get ceph back into a healthy state (including the unfound object). Now the CephFS needs to be recovered and I'm having trouble to fully
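
For context, the upstream disaster-recovery docs describe recreating a filesystem on top of surviving pools; a rough sketch, assuming placeholder fs and pool names and a release that supports the --recover flag:

    # recreate the filesystem entry against the existing pools
    ceph fs new myfs cephfs_metadata cephfs_data --recover --allow-dangerous-metadata-overlay
    # salvage what the journal still holds, then reset it
    cephfs-journal-tool --rank=myfs:0 event recover_dentries summary
    cephfs-journal-tool --rank=myfs:0 journal reset
    cephfs-table-tool myfs:0 reset session
    # let an MDS take rank 0 again
    ceph fs set myfs joinable true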

[ceph-users] Re: Difficulty adding / using a non-default RGW placement target & storage class

2023-12-07 Thread Anthony D'Atri
Following up on my own post from last month, for posterity. The trick was updating the period. I'm not using multisite, but Rook seems to deploy so that one can. -- aad > On Nov 6, 2023, at 16:52, Anthony D'Atri wrote: > > I'm having difficulty adding and using a non-default placement target
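
Sketched with placeholder zonegroup, placement, and pool names (only the period commit is confirmed by the post), the sequence looks roughly like:

    # define the placement target in the zonegroup and zone
    radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id temporary
    radosgw-admin zone placement add --rgw-zone default --placement-id temporary \
        --data-pool default.rgw.temporary.data --index-pool default.rgw.temporary.index
    # the easily-missed step: commit a new period so the RGWs pick it up
    radosgw-admin period update --commit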

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-12-07 Thread Yuri Weinstein
The issue https://github.com/ceph/ceph/pull/54772 was resolved and we continue with the 18.2.1 release On Fri, Dec 1, 2023 at 11:12 AM Igor Fedotov wrote: > > Hi Yuri, > > Looks like that's not THAT critical and complicated as it's been thought > originally. User has to change bluefs_shared_alloc
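
The truncated option is presumably bluefs_shared_alloc_size (an assumption, not confirmed by the quote); changing it would look roughly like this, with an illustrative value:

    # assumption: the option under discussion; the value is illustrative only
    ceph config set osd bluefs_shared_alloc_size 32768
    # affected OSDs must be restarted for the new value to apply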

[ceph-users] nfs export over RGW issue in Pacific

2023-12-07 Thread Adiga, Anantha
Hi, root@a001s016:~# cephadm version Using recent ceph image ceph/daemon@sha256:261bbe628f4b438f5bf10de5a8ee05282f2697a5a2cb7ff7668f776b61b9d586 ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable) root@a001s016:~# root@a001s016:~# cephadm shell Inferring fsid 60

[ceph-users] Re: nfs export over RGW issue in Pacific

2023-12-07 Thread Adam King
Handling of nfs exports over rgw, including the `ceph nfs export create rgw` command, wasn't added to the nfs module in pacific until 16.2.7. On Thu, Dec 7, 2023 at 1:35 PM Adiga, Anantha wrote: > Hi, > > > root@a001s016:~# cephadm version > > Using recent ceph image c
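
On 16.2.7 and later the command takes roughly this shape; a sketch with placeholder cluster, pseudo-path, and bucket names (option spelling varies slightly between releases):

    ceph nfs export create rgw --cluster-id mynfs --pseudo-path /mybucket --bucket mybucket
    # verify the export was created
    ceph nfs export ls mynfs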

[ceph-users] Re: nfs export over RGW issue in Pacific

2023-12-07 Thread Adiga, Anantha
Thank you Adam!! Anantha

[ceph-users] How to replace a disk with minimal impact on performance

2023-12-07 Thread Michal Strnad
Hi guys! Based on our observation of the impact of the balancer on the performance of the entire cluster, we have drawn conclusions that we would like to discuss with you. - A newly created pool should be balanced before being handed over to the user. This, I believe, is quite evident.
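
The usual levers for keeping such data movement gentle, as a sketch of common practice rather than anything proposed in the post (the OSD ID is a placeholder):

    # throttle backfill while data moves (mclock-based releases may override this)
    ceph config set osd osd_max_backfills 1
    # replace a failed drive while preserving the OSD ID, avoiding a second rebalance
    ceph orch osd rm 12 --replace
    # check what the balancer is currently doing
    ceph balancer status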