[ceph-users] openstack rgw swift -- reef vs quincy

2023-09-16 Thread Shashi Dahal
Hi All, We have 3 openstack clusters, each with their own ceph. The openstack versions are identical (using openstack-ansible) and all rgw-keystone related configs are also the same. The only difference is the ceph version: one is pacific, one is quincy, while the other (new) one is reef. The

[ceph-users] cephadm, new OSD

2023-06-28 Thread Shashi Dahal
Hi, I added new OSDs on the ceph servers (orch is cephadm). They are recognized as osd.12 and osd.13. ceph pg dump shows no pgs are on osd 12 and 13; they are all empty. ceph osd tree shows that they are up. ceph osd df shows them to be all 0 in reweight and size etc. ceph orch device ls
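When new OSDs appear in the tree as up but with zero size and reweight, a first check is whether they ever received a CRUSH weight. A minimal diagnostic sketch, assuming the OSD ids from this thread; the weight value below is hypothetical and should match the device size in TiB:

```shell
# Inspect how the new OSDs look in the CRUSH map and in usage stats.
ceph osd tree
ceph osd df

# If the CRUSH weight is 0, the OSD will never be chosen to hold PGs.
# Set it to roughly the device size in TiB (value is an example).
ceph osd crush reweight osd.12 1.81984
ceph osd crush reweight osd.13 1.81984

# If instead the REWEIGHT column (not the CRUSH weight) is 0,
# bring it back to 1 so the OSD is eligible for data again.
ceph osd reweight osd.12 1.0
ceph osd reweight osd.13 1.0
```

Once either weight is non-zero, `ceph -s` should show PGs backfilling onto the new OSDs.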

[ceph-users] cephadm and remoto package

2023-05-15 Thread Shashi Dahal
Hi, I followed this documentation: https://docs.ceph.com/en/pacific/cephadm/adoption/ This is the error I get when trying to enable cephadm. ceph mgr module enable cephadm Error ENOENT: module 'cephadm' reports that it cannot run on the active manager daemon: loading remoto library: No module
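The error means the remoto Python library is missing on the host running the active mgr daemon. A sketch of one common way to resolve it; the package name is typical for RPM-based distros and may differ on yours:

```shell
# Install remoto on the host where the active ceph-mgr runs.
dnf install -y python3-remoto
# or, if no distro package is available, via pip:
pip3 install remoto

# Fail over the active mgr so a restarted daemon picks up the library,
# then retry enabling the cephadm module.
ceph mgr fail
ceph mgr module enable cephadm
```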

[ceph-users] ceph quincy rgw openstack howto

2023-01-18 Thread Shashi Dahal
Hi, How do I set values for rgw_keystone_url and other related fields that cannot be changed via the GUI under cluster configuration? ceph quincy is deployed using cephadm. -- Cheers, Shashi
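Options the dashboard refuses to edit can usually be set from the CLI with `ceph config set`, scoped to the RGW daemons. A sketch with placeholder values; the Keystone URL, credentials, and the `rgw.default` service name are assumptions, not details from the thread:

```shell
# Keystone integration options for all RGW daemons (values are examples).
ceph config set client.rgw rgw_keystone_url https://keystone.example.com:5000
ceph config set client.rgw rgw_keystone_api_version 3
ceph config set client.rgw rgw_keystone_admin_user swift
ceph config set client.rgw rgw_keystone_admin_password secret
ceph config set client.rgw rgw_keystone_admin_domain Default
ceph config set client.rgw rgw_keystone_admin_project service
ceph config set client.rgw rgw_keystone_accepted_roles "member,admin"

# Restart the cephadm-deployed RGW service so the changes take effect
# (replace rgw.default with your actual service name from `ceph orch ls`).
ceph orch restart rgw.default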

[ceph-users] NoSuchBucket when bucket exists ..

2023-01-16 Thread Shashi Dahal
Hi, In a working All-in-one test setup (where making the bucket public works from the browser) radosgw-admin bucket list [ "711138fc95764303b83002c567ce0972/demo" ] I have another cluster where openstack and ceph are separate. I have set the same config options in ceph.conf ..

[ceph-users] NoSuchBucket when bucket exists ..

2023-01-09 Thread Shashi Dahal
Hi, In a working All-in-one (AIO) test setup of openstack & ceph (where making the bucket public works from the browser) radosgw-admin bucket list [ "711138fc95764303b83002c567ce0972/demo" ] I have another cluster where openstack and ceph are separate. I have set the same config options
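The listed name 711138fc95764303b83002c567ce0972/demo is tenant-scoped: the Keystone project id acts as the RGW tenant. A NoSuchBucket on the second cluster is often a request that addresses the bucket without its tenant. A few hedged checks, comparing settings between the working AIO cluster and the failing one; these are possibilities to inspect, not a confirmed fix:

```shell
# Confirm the bucket exists under its tenant on the failing cluster.
radosgw-admin bucket stats --bucket "711138fc95764303b83002c567ce0972/demo"

# Compare tenant-related RGW settings between the two clusters;
# a mismatch here commonly produces NoSuchBucket.
ceph config get client.rgw rgw_keystone_implicit_tenants
ceph config get client.rgw rgw_swift_account_in_url
```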

[ceph-users] Re: all monitors deleted, state recovered using documentation .. at what point to start osds ?

2022-11-10 Thread Shashi Dahal
output. > > Since you have a monitor quorum 1 out of 1, you can start up OSDs, but I > would recommend getting all your mons/mgrs back up first. > > On Tue, Nov 8, 2022 at 5:56 PM Shashi Dahal wrote: > >> Hi, >> >> Unfortunately, all 3 monitors were lost. >> I follow

[ceph-users] all monitors deleted, state recovered using documentation .. at what point to start osds ?

2022-11-08 Thread Shashi Dahal
Hi, Unfortunately, all 3 monitors were lost. I followed this -> https://docs.ceph.com/en/quincy/rados/troubleshooting/troubleshooting-mon/#mon-store-recovery-using-osds and it is in the current state now. id: 234c6a96-8101-49d1-b354-1110e759d572 health: HEALTH_WARN mon is
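Per the reply in this thread, once the rebuilt mon has quorum the OSDs can be started, though bringing all mons/mgrs back first is safer. On a systemd-managed (non-cephadm) deployment that sequence would look roughly like the sketch below; the target unit names are the usual defaults and an assumption here:

```shell
# Verify the recovered monitor actually has quorum before touching OSDs.
ceph quorum_status --format json-pretty

# Bring remaining mons and the mgrs back first, then start the OSDs.
systemctl start ceph-mon.target
systemctl start ceph-mgr.target
systemctl start ceph-osd.target

# Watch the cluster recover.
ceph -s
```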