[ceph-users] Re: [EXTERNAL] [Pacific] ceph orch device ls do not returns any HDD

2023-10-24 Thread 544463199
Hi all, I think this patch might fix the problem (https://github.com/ceph/ceph/pull/49954). It hadn't been merged for a long time; I asked a few days ago and it was merged, so you can try it. Best wishes

[ceph-users] Re: Moving devices to a different device class?

2023-10-24 Thread Janne Johansson
> The documentation describes that I could set a device class for an OSD with a command like:
>
> `ceph osd crush set-device-class CLASS OSD_ID [OSD_ID ..]`
>
> Class names can be arbitrary strings like 'big_nvme'. Before setting a new device class to an OSD that already has an assigned
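The command being quoted only works on an OSD that has no class yet; Ceph refuses to overwrite an existing class until it is removed. A minimal sketch of the workflow under discussion (osd.7 is illustrative; `big_nvme` is the class name from the thread):

```shell
# An OSD's existing class must be removed first, otherwise
# set-device-class fails with EBUSY.
ceph osd crush rm-device-class osd.7
ceph osd crush set-device-class big_nvme osd.7

# Confirm the OSD now appears under the new class
ceph osd tree | grep big_nvme
```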

[ceph-users] Re: Moving devices to a different device class?

2023-10-24 Thread Matt Larson
Anthony, Thank you! This is very helpful information and thanks for the specific advice for these drive types on choosing a 64KB min_alloc_size. I will do some more review as I believe they are likely at the 4KB min_alloc_size if that is the default for the `ssd` device-class. I will look to

[ceph-users] Re: Moving devices to a different device class?

2023-10-24 Thread Anthony D'Atri
Ah, our old friend the P5316. A few things to remember about these:

* 64KB IU means that you'll burn through endurance if you do a lot of writes smaller than that. The firmware will try to coalesce smaller writes, especially if they're sequential. You probably want to keep your RGW / CephFS
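Checking and changing `min_alloc_size` can be sketched as follows; note that the value is baked in when an OSD is created, so changing the config only affects OSDs deployed afterwards (osd.0 is illustrative):

```shell
# Inspect what an existing OSD was created with
# (the field is reported in OSD metadata on recent releases)
ceph osd metadata 0 | grep min_alloc_size

# Set the default for *future* SSD-class OSDs; existing OSDs must be
# redeployed (destroyed and recreated) to pick this up.
ceph config set osd bluestore_min_alloc_size_ssd 65536
```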

[ceph-users] Re: Quincy: failure to enable mgr rgw module if not --force

2023-10-24 Thread David C.
Correction, it's not so new, but it doesn't seem to be maintained: https://github.com/ceph/ceph/commits/v17.2.6/src/pybind/mgr/rgw Regards, *David CASIER* On Tue, Oct 24,

[ceph-users] Re: Quincy: failure to enable mgr rgw module if not --force

2023-10-24 Thread David C.
Hi Michel, (I'm just discovering the existence of this module, so it's possible I'm making mistakes) The rgw module is new and only seems to be there to configure multisite. It is present on the v17.2.6 branch but I don't see it in the container for this version. In any case, if you're not

[ceph-users] Re: traffic by IP address / bucket / user

2023-10-24 Thread Brian Andrus
We use HAProxy in front of the Ceph RadosGWs. The logs are shipped to an ELK stack where we can filter by those (and many more) values. IP address/geolocation is most easily pulled from the switches. On Wed, Oct 18, 2023 at 2:07 AM Boris Behrens wrote: > Hi, > did someone have a solution ready to

[ceph-users] Moving devices to a different device class?

2023-10-24 Thread Matt Larson
I am looking to create a new pool that would be backed by a particular set of drives that are larger NVMe SSDs (Intel SSDPF2NV153TZ, 15TB drives). Particularly, I am wondering about what is the best way to move devices from one pool and to direct them to be used in a new pool to be created. In
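The usual way to direct a pool at a specific set of drives is a CRUSH rule restricted to a device class. A sketch, assuming the OSDs have already been given a class such as `big_nvme` (the rule and pool names are illustrative):

```shell
# Create a replicated rule that only selects OSDs of class big_nvme,
# with host as the failure domain.
ceph osd crush rule create-replicated nvme_rule default host big_nvme

# New pool backed only by those drives (128 PGs as a placeholder)
ceph osd pool create bigpool 128 128 replicated nvme_rule

# Or retarget an existing pool; its data then migrates to the new class
ceph osd pool set somepool crush_rule nvme_rule
```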

[ceph-users] Quincy: failure to enable mgr rgw module if not --force

2023-10-24 Thread Michel Jouvin
Hi, I'm trying to use the rgw mgr module to configure RGWs. Unfortunately it is not present in 'ceph mgr module ls' list and any attempt to enable it suggests that one mgr doesn't support it and that --force should be added. Adding --force effectively enabled it. It is strange as it is a
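The behaviour described matches the mgr's compatibility check: enabling a module is refused when some mgr daemon does not advertise it, and `--force` skips that check. The commands involved:

```shell
# Shows enabled, always-on, and available modules per mgr
ceph mgr module ls

# Fails if any mgr daemon claims not to support the module;
# --force overrides the check, as Michel observed.
ceph mgr module enable rgw --force
```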

[ceph-users] Re: [EXTERNAL] [Pacific] ceph orch device ls do not returns any HDD

2023-10-24 Thread Patrick Begou
Some tests: if in Pacific 16.2.14, in /usr/lib/python3.6/site-packages/ceph_volume/util/disk.py, I disable lines 804 and 805:

    804     if get_file_contents(os.path.join(_sys_block_path, dev, 'removable')) == "1":
    805         continue

the command "ceph-volume inventory" works as
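The effect of those two lines can be sketched in isolation. This is an illustration modelled on the snippet Patrick quoted, not the actual ceph-volume source; the helper names mirror the ones in his excerpt:

```python
import os

_sys_block_path = "/sys/block"


def get_file_contents(path, default=""):
    """Return the stripped contents of a sysfs file, or default if unreadable."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return default


def visible_block_devices(sys_block=_sys_block_path, skip_removable=True):
    """List block devices, optionally dropping those flagged removable.

    Hot-swappable SATA/SAS drives can expose removable=1 in sysfs, which is
    why this filter hides real HDDs in the scenario from the thread.
    """
    devices = []
    for dev in sorted(os.listdir(sys_block)):
        removable = get_file_contents(os.path.join(sys_block, dev, "removable"))
        if skip_removable and removable == "1":
            continue
        devices.append(dev)
    return devices
```

With `skip_removable=False` (the equivalent of commenting out lines 804-805), hot-swap drives reappear in the inventory.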

[ceph-users] Re: Modify user op status=-125

2023-10-24 Thread mahnoosh shahidi
Yes this can be the reason. Thanks for your help. Best Regards, Mahnoosh On Tue, Oct 24, 2023 at 5:45 PM Casey Bodley wrote: > i don't suppose you're using sts roles with AssumeRole? > https://tracker.ceph.com/issues/59495 tracks a bug where each > AssumeRole request was writing to the user

[ceph-users] Re: Modify user op status=-125

2023-10-24 Thread Casey Bodley
i don't suppose you're using sts roles with AssumeRole? https://tracker.ceph.com/issues/59495 tracks a bug where each AssumeRole request was writing to the user metadata unnecessarily, which would race with your admin api requests On Tue, Oct 24, 2023 at 9:56 AM mahnoosh shahidi wrote: > >

[ceph-users] Re: [EXTERNAL] [Pacific] ceph orch device ls do not returns any HDD

2023-10-24 Thread Johan
I have checked my disks as well; all devices are hot-swappable HDDs and have the removable flag set. /Johan On 2023-10-24 at 13:38, Patrick Begou wrote: Hi Eugen, Yes Eugen, all the devices /dev/sd[abc] have the removable flag set to 1. Maybe because they are hot-swappable hard drives. I

[ceph-users] Re: Modify user op status=-125

2023-10-24 Thread mahnoosh shahidi
Thanks Casey for your explanation. Yes, it succeeded eventually, sometimes after about 100 retries. It's odd that it stays in a race condition for that long. Best Regards, Mahnoosh On Tue, Oct 24, 2023 at 5:17 PM Casey Bodley wrote: > errno 125 is ECANCELED, which is the code we use when

[ceph-users] Re: Modify user op status=-125

2023-10-24 Thread Casey Bodley
errno 125 is ECANCELED, which is the code we use when we detect a racing write. so it sounds like something else is modifying that user at the same time. does it eventually succeed if you retry? On Tue, Oct 24, 2023 at 9:21 AM mahnoosh shahidi wrote: > > Hi all, > > I couldn't understand what
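On Linux, errno 125 is indeed `ECANCELED`. A hedged sketch of the retry-with-backoff approach Casey suggests, wrapping an arbitrary admin-API call (the wrapper itself is illustrative, not part of any Ceph client library):

```python
import errno
import random
import time


def retry_on_ecanceled(op, attempts=10, base_delay=0.05):
    """Retry a callable that raises OSError(ECANCELED) on a racing write.

    RGW returns ECANCELED (errno 125) when it detects a concurrent update
    to the same user metadata; backing off and retrying is the usual remedy.
    """
    for attempt in range(attempts):
        try:
            return op()
        except OSError as e:
            if e.errno != errno.ECANCELED or attempt == attempts - 1:
                raise
            # Jittered exponential backoff, to fall out of lockstep with
            # whatever is racing us (e.g. the AssumeRole writes in #59495).
            time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random()))
```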

[ceph-users] Modify user op status=-125

2023-10-24 Thread mahnoosh shahidi
Hi all, I couldn't work out from the docs what status -125 means. I'm getting a 500 response status code when I call the rgw admin APIs, and the only log in the rgw log files is as follows. s3:get_obj recalculating target initializing for trans_id =

[ceph-users] Re: [EXTERNAL] [Pacific] ceph orch device ls do not returns any HDD

2023-10-24 Thread Eugen Block
Hi,

> Maybe because they are hot-swappable hard drives.

Yes, that's my assumption as well. Quoting Patrick Begou: Hi Eugen, Yes Eugen, all the devices /dev/sd[abc] have the removable flag set to 1. Maybe because they are hot-swappable hard drives. I have contacted the commit

[ceph-users] Re: [EXTERNAL] [Pacific] ceph orch device ls do not returns any HDD

2023-10-24 Thread Patrick Begou
Hi Eugen, Yes Eugen, all the devices /dev/sd[abc] have the removable flag set to 1. Maybe because they are hot-swappable hard drives. I have contacted the commit author Zack Cerza and he asked me for some additional tests this morning too. I have added him in copy on this mail. Patrick Le

[ceph-users] Re: [EXTERNAL] [Pacific] ceph orch device ls do not returns any HDD

2023-10-24 Thread Eugen Block
Hi, just to confirm, could you check that the disk which is *not* discovered by 16.2.11 has a "removable" flag? cat /sys/block/sdX/removable I could reproduce it as well on a test machine with a USB thumb drive (live distro) which is excluded in 16.2.11 but is shown in 16.2.10. Although
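To check the flag on every device at once rather than one `cat` at a time, a small loop works; the function takes an optional base directory purely so it can be dry-run against a fake sysfs tree:

```shell
# Print the "removable" flag for each block device, so devices that the
# 16.2.11 filter would hide are easy to spot.
list_removable_flags() {
    sys_block="${1:-/sys/block}"
    for dev in "$sys_block"/*; do
        [ -e "$dev/removable" ] || continue
        printf '%s removable=%s\n' "$(basename "$dev")" "$(cat "$dev/removable")"
    done
}

list_removable_flags   # defaults to the live /sys/block
```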