[ceph-users] Re: ceph dashboard reef 18.2.2 radosgw

2024-05-12 Thread Pierre Riteau
Hi Christopher,

I think your issue may be fixed by https://github.com/ceph/ceph/pull/54764,
which should be included in the next Reef release.
In the meantime, you should be able to update your RGW configuration to
include port=80. You will need to restart every RGW daemon so that all the
metadata is updated.
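
For example, something along these lines in each rgw section of ceph.conf,
adapting the frontends line you quote below (paths and values are placeholders):

    rgw frontends = beast port=80 ssl_endpoint=0.0.0.0 ssl_certificate=/path/to/pem_with_cert_and_key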

Best wishes,
Pierre Riteau

On Wed, 8 May 2024 at 19:51, Christopher Durham  wrote:

> Hello,
> I am using 18.2.2 on Rocky Linux 8.
>
> I am getting HTTP error 500 when trying to hit the ceph dashboard on reef
> 18.2.2 and looking at any of the radosgw pages.
> I tracked this down to /usr/share/ceph/mgr/dashboard/controllers/rgw.py
> It appears to parse the metadata for a given radosgw server improperly. In
> my various rgw ceph.conf entries, I have:
> rgw frontends = beast ssl_endpoint=0.0.0.0
> ssl_certificate=/path/to/pem_with_cert_and_key
> but rgw.py pulls the metadata for each server and looks for 'port=' in it.
> When it doesn't find one (line 147 in rgw.py), ceph-mgr logs an exception,
> which the manager proper catches and returns as a 500.
> Would changing my frontends definition work? Is this known? I have had the
> frontends definition for a while prior to my reef upgrade. Thanks
> -Chris
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Multisite: metadata behind on shards

2024-05-12 Thread Szabo, Istvan (Agoda)
Hi,

I wonder what the mechanism behind metadata sync is, because I need to
restart all the gateways on the remote sites every 2 days to keep them in
sync. (Octopus 15.2.7)
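
For reference, I check the sync state on the remote sites with roughly the
following (a sketch; run against the secondary zone):

    radosgw-admin sync status
    radosgw-admin metadata sync status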

Thank you


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: SPDK with cephadm and reef

2024-05-12 Thread xiaowenhao111
Did you find it? I need this, too.
Sent from my Xiaomi
On 30 Apr 2024 at 20:41, R A wrote: Hello Community,

Is there a guide or documentation on how to configure SPDK with cephadm (running in containers) in Reef?

BR


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Ceph reef and (slow) backfilling - how to speed it up

2024-05-12 Thread Anthony D'Atri
I halfway suspect that something akin to the speculation in 
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/7MWAHAY7NCJK2DHEGO6MO4SWTLPTXQMD/
 is going on.

Below are reservations reported by a random OSD that serves (mostly) an EC RGW
bucket pool. This is with the mclock override on and the usual three
backfill/recovery tunables set to 7 (bumped to get more OSDs backfilling after
I changed to rack failure domain; having 50+% of objects remapped makes me
nervous and I want convergence).
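
For anyone poking at the same thing, the override looks roughly like this
(a sketch; I'm assuming osd_max_backfills and osd_recovery_max_active are
among the tunables meant above):

    ceph config set osd osd_mclock_override_recovery_settings true
    ceph config set osd osd_max_backfills 7
    ceph config set osd osd_recovery_max_active 7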

3 happens to be the value of osd_recovery_max_active_hdd, so maybe there is
some interaction between EC and how osd_recovery_max_active is derived and used?

Complete wild-ass speculation.
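
One way to check what a given OSD actually ends up using (substitute your own
OSD id):

    ceph daemon osd.313 config get osd_recovery_max_active
    ceph daemon osd.313 config get osd_recovery_max_active_hdd

If I recall correctly, osd_recovery_max_active defaults to 0, which makes the
OSD fall back to the hdd/ssd-specific value.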

Just for grins, after `ceph osd down 313`

* local_reservations incremented
* remote_reservations decreased somewhat
* cluster aggregate recovery speed increased for at least the short term


[root@rook-ceph-osd-313-6f84bc5bd5-hr825 ceph]# ceph daemon osd.313 
dump_recovery_reservations
{
"local_reservations": {
"max_allowed": 7,
"min_priority": 0,
"queues": [],
"in_progress": [
{
"item": "21.161es0",
"prio": 110,
"can_preempt": true
},
{
"item": "21.180bs0",
"prio": 110,
"can_preempt": true
},
{
"item": "21.1e0as0",
"prio": 110,
"can_preempt": true
}
]
},
"remote_reservations": {
"max_allowed": 7,
"min_priority": 0,
"queues": [
{
"priority": 110,
"items": [
{
"item": "21.1d18s5",
"prio": 110,
"can_preempt": true
},
{
"item": "21.7d0s2",
"prio": 110,
"can_preempt": true
},
{
"item": "21.766s5",
"prio": 110,
"can_preempt": true
},
{
"item": "21.373s1",
"prio": 110,
"can_preempt": true
},
{
"item": "21.1a8es1",
"prio": 110,
"can_preempt": true
},
{
"item": "21.2das2",
"prio": 110,
"can_preempt": true
},
{
"item": "21.14a0s2",
"prio": 110,
"can_preempt": true
},
{
"item": "21.c7fs5",
"prio": 110,
"can_preempt": true
},
{
"item": "21.18e5s5",
"prio": 110,
"can_preempt": true
},
{
"item": "21.54ds2",
"prio": 110,
"can_preempt": true
},
{
"item": "21.79bs4",
"prio": 110,
"can_preempt": true
},
{
"item": "21.15c3s2",
"prio": 110,
"can_preempt": true
},
{
"item": "21.e15s4",
"prio": 110,
"can_preempt": true
},
{
"item": "21.226s3",
"prio": 110,
"can_preempt": true
},
{
"item": "21.adfs2",
"prio": 110,
"can_preempt": true
},
{
"item": "21.184bs4",
"prio": 110,
"can_preempt": true
},
{
"item": "21.f43s3",
"prio": 110,
"can_preempt": true
},
{
"item": "21.f5cs4",
"prio": 110,
"can_preempt": true
},
{
"item": "21.1300s3",
"prio": 110,
"can_preempt": true
},
{