[ceph-users] Re: radosgw new zonegroup hammers master with metadata sync

2023-07-04 Thread Boris Behrens
Are there any ideas on how to deal with this?
We disabled the logging so we do not run out of disk space, but the rgw
daemon still requires A LOT of CPU because of this.
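
(Side note for anyone in the same spot: a minimal sketch of how the rgw log
verbosity can be turned down at runtime, assuming the centralized config
database is in use; the daemon name below is a placeholder.)

# persist low debug levels for all rgw daemons (log/memory level syntax)
ceph config set client.rgw debug_rgw 0/0
ceph config set client.rgw debug_ms 0/0
# or lower it on a single running daemon via its admin socket on the rgw host
ceph daemon client.rgw.<name> config set debug_rgw 0/0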

On Wed, 21 Jun 2023 at 10:45, Boris Behrens wrote:

> I've updated the dc3 site from octopus to pacific and the problem is still
> there.
> I find it very weird that it only happens from one single zonegroup to the
> master and not from the other two.
>
> On Wed, 21 Jun 2023 at 01:59, Boris Behrens wrote:
>
>> I recreated the site and the problem still persists.
>>
>> I've upped the logging and saw this for a lot of buckets (I've stopped
>> the debug log after a few seconds).
>> 2023-06-20T23:32:29.365+ 7fcaab7fe700 20 get_system_obj_state:
>> rctx=0x7fcaab7f9320 obj=dc3.rgw.meta:root:s3bucket-fra2
>> state=0x7fcba05ac0a0 s->prefetch_data=0
>> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache get:
>> name=dc3.rgw.meta+root+s3bucket-fra2 : miss
>> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache put:
>> name=dc3.rgw.meta+root+s3bucket-fra2 info.flags=0x6
>> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 adding
>> dc3.rgw.meta+root+s3bucket-fra2 to cache LRU end
>> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache get:
>> name=dc3.rgw.meta+root+s3bucket-fra2 : type miss (requested=0x1, cached=0x6)
>> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache put:
>> name=dc3.rgw.meta+root+s3bucket-fra2 info.flags=0x1
>> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 moving
>> dc3.rgw.meta+root+s3bucket-fra2 to cache LRU end
>> 2023-06-20T23:32:29.365+ 7fcaab7fe700 20 get_system_obj_state:
>> rctx=0x7fcaab7f9320
>> obj=dc3.rgw.meta:root:.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
>> state=0x7fcba43ce0a0 s->prefetch_data=0
>> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache get:
>> name=dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
>> : miss
>> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache put:
>> name=dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
>> info.flags=0x16
>> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 adding
>> dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
>> to cache LRU end
>> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache get:
>> name=dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
>> : type miss (requested=0x13, cached=0x16)
>> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache put:
>> name=dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
>> info.flags=0x13
>> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 moving
>> dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
>> to cache LRU end
>> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 chain_cache_entry:
>> cache_locator=dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
>>
>> On Tue, 20 Jun 2023 at 19:29, Boris wrote:
>>
>>> Hi Casey,
>>> I already restarted all RGW instances, but it only helped for 2 minutes. We
>>> have now stopped the new site.
>>>
>>> I will remove and recreate it later.
>>> As the two other sites don't have the problem, I currently think I made a
>>> mistake in the process.
>>>
>>> Kind regards
>>>  - Boris Behrens
>>>
>>> > On 20.06.2023 at 18:30, Casey Bodley wrote:
>>> >
>>> > Hi Boris,
>>> >
>>> > We've been investigating reports of excessive polling from metadata
>>> > sync. I just opened https://tracker.ceph.com/issues/61743 to track
>>> > this. Restarting the secondary zone radosgws should help as a
>>> > temporary workaround.
>>> >
>>> >> On Tue, Jun 20, 2023 at 5:57 AM Boris Behrens  wrote:
>>> >>
>>> >> Hi,
>>> >> yesterday I added a new zonegroup and it seems to cycle over
>>> >> the same requests over and over again.
>>> >>
>>> >> In the log of the main zone I see these requests:
>>> >> 2023-06-20T09:48:37.979+ 7f8941fb3700  1 beast: 0x7f8a602f3700:
>>> >> fd00:2380:0:24::136 - - [2023-06-20T09:48:37.979941+] "GET
>>> >>
>>> /admin/log?type=metadata&id=62&period=e8fc96f1-ae86-4dc1-b432-470b0772fded&max-entries=100&&rgwx-zonegroup=b39392eb-75f8-47f0-b4f3-7d3882930b26
>>> >> HTTP/1.1" 200 44 - - -
>>> >>
>>> >> The only thing that changes is the &id.
>>> >>
>>> >> We have two other zonegroups that are configured identically (ceph.conf and
>>> >> period) and these don't seem to spam the main rgw.
>>> >>
>>> >> root@host:~# radosgw-admin sync status
>>> >>  realm 5d6f2ea4-b84a-459b-bce2-bccac338b3ef (main)
>>> >>  zonegroup b39392eb-75f8-47f0-b4f3-7d3882930b26 (dc3)
>>> >>   zone 96f5eca9-425b-4194-a152-86e310e91ddb (dc3)
>>> >>  metadata sync syncing
>>> >>full sync: 0/64 shards
>>> >>incremental sync: 64/64 shards
>>> >>metadata is caught up with master
>>> >>
>>> >> root

[ceph-users] Re: radosgw new zonegroup hammers master with metadata sync

2023-06-21 Thread Boris Behrens
I've updated the dc3 site from octopus to pacific and the problem is still
there.
I find it very weird that it only happens from one single zonegroup to the
master and not from the other two.
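
(In case it helps to narrow it down: the zonegroup definitions can be dumped
and diffed directly, roughly like this; the second zonegroup name is a
placeholder for one of the healthy ones.)

# dump the committed period and the individual zonegroup definitions
radosgw-admin period get > period.json
radosgw-admin zonegroup get --rgw-zonegroup=dc3 > zg-dc3.json
radosgw-admin zonegroup get --rgw-zonegroup=<healthy-zonegroup> > zg-other.json
diff zg-dc3.json zg-other.json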

On Wed, 21 Jun 2023 at 01:59, Boris Behrens wrote:

> I recreated the site and the problem still persists.
>
> I've upped the logging and saw this for a lot of buckets (I've stopped the
> debug log after a few seconds).
> 2023-06-20T23:32:29.365+ 7fcaab7fe700 20 get_system_obj_state:
> rctx=0x7fcaab7f9320 obj=dc3.rgw.meta:root:s3bucket-fra2
> state=0x7fcba05ac0a0 s->prefetch_data=0
> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache get:
> name=dc3.rgw.meta+root+s3bucket-fra2 : miss
> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache put:
> name=dc3.rgw.meta+root+s3bucket-fra2 info.flags=0x6
> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 adding
> dc3.rgw.meta+root+s3bucket-fra2 to cache LRU end
> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache get:
> name=dc3.rgw.meta+root+s3bucket-fra2 : type miss (requested=0x1, cached=0x6)
> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache put:
> name=dc3.rgw.meta+root+s3bucket-fra2 info.flags=0x1
> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 moving
> dc3.rgw.meta+root+s3bucket-fra2 to cache LRU end
> 2023-06-20T23:32:29.365+ 7fcaab7fe700 20 get_system_obj_state:
> rctx=0x7fcaab7f9320
> obj=dc3.rgw.meta:root:.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
> state=0x7fcba43ce0a0 s->prefetch_data=0
> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache get:
> name=dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
> : miss
> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache put:
> name=dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
> info.flags=0x16
> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 adding
> dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
> to cache LRU end
> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache get:
> name=dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
> : type miss (requested=0x13, cached=0x16)
> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache put:
> name=dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
> info.flags=0x13
> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 moving
> dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
> to cache LRU end
> 2023-06-20T23:32:29.365+ 7fcaab7fe700 10 chain_cache_entry:
> cache_locator=dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
>
> On Tue, 20 Jun 2023 at 19:29, Boris wrote:
>
>> Hi Casey,
>> I already restarted all RGW instances, but it only helped for 2 minutes. We
>> have now stopped the new site.
>>
>> I will remove and recreate it later.
>> As the two other sites don't have the problem, I currently think I made a
>> mistake in the process.
>>
>> Kind regards
>>  - Boris Behrens
>>
>> > On 20.06.2023 at 18:30, Casey Bodley wrote:
>> >
>> > Hi Boris,
>> >
>> > We've been investigating reports of excessive polling from metadata
>> > sync. I just opened https://tracker.ceph.com/issues/61743 to track
>> > this. Restarting the secondary zone radosgws should help as a
>> > temporary workaround.
>> >
>> >> On Tue, Jun 20, 2023 at 5:57 AM Boris Behrens  wrote:
>> >>
>> >> Hi,
>> >> yesterday I added a new zonegroup and it seems to cycle over
>> >> the same requests over and over again.
>> >>
>> >> In the log of the main zone I see these requests:
>> >> 2023-06-20T09:48:37.979+ 7f8941fb3700  1 beast: 0x7f8a602f3700:
>> >> fd00:2380:0:24::136 - - [2023-06-20T09:48:37.979941+] "GET
>> >>
>> /admin/log?type=metadata&id=62&period=e8fc96f1-ae86-4dc1-b432-470b0772fded&max-entries=100&&rgwx-zonegroup=b39392eb-75f8-47f0-b4f3-7d3882930b26
>> >> HTTP/1.1" 200 44 - - -
>> >>
>> >> The only thing that changes is the &id.
>> >>
>> >> We have two other zonegroups that are configured identically (ceph.conf and
>> >> period) and these don't seem to spam the main rgw.
>> >>
>> >> root@host:~# radosgw-admin sync status
>> >>  realm 5d6f2ea4-b84a-459b-bce2-bccac338b3ef (main)
>> >>  zonegroup b39392eb-75f8-47f0-b4f3-7d3882930b26 (dc3)
>> >>   zone 96f5eca9-425b-4194-a152-86e310e91ddb (dc3)
>> >>  metadata sync syncing
>> >>full sync: 0/64 shards
>> >>incremental sync: 64/64 shards
>> >>metadata is caught up with master
>> >>
>> >> root@host:~# radosgw-admin period get
>> >> {
>> >>"id": "e8fc96f1-ae86-4dc1-b432-470b0772fded",
>> >>"epoch": 92,
>> >>"predecessor_uuid": "5349ac85-3d6d-4088-993f-7a1d4be3835a",
>> >>"sync_status": [
>> >>"",
>> >> ...
>> >>""
>> >>],
>> >>"period_map": {
>> >>"id": "e8fc96f1-ae86-4dc1-b4

[ceph-users] Re: radosgw new zonegroup hammers master with metadata sync

2023-06-20 Thread Boris Behrens
I recreated the site and the problem still persists.

I've upped the logging and saw this for a lot of buckets (I've stopped the
debug log after a few seconds).
2023-06-20T23:32:29.365+ 7fcaab7fe700 20 get_system_obj_state:
rctx=0x7fcaab7f9320 obj=dc3.rgw.meta:root:s3bucket-fra2
state=0x7fcba05ac0a0 s->prefetch_data=0
2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache get:
name=dc3.rgw.meta+root+s3bucket-fra2 : miss
2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache put:
name=dc3.rgw.meta+root+s3bucket-fra2 info.flags=0x6
2023-06-20T23:32:29.365+ 7fcaab7fe700 10 adding
dc3.rgw.meta+root+s3bucket-fra2 to cache LRU end
2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache get:
name=dc3.rgw.meta+root+s3bucket-fra2 : type miss (requested=0x1, cached=0x6)
2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache put:
name=dc3.rgw.meta+root+s3bucket-fra2 info.flags=0x1
2023-06-20T23:32:29.365+ 7fcaab7fe700 10 moving
dc3.rgw.meta+root+s3bucket-fra2 to cache LRU end
2023-06-20T23:32:29.365+ 7fcaab7fe700 20 get_system_obj_state:
rctx=0x7fcaab7f9320
obj=dc3.rgw.meta:root:.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
state=0x7fcba43ce0a0 s->prefetch_data=0
2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache get:
name=dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
: miss
2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache put:
name=dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
info.flags=0x16
2023-06-20T23:32:29.365+ 7fcaab7fe700 10 adding
dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
to cache LRU end
2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache get:
name=dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
: type miss (requested=0x13, cached=0x16)
2023-06-20T23:32:29.365+ 7fcaab7fe700 10 cache put:
name=dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
info.flags=0x13
2023-06-20T23:32:29.365+ 7fcaab7fe700 10 moving
dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
to cache LRU end
2023-06-20T23:32:29.365+ 7fcaab7fe700 10 chain_cache_entry:
cache_locator=dc3.rgw.meta+root+.bucket.meta.s3bucket-fra2:ff7a8b0c-07e6-463a-861b-78f0adeba8ad.2297866866.29
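
(For reference, this kind of cache trace shows up once the rgw debug level is
raised; a sketch of one way to do that temporarily, with the daemon name as a
placeholder.)

# temporarily raise the debug level on one running radosgw via its admin socket
ceph daemon client.rgw.<name> config set debug_rgw 20
# capture a few seconds of log output, then drop it back down
ceph daemon client.rgw.<name> config set debug_rgw 0/0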

On Tue, 20 Jun 2023 at 19:29, Boris wrote:

> Hi Casey,
> I already restarted all RGW instances, but it only helped for 2 minutes. We
> have now stopped the new site.
>
> I will remove and recreate it later.
> As the two other sites don't have the problem, I currently think I made a
> mistake in the process.
>
> Kind regards
>  - Boris Behrens
>
> > On 20.06.2023 at 18:30, Casey Bodley wrote:
> >
> > Hi Boris,
> >
> > We've been investigating reports of excessive polling from metadata
> > sync. I just opened https://tracker.ceph.com/issues/61743 to track
> > this. Restarting the secondary zone radosgws should help as a
> > temporary workaround.
> >
> >> On Tue, Jun 20, 2023 at 5:57 AM Boris Behrens  wrote:
> >>
> >> Hi,
> >> yesterday I added a new zonegroup and it seems to cycle over
> >> the same requests over and over again.
> >>
> >> In the log of the main zone I see these requests:
> >> 2023-06-20T09:48:37.979+ 7f8941fb3700  1 beast: 0x7f8a602f3700:
> >> fd00:2380:0:24::136 - - [2023-06-20T09:48:37.979941+] "GET
> >>
> /admin/log?type=metadata&id=62&period=e8fc96f1-ae86-4dc1-b432-470b0772fded&max-entries=100&&rgwx-zonegroup=b39392eb-75f8-47f0-b4f3-7d3882930b26
> >> HTTP/1.1" 200 44 - - -
> >>
> >> The only thing that changes is the &id.
> >>
> >> We have two other zonegroups that are configured identically (ceph.conf and
> >> period) and these don't seem to spam the main rgw.
> >>
> >> root@host:~# radosgw-admin sync status
> >>  realm 5d6f2ea4-b84a-459b-bce2-bccac338b3ef (main)
> >>  zonegroup b39392eb-75f8-47f0-b4f3-7d3882930b26 (dc3)
> >>   zone 96f5eca9-425b-4194-a152-86e310e91ddb (dc3)
> >>  metadata sync syncing
> >>full sync: 0/64 shards
> >>incremental sync: 64/64 shards
> >>metadata is caught up with master
> >>
> >> root@host:~# radosgw-admin period get
> >> {
> >>"id": "e8fc96f1-ae86-4dc1-b432-470b0772fded",
> >>"epoch": 92,
> >>"predecessor_uuid": "5349ac85-3d6d-4088-993f-7a1d4be3835a",
> >>"sync_status": [
> >>"",
> >> ...
> >>""
> >>],
> >>"period_map": {
> >>"id": "e8fc96f1-ae86-4dc1-b432-470b0772fded",
> >>"zonegroups": [
> >>{
> >>"id": "b39392eb-75f8-47f0-b4f3-7d3882930b26",
> >>"name": "dc3",
> >>"api_name": "dc3",
> >>"is_master": "false",
> >>"endpoints": [
> >>],
> >>"hostnames": [
> >>],
> >>"hostnames_s3website

[ceph-users] Re: radosgw new zonegroup hammers master with metadata sync

2023-06-20 Thread Boris
Hi Casey,
I already restarted all RGW instances, but it only helped for 2 minutes. We
have now stopped the new site.

I will remove and recreate it later. 
As the two other sites don't have the problem, I currently think I made a mistake
in the process.
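
(For the record, the remove/recreate would look roughly like this; only a
sketch, not a tested procedure, and the endpoints and keys are placeholders.)

# on the master zone cluster: drop the broken secondary zone and zonegroup, then commit
radosgw-admin zone delete --rgw-zone=dc3
radosgw-admin zonegroup delete --rgw-zonegroup=dc3
radosgw-admin period update --commit

# on the dc3 cluster: pull the realm again and recreate the zonegroup and zone
radosgw-admin realm pull --url=http://<master-endpoint> --access-key=<key> --secret=<secret>
radosgw-admin zonegroup create --rgw-zonegroup=dc3 --rgw-realm=main --endpoints=http://<dc3-endpoint>
radosgw-admin zone create --rgw-zonegroup=dc3 --rgw-zone=dc3 --endpoints=http://<dc3-endpoint> --access-key=<key> --secret=<secret>
radosgw-admin period update --commit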

Kind regards
 - Boris Behrens

> On 20.06.2023 at 18:30, Casey Bodley wrote:
> 
> Hi Boris,
> 
> We've been investigating reports of excessive polling from metadata
> sync. I just opened https://tracker.ceph.com/issues/61743 to track
> this. Restarting the secondary zone radosgws should help as a
> temporary workaround.
> 
>> On Tue, Jun 20, 2023 at 5:57 AM Boris Behrens  wrote:
>> 
>> Hi,
>> yesterday I added a new zonegroup and it seems to cycle over
>> the same requests over and over again.
>> 
>> In the log of the main zone I see these requests:
>> 2023-06-20T09:48:37.979+ 7f8941fb3700  1 beast: 0x7f8a602f3700:
>> fd00:2380:0:24::136 - - [2023-06-20T09:48:37.979941+] "GET
>> /admin/log?type=metadata&id=62&period=e8fc96f1-ae86-4dc1-b432-470b0772fded&max-entries=100&&rgwx-zonegroup=b39392eb-75f8-47f0-b4f3-7d3882930b26
>> HTTP/1.1" 200 44 - - -
>> 
>> The only thing that changes is the &id.
>> 
>> We have two other zonegroups that are configured identically (ceph.conf and
>> period) and these don't seem to spam the main rgw.
>> 
>> root@host:~# radosgw-admin sync status
>>  realm 5d6f2ea4-b84a-459b-bce2-bccac338b3ef (main)
>>  zonegroup b39392eb-75f8-47f0-b4f3-7d3882930b26 (dc3)
>>   zone 96f5eca9-425b-4194-a152-86e310e91ddb (dc3)
>>  metadata sync syncing
>>full sync: 0/64 shards
>>incremental sync: 64/64 shards
>>metadata is caught up with master
>> 
>> root@host:~# radosgw-admin period get
>> {
>>"id": "e8fc96f1-ae86-4dc1-b432-470b0772fded",
>>"epoch": 92,
>>"predecessor_uuid": "5349ac85-3d6d-4088-993f-7a1d4be3835a",
>>"sync_status": [
>>"",
>> ...
>>""
>>],
>>"period_map": {
>>"id": "e8fc96f1-ae86-4dc1-b432-470b0772fded",
>>"zonegroups": [
>>{
>>"id": "b39392eb-75f8-47f0-b4f3-7d3882930b26",
>>"name": "dc3",
>>"api_name": "dc3",
>>"is_master": "false",
>>"endpoints": [
>>],
>>"hostnames": [
>>],
>>"hostnames_s3website": [
>>],
>>"master_zone": "96f5eca9-425b-4194-a152-86e310e91ddb",
>>"zones": [
>>{
>>"id": "96f5eca9-425b-4194-a152-86e310e91ddb",
>>"name": "dc3",
>>"endpoints": [
>>],
>>"log_meta": "false",
>>"log_data": "false",
>>"bucket_index_max_shards": 11,
>>"read_only": "false",
>>"tier_type": "",
>>"sync_from_all": "true",
>>"sync_from": [],
>>"redirect_zone": ""
>>}
>>],
>>"placement_targets": [
>>{
>>"name": "default-placement",
>>"tags": [],
>>"storage_classes": [
>>"STANDARD"
>>]
>>}
>>],
>>"default_placement": "default-placement",
>>"realm_id": "5d6f2ea4-b84a-459b-bce2-bccac338b3ef",
>>"sync_policy": {
>>"groups": []
>>}
>>},
>> ...
>> 
>> --
>> The "UTF-8 problems" self-help group will exceptionally meet in the large hall this time.


[ceph-users] Re: radosgw new zonegroup hammers master with metadata sync

2023-06-20 Thread Casey Bodley
Hi Boris,

We've been investigating reports of excessive polling from metadata
sync. I just opened https://tracker.ceph.com/issues/61743 to track
this. Restarting the secondary zone radosgws should help as a
temporary workaround.
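
(For anyone else hitting this: restarting the secondary zone radosgws looks
roughly like the following, depending on the deployment; the service and unit
names are placeholders.)

# cephadm deployments: restart the rgw service on the secondary zone cluster
ceph orch restart rgw.<service-name>
# package-based deployments: restart the per-host systemd unit instead
systemctl restart ceph-radosgw@rgw.<instance>.service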

On Tue, Jun 20, 2023 at 5:57 AM Boris Behrens  wrote:
>
> Hi,
> yesterday I added a new zonegroup and it seems to cycle over
> the same requests over and over again.
>
> In the log of the main zone I see these requests:
> 2023-06-20T09:48:37.979+ 7f8941fb3700  1 beast: 0x7f8a602f3700:
> fd00:2380:0:24::136 - - [2023-06-20T09:48:37.979941+] "GET
> /admin/log?type=metadata&id=62&period=e8fc96f1-ae86-4dc1-b432-470b0772fded&max-entries=100&&rgwx-zonegroup=b39392eb-75f8-47f0-b4f3-7d3882930b26
> HTTP/1.1" 200 44 - - -
>
> The only thing that changes is the &id.
>
> We have two other zonegroups that are configured identically (ceph.conf and
> period) and these don't seem to spam the main rgw.
>
> root@host:~# radosgw-admin sync status
>   realm 5d6f2ea4-b84a-459b-bce2-bccac338b3ef (main)
>   zonegroup b39392eb-75f8-47f0-b4f3-7d3882930b26 (dc3)
>zone 96f5eca9-425b-4194-a152-86e310e91ddb (dc3)
>   metadata sync syncing
> full sync: 0/64 shards
> incremental sync: 64/64 shards
> metadata is caught up with master
>
> root@host:~# radosgw-admin period get
> {
> "id": "e8fc96f1-ae86-4dc1-b432-470b0772fded",
> "epoch": 92,
> "predecessor_uuid": "5349ac85-3d6d-4088-993f-7a1d4be3835a",
> "sync_status": [
> "",
> ...
> ""
> ],
> "period_map": {
> "id": "e8fc96f1-ae86-4dc1-b432-470b0772fded",
> "zonegroups": [
> {
> "id": "b39392eb-75f8-47f0-b4f3-7d3882930b26",
> "name": "dc3",
> "api_name": "dc3",
> "is_master": "false",
> "endpoints": [
> ],
> "hostnames": [
> ],
> "hostnames_s3website": [
> ],
> "master_zone": "96f5eca9-425b-4194-a152-86e310e91ddb",
> "zones": [
> {
> "id": "96f5eca9-425b-4194-a152-86e310e91ddb",
> "name": "dc3",
> "endpoints": [
> ],
> "log_meta": "false",
> "log_data": "false",
> "bucket_index_max_shards": 11,
> "read_only": "false",
> "tier_type": "",
> "sync_from_all": "true",
> "sync_from": [],
> "redirect_zone": ""
> }
> ],
> "placement_targets": [
> {
> "name": "default-placement",
> "tags": [],
> "storage_classes": [
> "STANDARD"
> ]
> }
> ],
> "default_placement": "default-placement",
> "realm_id": "5d6f2ea4-b84a-459b-bce2-bccac338b3ef",
> "sync_policy": {
> "groups": []
> }
> },
> ...
>
> --
> The "UTF-8 problems" self-help group will exceptionally meet in the large hall this time.