Casey,
I did fix this. Here is what I did:
1. Stopped write access to the bucket
2. After I stopped the writes:
# radosgw-admin bucket sync status --bucket
showed just the one shard that was behind, matching the shard number that has
all the extra 0_ index objects.
3. Then I ran:
# radosgw-ad
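In command form, the checks in steps 1 and 2 look roughly like this (a sketch, not a transcript of the actual session; the bucket name is a placeholder, and these assume a standard multisite setup):

```shell
# Step 1: writes were stopped outside RGW (e.g. at the application or
# load-balancer level).
# Step 2: per-bucket sync status; the lagging shard shows up here
# (bucket name is a placeholder):
radosgw-admin bucket sync status --bucket=<bucket-name>
# Overall zone-to-zone sync status, for comparison:
radosgw-admin sync status
```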
Casey,
What I will probably do is:
1. stop usage of that bucket
2. wait a few minutes to allow anything to replicate, and verify object count, etc.
3. bilog trim
After #3 I will see if any of the '/' objects still exist.
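One way to verify step 3 is to count the leftover replication-log keys on the suspect index shard, since they carry the "<80>0_" prefix. A sketch, with the pool name, bucket marker, and shard number as placeholders; the sample file below only demonstrates the grep itself on made-up keys:

```shell
# After the bilog trim (step 3), leftover log keys could be counted with
# something like (placeholders, requires a live cluster):
#   rados -p <zone>.rgw.buckets.index listomapkeys .dir.<marker>.<shard> \
#     | LC_ALL=C grep -c "^$(printf '\200')0_"
# The grep itself, demonstrated on sample keys (0x80 is octal \200; the
# %b escape \0200 emits that byte, followed by the literal "0_..."):
printf '%b\n' 'photo.jpg' '\02000_00000000001.1.5' '\02000_00000000002.2.5' > keys.txt
LC_ALL=C grep -c "^$(printf '\200')0_" keys.txt   # → 2
```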
Hopefully that will help. I now know what to look for to see if I can narrow it down.
On Thu, Sep 21, 2023 at 12:21 PM Christopher Durham wrote:
Hi Casey,
This is indeed a multisite setup. The other side shows that for
# radosgw-admin sync status
the oldest incremental change not applied is about a minute old, and that is
consistent over a number of minutes; the oldest incremental change is always a
minute or two old.
However:
# radosgw-
These keys starting with "<80>0_" appear to be replication log entries
for multisite. Can you confirm that this is a multisite setup? Is the
'bucket sync status' mostly caught up on each zone? In a healthy
multisite configuration, these log entries would eventually get
trimmed automatically.
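For illustration, the "<80>0_" convention can be sketched in a few lines: log entries share a bucket index shard's omap with plain object entries, but start with the byte 0x80 followed by "0_". A minimal sketch (the sample key names are made up):

```python
# Partition RGW bucket-index omap keys into replication-log (bilog)
# entries and plain object entries. Keys displayed as "<80>0_..." start
# with the byte 0x80 followed by "0_".
BILOG_PREFIX = b"\x800_"

def split_omap_keys(keys):
    """Return (log_entries, object_entries) from raw omap keys."""
    logs = [k for k in keys if k.startswith(BILOG_PREFIX)]
    objs = [k for k in keys if not k.startswith(b"\x80")]
    return logs, objs

# Hypothetical sample keys, mimicking a listomapkeys dump:
sample = [b"photo.jpg", b"\x800_00000000001.1.5", b"\x800_00000000002.2.5"]
logs, objs = split_omap_keys(sample)
print(len(logs), len(objs))  # → 2 1
```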
On Wed