```
0 12.507318496704102 62305
```
Thanks,
Yuji
From: Yuji Ito (伊藤 祐司)
Sent: Tuesday, October 25, 2022 10:33
To: Konstantin Shalygin
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: How to remove remaining bucket index shard objects
Hi,
The lar
```
0
6.7f 0 0 0 0
```
Thanks,
Yuji
From: Konstantin Shalygin
Sent: Wednesday, October 19, 2022 16:42
To: Yuji Ito (伊藤 祐司)
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] How to remove remaining bucket index shard objects
These stats are strange; at least one object should exist for these OMAPs. Try
to deep-scrub this PG, and try to list the objects in this PG: `rados ls --pgid 6.2`
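For reference, those two checks could look like this (a sketch; PG `6.2` is taken from the message above, adjust to the PG that shows the odd stats on your cluster):

```shell
# Force an immediate deep scrub of the suspect placement group
ceph pg deep-scrub 6.2

# List every object in that PG; OMAP-only objects show up here even
# though they hold no data bytes
rados ls --pgid 6.2
```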
k
Sent from my iPhone
> On 18 Oct 2022, at 03:39, Yuji Ito wrote:
>
> Thank you for your reply.
>
>> the object need only for OMAP
Thank you for your reply.
> the object need only for OMAP data, not for actual data.
I believe so. However, OMAP is set on an object, so I think at least one
object should exist in the PG. Below, it appears that the OMAP exists even
though the object does not. I find this strange. Is this normal?
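If an index shard object does turn up, it should normally be zero bytes but still carry OMAP keys; a way to confirm that (a sketch; the pool and object names below are placeholders, not taken from this thread):

```shell
# Size 0 is expected for a bucket index shard: the listing data lives in OMAP
rados -p default.rgw.buckets.index stat '.dir.<bucket-marker>.0'

# Count the OMAP keys the shard carries
rados -p default.rgw.buckets.index listomapkeys '.dir.<bucket-marker>.0' | wc -l
```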
Hi,
What do you mean by "strange"? It is normal: the object is needed only for OMAP
data, not for actual data. It is only a key for the k,v database.
I see that you have a low number of objects; some of your PGs don't have any
data at all. I suggest checking your buckets for a properly completed resharding process.
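The resharding check might be done along these lines (a sketch; the bucket name is a placeholder, and `reshard stale-instances` requires Nautilus or later):

```shell
# Any resharding operations still pending or stuck?
radosgw-admin reshard list

# Reshard status for one bucket (placeholder name)
radosgw-admin reshard status --bucket=mybucket

# Old bucket instances left behind by resharding can hold orphaned index shards
radosgw-admin reshard stale-instances list
```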
How does this look (the o
Hi,
Thank you for your reply. Yesterday I ran compaction according to the following
RedHat document (and deep scrub again).
ref. https://access.redhat.com/solutions/5173092
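The compaction in that document boils down to compacting the OSDs' omap (RocksDB) store, roughly like this (a sketch; compacting every OSD at once adds load, so batching may be safer on a busy cluster):

```shell
# Compact the omap database of a single OSD
ceph tell osd.0 compact

# Or compact every OSD (consider doing this in batches)
ceph tell osd.\* compact
```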
The large omap objects warning looks to be resolved this time. However,
based on our observations so far, it could reappear.
Hi,
> On 4 Oct 2022, at 03:36, Yuji Ito (伊藤 祐司) wrote:
>
> After removing the index objects, I ran deep-scrub for all PGs of the index
> pool. However, the problem wasn't resolved.
It seems you just have large OMAPs, not 'bogus shard' objects. Try to look at the
PG stats with
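One way to view the per-PG stats for the index pool (a sketch; the pool name is an assumption, and the JSON field names can vary between releases):

```shell
# Objects and OMAP usage per PG of the index pool
ceph pg ls-by-pool default.rgw.buckets.index

# Same data as JSON, reduced to the interesting columns
ceph pg ls-by-pool default.rgw.buckets.index -f json | \
    jq -r '.pg_stats[] | [.pgid, .stat_sum.num_objects, .stat_sum.num_omap_keys] | @tsv'
```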
Hi,
> Try to deep-scrub all PG's of your index pool
After removing the index objects, I ran deep-scrub for all PGs of the index
pool. However, the problem wasn't resolved.
Thanks,
Yuji
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io
Hi,
Try to deep-scrub all PGs of your index pool.
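A loop like the following could deep-scrub the whole index pool (a sketch; the pool name is an assumption, and older releases emit a bare JSON array rather than a `pg_stats` object):

```shell
for pg in $(ceph pg ls-by-pool default.rgw.buckets.index -f json | jq -r '.pg_stats[].pgid'); do
    ceph pg deep-scrub "$pg"
done
```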
k
Sent from my iPhone
> On 3 Oct 2022, at 03:41, Yuji Ito wrote:
> Would you have any idea how to resolve this condition?
Hi,
By deleting the orphaned bucket index shard objects, the `large omap objects`
warning disappeared temporarily, but it appeared again the next day.
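To pin down which objects keep triggering the warning, these checks may help (a sketch; the log path and the threshold default are from stock deployments and may differ on yours):

```shell
# The affected pool is named in the health detail
ceph health detail

# The exact object names are logged when a deep scrub finds them
grep -i 'large omap object' /var/log/ceph/ceph.log

# Key-count threshold that triggers the warning (default 200000)
ceph config get osd osd_deep_scrub_large_omap_object_key_threshold
```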
```
ceph health detail
HEALTH_WARN 20 large omap objects
[WRN] LARGE_OMAP_OBJECTS: 20 large omap objects
20 large objects found in pool
```
Hi, Eric
Thank you for your reply.
> I don’t believe there is any tooling to find and clean orphaned bucket index
> shards. So if you’re certain they’re no longer needed, you can use `rados`
> commands to remove the objects.
I'll delete the bucket index shard objects using the rados command as suggested.
I don’t believe there is any tooling to find and clean orphaned bucket index
shards. So if you’re certain they’re no longer needed, you can use `rados`
commands to remove the objects.
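A cautious way to do that might be to diff the shard objects present in the pool against the bucket instances RGW still knows about (a sketch; the pool name and file paths are assumptions, and each candidate should be verified by hand before removal):

```shell
# Index shard objects actually present in the pool
rados -p default.rgw.buckets.index ls | grep '^\.dir\.' | sort > /tmp/shards-present

# Bucket instances RGW still tracks (their marker IDs appear in shard names)
radosgw-admin metadata list bucket.instance | jq -r '.[]' | sort > /tmp/instances-known

# After manually confirming a shard matches no known instance:
# rados -p default.rgw.buckets.index rm '.dir.<orphaned-marker>.<shard-id>'
```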
Eric
(he/him)
> On Sep 27, 2022, at 2:37 AM, Yuji Ito (伊藤 祐司) wrote:
>
> Hi,
>
> I have encountered a