Hi, we're running Ceph 15.2.7 (Octopus) and our cluster is warning us about
LARGE_OMAP_OBJECTS (1 large omap objects).
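
(For reference, the affected index object can be tracked down via the health
detail output and the messages written to the cluster log during deep scrub,
e.g. something along these lines, assuming the default log location:

ceph health detail
grep -i 'large omap' /var/log/ceph/ceph.log
)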

Here is what the per-shard omap key count distribution looks like for the bucket
in question; as you can see, all but 3 of the keys reside in shard 2. (A sketch
of how such counts can be gathered follows the listing.)
.dir.5a5c812a-3d31-4d79-87e6-1a17206228ac.18635192.221.0           1
.dir.5a5c812a-3d31-4d79-87e6-1a17206228ac.18635192.221.8           0
.dir.5a5c812a-3d31-4d79-87e6-1a17206228ac.18635192.221.9           0
.dir.5a5c812a-3d31-4d79-87e6-1a17206228ac.18635192.221.7           0
.dir.5a5c812a-3d31-4d79-87e6-1a17206228ac.18635192.221.1           0
.dir.5a5c812a-3d31-4d79-87e6-1a17206228ac.18635192.221.4           0
.dir.5a5c812a-3d31-4d79-87e6-1a17206228ac.18635192.221.3           1
.dir.5a5c812a-3d31-4d79-87e6-1a17206228ac.18635192.221.2      262384
.dir.5a5c812a-3d31-4d79-87e6-1a17206228ac.18635192.221.6           0
.dir.5a5c812a-3d31-4d79-87e6-1a17206228ac.18635192.221.5           0
.dir.5a5c812a-3d31-4d79-87e6-1a17206228ac.18635192.221.12          0
.dir.5a5c812a-3d31-4d79-87e6-1a17206228ac.18635192.221.10          1
.dir.5a5c812a-3d31-4d79-87e6-1a17206228ac.18635192.221.11          0
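
(Rough sketch of how such per-shard counts can be gathered; the index pool name
default.rgw.buckets.index is an assumption, substitute your own:

for obj in $(rados -p default.rgw.buckets.index ls \
             | grep 5a5c812a-3d31-4d79-87e6-1a17206228ac.18635192.221); do
    # count the omap keys on each bucket index shard object
    echo "$obj $(rados -p default.rgw.buckets.index listomapkeys "$obj" | wc -l)"
done
)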

osd_deep_scrub_large_omap_object_key_threshold is set to 200000 by default,
hence the warning observed for this bucket's shard 2 index object.
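
(The effective value can be double-checked on any OSD via the admin socket;
osd.0 below is just an example id:

ceph daemon osd.0 config get osd_deep_scrub_large_omap_object_key_threshold
)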

Dynamic resharding is enabled, and the bucket is not in the process of being 
resharded.
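
(For reference, reshard state can be confirmed with something like the
following; the bucket name is a placeholder:

radosgw-admin reshard list
radosgw-admin reshard status --bucket=<bucket-name>
)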

Versioning is not in use for this bucket, so we're not affected by
https://tracker.ceph.com/issues/46456.
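
(For completeness, versioning status can be confirmed from the S3 side with
something like:

aws s3api get-bucket-versioning --bucket <bucket-name>
)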

Can anyone help us understand why all the keys are getting mapped to a single
shard? Is there a bug here, or is this expected behaviour?

Could it be related to the fact that the bucket contains large multipart 
uploads? (Object names look like this:)
_multipart_TOOTHROT/anonymised/TOOTHROT-DISK1-8c59002f-cffd-4f74-a680-147383ab8d78.vhdx.2~0W-YhP3F7qc70Ad8JoBIugKzu225qs2.5900
_multipart_TOOTHROT/anonymised/TOOTHROT-DISK1-8c59002f-cffd-4f74-a680-147383ab8d78.vhdx.2~0W-YhP3F7qc70Ad8JoBIugKzu225qs2.5901
_multipart_TOOTHROT/anonymised/TOOTHROT-DISK1-8c59002f-cffd-4f74-a680-147383ab8d78.vhdx.2~0W-YhP3F7qc70Ad8JoBIugKzu225qs2.5902
_multipart_TOOTHROT/anonymised/TOOTHROT-DISK1-8c59002f-cffd-4f74-a680-147383ab8d78.vhdx.2~0W-YhP3F7qc70Ad8JoBIugKzu225qs2.5903
_multipart_TOOTHROT/anonymised/TOOTHROT-DISK1-8c59002f-cffd-4f74-a680-147383ab8d78.vhdx.2~0W-YhP3F7qc70Ad8JoBIugKzu225qs2.5904
_multipart_TOOTHROT/anonymised/TOOTHROT-DISK1-8c59002f-cffd-4f74-a680-147383ab8d78.vhdx.2~0W-YhP3F7qc70Ad8JoBIugKzu225qs2.5905
_multipart_TOOTHROT/anonymised/TOOTHROT-DISK1-8c59002f-cffd-4f74-a680-147383ab8d78.vhdx.2~0W-YhP3F7qc70Ad8JoBIugKzu225qs2.5906
_multipart_TOOTHROT/anonymised/TOOTHROT-DISK1-8c59002f-cffd-4f74-a680-147383ab8d78.vhdx.2~0W-YhP3F7qc70Ad8JoBIugKzu225qs2.5907
_multipart_TOOTHROT/anonymised/TOOTHROT-DISK1-8c59002f-cffd-4f74-a680-147383ab8d78.vhdx.2~0W-YhP3F7qc70Ad8JoBIugKzu225qs2.5908
_multipart_TOOTHROT/anonymised/TOOTHROT-DISK1-8c59002f-cffd-4f74-a680-147383ab8d78.vhdx.2~0W-YhP3F7qc70Ad8JoBIugKzu225qs2.5909
_multipart_TOOTHROT/anonymised/TOOTHROT-DISK1-8c59002f-cffd-4f74-a680-147383ab8d78.vhdx.2~2uuwqny_HicO6kx_lPmWEf0zoyvdm_9.7152
_multipart_TOOTHROT/anonymised/TOOTHROT-DISK1-8c59002f-cffd-4f74-a680-147383ab8d78.vhdx.2~2uuwqny_HicO6kx_lPmWEf0zoyvdm_9.7153
_multipart_TOOTHROT/anonymised/TOOTHROT-DISK1-8c59002f-cffd-4f74-a680-147383ab8d78.vhdx.2~2uuwqny_HicO6kx_lPmWEf0zoyvdm_9.7154
_multipart_TOOTHROT/anonymised/TOOTHROT-DISK1-8c59002f-cffd-4f74-a680-147383ab8d78.vhdx.2~2uuwqny_HicO6kx_lPmWEf0zoyvdm_9.7155
_multipart_TOOTHROT/anonymised/TOOTHROT-DISK1-8c59002f-cffd-4f74-a680-147383ab8d78.vhdx.2~2uuwqny_HicO6kx_lPmWEf0zoyvdm_9.7156
_multipart_TOOTHROT/anonymised/TOOTHROT-DISK1-8c59002f-cffd-4f74-a680-147383ab8d78.vhdx.2~2uuwqny_HicO6kx_lPmWEf0zoyvdm_9.7157
_multipart_TOOTHROT/anonymised/TOOTHROT-DISK1-8c59002f-cffd-4f74-a680-147383ab8d78.vhdx.2~2uuwqny_HicO6kx_lPmWEf0zoyvdm_9.7158
_multipart_TOOTHROT/anonymised/TOOTHROT-DISK1-8c59002f-cffd-4f74-a680-147383ab8d78.vhdx.2~2uuwqny_HicO6kx_lPmWEf0zoyvdm_9.7159