Hi,

Thank you so much for sharing your case.
Two weeks ago, one of my users manually purged old Swift objects with a custom
script instead of using the object expiry feature. That might be the cause.
I will leave the HEALTH_WARN message as-is if it has no impact.
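
If the purge script bypassed expiry, any leftover TTL registrations would still
be visible as omap keys on the obj_delete_at_hint.* objects. An untested sketch
to check this (object names appear in the listing further down this thread):

# count leftover expiry entries on every hint shard
for o in $(rados ls -p default.rgw.log | grep obj_delete_at_hint); do
    printf '%s: ' "$o"; rados -p default.rgw.log listomapkeys "$o" | wc -l
done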

Regards,
Arnondh

________________________________
From: Magnus Grönlund <mag...@gronlund.se>
Sent: Tuesday, May 21, 2019 3:23 PM
To: mr. non non
Cc: EDH - Manuel Rios Fernandez; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Large OMAP Objects in default.rgw.log pool



On Tue, 21 May 2019 at 02:12, mr. non non <arnon...@hotmail.com> wrote:
Has anyone seen this issue before? From my research, many people have issues
with rgw.index related to too small a number of index shards (too many objects
per index). I also checked the thread
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-March/033611.html but
found no clues there, because the number of data objects is below 100k per
index and the size of the objects in rgw.log is 0.

Hi,

I've had the same issue with a large omap object in the default.rgw.log pool
for a long time, but after determining that it wasn't a big issue for us and
that the number of omap keys wasn't growing, I haven't actively tried to find
a solution.
The cluster ran Jewel for over a year, and it was during that time that the
omap entries in default.rgw.log were created, but (of course) the warning only
showed up after the upgrade to Luminous.
# ceph health detail
…
HEALTH_WARN 1 large omap objects
LARGE_OMAP_OBJECTS 1 large omap objects
    1 large objects found in pool 'default.rgw.log'
…

# zgrep "Large" ceph-osd.324.log-20190516.gz
2019-05-15 23:14:48.821612 7fa17cf74700  0 log_channel(cluster) log [WRN] : Large omap object found. Object: 48:8b8fff66:::meta.log.128a376b-8807-4e19-9ddc-8220fd50d7c1.41:head Key count: 2844494 Size (bytes): 682777168
#

The reply from Pavan Rallabhandi (below) in the previous thread might be
useful, but I'm not familiar enough with rgw to tell, or to figure out how it
could help resolve the issue.
>That can happen if you have a lot of objects with Swift object expiry (TTL)
>enabled. You can 'listomapkeys' on these log pool objects and check for the
>objects that have registered for TTL as omap entries. I know this is the case
>with at least the Jewel version.
>
>Thanks,
>-Pavan.
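
In our case the large object is a meta.log shard rather than an expiry hint,
but Pavan's 'listomapkeys' check translates directly (untested sketch, object
name taken from the OSD log line above):

# peek at the omap keys registered on the offending metadata-log shard
rados -p default.rgw.log listomapkeys meta.log.128a376b-8807-4e19-9ddc-8220fd50d7c1.41 | head

# the meta.log objects back the RGW metadata changes log (used for multisite
# sync); on a single-site cluster they can supposedly be trimmed. Flag names
# here are from memory, so please verify with 'radosgw-admin help' first:
radosgw-admin mdlog trim --period=128a376b-8807-4e19-9ddc-8220fd50d7c1 --shard-id=41 --end-date="2019-05-01 00:00:00"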

Regards
/Magnus


Thanks.
________________________________
From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of mr. non non <arnon...@hotmail.com>
Sent: Monday, May 20, 2019 7:32 PM
To: EDH - Manuel Rios Fernandez; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Large OMAP Objects in default.rgw.log pool

Hi Manuel,

I use version 12.2.8 with BlueStore, and I also configured index sharding
manually (set to 100). As I checked, no bucket reaches 100k objects_per_shard.
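For reference, the per-shard numbers can be listed with the limit check
(assuming a 12.2.x radosgw-admin; it prints objects_per_shard and fill_status
for every bucket):

radosgw-admin bucket limit check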
Here are the health status and cluster log:

# ceph health detail
HEALTH_WARN 1 large omap objects
LARGE_OMAP_OBJECTS 1 large omap objects
    1 large objects found in pool 'default.rgw.log'
    Search the cluster log for 'Large omap object found' for more details.

# cat ceph.log | tail -2
2019-05-19 17:49:36.306481 mon.MONNODE1 mon.0 10.118.191.231:6789/0 528758 : cluster [WRN] Health check failed: 1 large omap objects (LARGE_OMAP_OBJECTS)
2019-05-19 17:49:34.535543 osd.38 osd.38 MONNODE1_IP:6808/3514427 12 : cluster [WRN] Large omap object found. Object: 4:b172cd59:usage::usage.26:head Key count: 8720830 Size (bytes): 1647024346
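
The object key shows usage.26 in the "usage" namespace, i.e. the RGW usage
log rather than a bucket index. A hedged sketch of how that could be inspected
and, if the usage data is expendable, trimmed (the dates are placeholders):

# peek at the omap keys; note -N to select the rados namespace
rados -p default.rgw.log -N usage listomapkeys usage.26 | head

# trim usage-log entries up to a cutoff date (trimming all users may also
# require --yes-i-really-mean-it)
radosgw-admin usage trim --start-date=2018-01-01 --end-date=2019-04-30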

All object sizes are 0:
$ for i in `rados ls -p default.rgw.log`; do rados stat -p default.rgw.log ${i}; done | more
default.rgw.log/obj_delete_at_hint.0000000078 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/meta.history mtime 2019-05-20 19:19:40.000000, size 50
default.rgw.log/obj_delete_at_hint.0000000070 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000104 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000026 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000028 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000040 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000015 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000069 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000095 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000003 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000047 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000035 mtime 2019-05-20 19:31:45.000000, size 0
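
Note that 'rados stat' only reports data size; omap keys are not included,
which is why everything above shows size 0. The omap side can be counted
directly (sketch, object and namespace taken from the OSD log line above):

# ~8.7M keys would confirm the warning despite the zero data size
rados -p default.rgw.log -N usage listomapkeys usage.26 | wc -l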


Please kindly advise how to remove the HEALTH_WARN message.
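
My understanding is that even after the entries are trimmed, the warning only
clears once the PG holding the object is deep-scrubbed again; a sketch (the PG
id below is a placeholder, the real one comes from the first command):

# find the PG for the object, then deep-scrub it to re-evaluate the warning
ceph osd map default.rgw.log usage.26
ceph pg deep-scrub 4.59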

Many thanks.
Arnondh

________________________________
From: EDH - Manuel Rios Fernandez <mrios...@easydatahost.com>
Sent: Monday, May 20, 2019 5:41 PM
To: 'mr. non non'; ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Large OMAP Objects in default.rgw.log pool

Hi Arnondh,

What's your Ceph version?

Regards

From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of mr. non non
Sent: Monday, May 20, 2019 12:39 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Large OMAP Objects in default.rgw.log pool

Hi,

I have found the same issue as described above. Does anyone know how to fix
it?

Thanks.
Arnondh

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com