Re: [ceph-users] Large OMAP Objects in default.rgw.log pool

2019-05-21 Thread mr. non non
Hi,

Thank you so much for sharing your case.
Two weeks ago, one of my users manually purged old Swift objects with a custom
script but didn't use the object expiry feature. This might be the cause.
I will leave the health_warn message alone if it has no impact.

Regards,
Arnondh


From: Magnus Grönlund 
Sent: Tuesday, May 21, 2019 3:23 PM
To: mr. non non
Cc: EDH - Manuel Rios Fernandez; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Large OMAP Objects in default.rgw.log pool



On Tue, 21 May 2019 at 02:12, mr. non non <arnon...@hotmail.com> wrote:
Has anyone seen this issue before? From my research, many people have issues with
rgw.index related to too small a number of index shards (too many objects per index).
I also checked this thread
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-March/033611.html but
didn't find any clues, because the number of data objects is below 100k per index
and the size of the objects in rgw.log is 0.

Hi,

I've had the same issue with a large omap object in the default.rgw.log pool
for a long time, but after determining that it wasn't a big issue for us and
that the number of omap keys wasn't growing, I haven't actively tried to find a
solution.
The cluster ran Jewel for over a year, and it was during this time that the
omap entries in default.rgw.log were created, but (of course) the warning only
showed up after the upgrade to Luminous.
#ceph health detail
…
HEALTH_WARN 1 large omap objects
LARGE_OMAP_OBJECTS 1 large omap objects
1 large objects found in pool 'default.rgw.log'
…

#zgrep "Large" ceph-osd.324.log-20190516.gz
2019-05-15 23:14:48.821612 7fa17cf74700  0 log_channel(cluster) log [WRN] : 
Large omap object found. Object: 
48:8b8fff66:::meta.log.128a376b-8807-4e19-9ddc-8220fd50d7c1.41:head Key count: 
2844494 Size (bytes): 682777168
#
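
(For what it's worth, a minimal way to double-check such an object is to list
its omap keys directly with rados. This is only an inspection sketch; the
object name is simply the one from the log line above, so substitute your own:)

# rados -p default.rgw.log listomapkeys meta.log.128a376b-8807-4e19-9ddc-8220fd50d7c1.41 | wc -l
# rados -p default.rgw.log listomapkeys meta.log.128a376b-8807-4e19-9ddc-8220fd50d7c1.41 | head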

The reply from Pavan Rallabhandi (below) in the previous thread might be useful,
but I'm not familiar enough with rgw to tell, or to figure out how it could help
resolve the issue.
>That can happen if you have a lot of objects with Swift object expiry (TTL)
>enabled. You can run 'listomapkeys' on these log pool objects and check for the
>objects that have registered for TTL as omap entries. I know this is the case
>with at least the Jewel version.
>
>Thanks,
>-Pavan.

Regards
/Magnus


Thanks.

From: ceph-users <ceph-users-boun...@lists.ceph.com>
on behalf of mr. non non <arnon...@hotmail.com>
Sent: Monday, May 20, 2019 7:32 PM
To: EDH - Manuel Rios Fernandez; 
ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Large OMAP Objects in default.rgw.log pool

Hi Manuel,

I use version 12.2.8 with BlueStore and also use manual index sharding
(configured to 100).  As far as I have checked, no bucket reaches 100k objects_per_shard.
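
(As a side note, one quick way to verify per-shard object counts is radosgw-admin's
limit check; treat this as a sketch and confirm the subcommand is present on your
release:)

# radosgw-admin bucket limit check | grep -E '"bucket"|objects_per_shard|fill_status'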
Here are the health status and the cluster log:

# ceph health detail
HEALTH_WARN 1 large omap objects
LARGE_OMAP_OBJECTS 1 large omap objects
1 large objects found in pool 'default.rgw.log'
Search the cluster log for 'Large omap object found' for more details.

# cat ceph.log | tail -2
2019-05-19 17:49:36.306481 mon.MONNODE1 mon.0 10.118.191.231:6789/0 528758 : cluster [WRN]
Health check failed: 1 large omap objects (LARGE_OMAP_OBJECTS)
2019-05-19 17:49:34.535543 osd.38 osd.38 MONNODE1_IP:6808/3514427 12 : cluster 
[WRN] Large omap object found. Object: 4:b172cd59:usage::usage.26:head Key 
count: 8720830 Size (bytes): 1647024346
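
(Note that this object sits in the "usage" rados namespace of default.rgw.log, so a
plain rados ls will not show it. A minimal sketch for inspecting it and, if the RGW
usage log is not needed, for trimming it; please verify the radosgw-admin flags on
your version before running anything:)

# rados -p default.rgw.log -N usage listomapkeys usage.26 | wc -l
# radosgw-admin usage trim --start-date=2018-01-01 --end-date=2019-05-01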

All object sizes are 0.
$  for i in `rados ls -p default.rgw.log`; do rados stat -p default.rgw.log 
${i};done  | more
default.rgw.log/obj_delete_at_hint.78 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/meta.history mtime 2019-05-20 19:19:40.00, size 50
default.rgw.log/obj_delete_at_hint.70 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.000104 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.26 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.28 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.40 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.15 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.69 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.95 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.03 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.47 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.35 mtime 2019-05-20 19:31:45.00, 
size 0
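
(Object size does not include omap data, though, so a per-object key count is more
informative than rados stat here. A rough sketch that also covers non-default
namespaces such as "usage":)

$ rados -p default.rgw.log ls --all | while read ns obj; do
    # objects in the default namespace have an empty namespace column
    [ -z "$obj" ] && obj="$ns" && ns=""
    if [ -n "$ns" ]; then
      n=$(rados -p default.rgw.log -N "$ns" listomapkeys "$obj" | wc -l)
    else
      n=$(rados -p default.rgw.log listomapkeys "$obj" | wc -l)
    fi
    echo "$n $ns/$obj"
  done | sort -n | tail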


Please kindly advise how to remove the health_warn message.

Many thanks.
Arnondh



Re: [ceph-users] Large OMAP Objects in default.rgw.log pool

2019-05-21 Thread Magnus Grönlund
On Tue, 21 May 2019 at 02:12, mr. non non wrote:

> Has anyone seen this issue before? From my research, many people have issues
> with rgw.index related to too small a number of index shards (too many
> objects per index).
> I also checked this thread
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-March/033611.html but
> didn't find any clues, because the number of data objects is below 100k per
> index and the size of the objects in rgw.log is 0.
>
Hi,

I've had the same issue with a large omap object in the default.rgw.log
pool for a long time, but after determining that it wasn't a big issue for
us and that the number of omap keys wasn't growing, I haven't actively tried to
find a solution.
The cluster ran Jewel for over a year, and it was during this time that
the omap entries in default.rgw.log were created, but (of course) the
warning only showed up after the upgrade to Luminous.
#ceph health detail
…
HEALTH_WARN 1 large omap objects
LARGE_OMAP_OBJECTS 1 large omap objects
1 large objects found in pool 'default.rgw.log'
…

#zgrep "Large" ceph-osd.324.log-20190516.gz
2019-05-15 23:14:48.821612 7fa17cf74700  0 log_channel(cluster) log [WRN] :
Large omap object found. Object:
48:8b8fff66:::meta.log.128a376b-8807-4e19-9ddc-8220fd50d7c1.41:head Key
count: 2844494 Size (bytes): 682777168
#

The reply from Pavan Rallabhandi (below) in the previous thread might be
useful, but I'm not familiar enough with rgw to tell, or to figure out how it
could help resolve the issue.
>That can happen if you have a lot of objects with Swift object expiry (TTL)
enabled. You can run 'listomapkeys' on these log pool objects and check for the
objects that have registered for TTL as omap entries. I know this is the case
with at least the Jewel version.
>
>Thanks,
>-Pavan.

Regards
/Magnus



> Thanks.
> --
> *From:* ceph-users  on behalf of mr.
> non non 
> *Sent:* Monday, May 20, 2019 7:32 PM
> *To:* EDH - Manuel Rios Fernandez; ceph-users@lists.ceph.com
> *Subject:* Re: [ceph-users] Large OMAP Objects in default.rgw.log pool
>
> Hi Manuel,
>
> I use version 12.2.8 with BlueStore and also use manual index sharding
> (configured to 100).  As far as I have checked, no bucket reaches 100k
> objects_per_shard.
> Here are the health status and the cluster log:
>
> # ceph health detail
> HEALTH_WARN 1 large omap objects
> LARGE_OMAP_OBJECTS 1 large omap objects
> 1 large objects found in pool 'default.rgw.log'
> Search the cluster log for 'Large omap object found' for more details.
>
> # cat ceph.log | tail -2
> 2019-05-19 17:49:36.306481 mon.MONNODE1 mon.0 10.118.191.231:6789/0
> 528758 : cluster [WRN] Health check failed: 1 large omap objects
> (LARGE_OMAP_OBJECTS)
> 2019-05-19 17:49:34.535543 osd.38 osd.38 MONNODE1_IP:6808/3514427 12 :
> cluster [WRN] Large omap object found. Object:
> 4:b172cd59:usage::usage.26:head Key count: 8720830 Size (bytes): 1647024346
>
> All object sizes are 0.
> $  for i in `rados ls -p default.rgw.log`; do rados stat -p
> default.rgw.log ${i};done  | more
> default.rgw.log/obj_delete_at_hint.78 mtime 2019-05-20
> 19:31:45.00, size 0
> default.rgw.log/meta.history mtime 2019-05-20 19:19:40.00, size 50
> default.rgw.log/obj_delete_at_hint.70 mtime 2019-05-20
> 19:31:45.00, size 0
> default.rgw.log/obj_delete_at_hint.000104 mtime 2019-05-20
> 19:31:45.00, size 0
> default.rgw.log/obj_delete_at_hint.26 mtime 2019-05-20
> 19:31:45.00, size 0
> default.rgw.log/obj_delete_at_hint.28 mtime 2019-05-20
> 19:31:45.00, size 0
> default.rgw.log/obj_delete_at_hint.40 mtime 2019-05-20
> 19:31:45.00, size 0
> default.rgw.log/obj_delete_at_hint.15 mtime 2019-05-20
> 19:31:45.00, size 0
> default.rgw.log/obj_delete_at_hint.69 mtime 2019-05-20
> 19:31:45.00, size 0
> default.rgw.log/obj_delete_at_hint.95 mtime 2019-05-20
> 19:31:45.00, size 0
> default.rgw.log/obj_delete_at_hint.03 mtime 2019-05-20
> 19:31:45.00, size 0
> default.rgw.log/obj_delete_at_hint.47 mtime 2019-05-20
> 19:31:45.00, size 0
> default.rgw.log/obj_delete_at_hint.35 mtime 2019-05-20
> 19:31:45.00, size 0
>
>
> Please kindly advise how to remove the health_warn message.
>
> Many thanks.
> Arnondh
>
> --
> *From:* EDH - Manuel Rios Fernandez 
> *Sent:* Monday, May 20, 2019 5:41 PM
> *To:* 'mr. non non'; ceph-users@lists.ceph.com
> *Subject:* RE: [ceph-users] Large OMAP Objects in default.rgw.log pool
>
>
> Hi Arnondh,
>
>
>
> What's your ceph version?
>
>
>
> Regards
>
>
>
>
>

Re: [ceph-users] Large OMAP Objects in default.rgw.log pool

2019-05-20 Thread mr. non non
Has anyone seen this issue before? From my research, many people have issues with
rgw.index related to too small a number of index shards (too many
objects per index).
I also checked this thread
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-March/033611.html but
didn't find any clues, because the number of data objects is below 100k per index
and the size of the objects in rgw.log is 0.

Thanks.

From: ceph-users  on behalf of mr. non non 

Sent: Monday, May 20, 2019 7:32 PM
To: EDH - Manuel Rios Fernandez; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Large OMAP Objects in default.rgw.log pool

Hi Manuel,

I use version 12.2.8 with BlueStore and also use manual index sharding
(configured to 100).  As far as I have checked, no bucket reaches 100k objects_per_shard.
Here are the health status and the cluster log:

# ceph health detail
HEALTH_WARN 1 large omap objects
LARGE_OMAP_OBJECTS 1 large omap objects
1 large objects found in pool 'default.rgw.log'
Search the cluster log for 'Large omap object found' for more details.

# cat ceph.log | tail -2
2019-05-19 17:49:36.306481 mon.MONNODE1 mon.0 10.118.191.231:6789/0 528758 : 
cluster [WRN] Health check failed: 1 large omap objects (LARGE_OMAP_OBJECTS)
2019-05-19 17:49:34.535543 osd.38 osd.38 MONNODE1_IP:6808/3514427 12 : cluster 
[WRN] Large omap object found. Object: 4:b172cd59:usage::usage.26:head Key 
count: 8720830 Size (bytes): 1647024346

All object sizes are 0.
$  for i in `rados ls -p default.rgw.log`; do rados stat -p default.rgw.log 
${i};done  | more
default.rgw.log/obj_delete_at_hint.78 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/meta.history mtime 2019-05-20 19:19:40.00, size 50
default.rgw.log/obj_delete_at_hint.70 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.000104 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.26 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.28 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.40 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.15 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.69 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.95 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.03 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.47 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.35 mtime 2019-05-20 19:31:45.00, 
size 0


Please kindly advise how to remove the health_warn message.

Many thanks.
Arnondh


From: EDH - Manuel Rios Fernandez 
Sent: Monday, May 20, 2019 5:41 PM
To: 'mr. non non'; ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Large OMAP Objects in default.rgw.log pool


Hi Arnondh,



What's your ceph version?



Regards





From: ceph-users  On behalf of mr. non non
Sent: Monday, May 20, 2019 12:39
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Large OMAP Objects in default.rgw.log pool



Hi,



I found the same issue as described above.

Does anyone know how to fix it?



Thanks.

Arnondh


Re: [ceph-users] Large OMAP Objects in default.rgw.log pool

2019-05-20 Thread mr. non non
Hi Manuel,

I use version 12.2.8 with BlueStore and also use manual index sharding
(configured to 100).  As far as I have checked, no bucket reaches 100k objects_per_shard.
Here are the health status and the cluster log:

# ceph health detail
HEALTH_WARN 1 large omap objects
LARGE_OMAP_OBJECTS 1 large omap objects
1 large objects found in pool 'default.rgw.log'
Search the cluster log for 'Large omap object found' for more details.

# cat ceph.log | tail -2
2019-05-19 17:49:36.306481 mon.MONNODE1 mon.0 10.118.191.231:6789/0 528758 : 
cluster [WRN] Health check failed: 1 large omap objects (LARGE_OMAP_OBJECTS)
2019-05-19 17:49:34.535543 osd.38 osd.38 MONNODE1_IP:6808/3514427 12 : cluster 
[WRN] Large omap object found. Object: 4:b172cd59:usage::usage.26:head Key 
count: 8720830 Size (bytes): 1647024346

All object sizes are 0.
$  for i in `rados ls -p default.rgw.log`; do rados stat -p default.rgw.log 
${i};done  | more
default.rgw.log/obj_delete_at_hint.78 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/meta.history mtime 2019-05-20 19:19:40.00, size 50
default.rgw.log/obj_delete_at_hint.70 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.000104 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.26 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.28 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.40 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.15 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.69 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.95 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.03 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.47 mtime 2019-05-20 19:31:45.00, 
size 0
default.rgw.log/obj_delete_at_hint.35 mtime 2019-05-20 19:31:45.00, 
size 0


Please kindly advise how to remove the health_warn message.

Many thanks.
Arnondh


From: EDH - Manuel Rios Fernandez 
Sent: Monday, May 20, 2019 5:41 PM
To: 'mr. non non'; ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Large OMAP Objects in default.rgw.log pool


Hi Arnondh,



What's your ceph version?



Regards





From: ceph-users  On behalf of mr. non non
Sent: Monday, May 20, 2019 12:39
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Large OMAP Objects in default.rgw.log pool



Hi,



I found the same issue as described above.

Does anyone know how to fix it?



Thanks.

Arnondh


Re: [ceph-users] Large OMAP Objects in default.rgw.log pool

2019-05-20 Thread EDH - Manuel Rios Fernandez
Hi Arnondh,

 

What's your ceph version?

 

Regards

 

 

From: ceph-users  On behalf of mr. non non
Sent: Monday, May 20, 2019 12:39
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Large OMAP Objects in default.rgw.log pool

 

Hi,

 

I found the same issue as described above.

Does anyone know how to fix it?

 

Thanks.

Arnondh



Re: [ceph-users] Large OMAP Objects in default.rgw.log pool

2019-03-09 Thread Pavan Rallabhandi
That can happen if you have a lot of objects with Swift object expiry (TTL)
enabled. You can run 'listomapkeys' on these log pool objects and check for the
objects that have registered for TTL as omap entries. I know this is the case
with at least the Jewel version.

Thanks,
-Pavan.
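
(A concrete sketch of that check, assuming the default log pool and the expiry-hint
object names listed further down in this thread:)

# count the TTL entries registered on each expiry-hint shard; non-zero counts
# point at the shards holding the expiry omap entries
$ for o in $(rados ls -p default.rgw.log | grep obj_delete_at_hint); do echo "$o $(rados -p default.rgw.log listomapkeys $o | wc -l)"; done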

On 3/7/19, 10:09 PM, "ceph-users on behalf of Brad Hubbard" 
 wrote:

On Fri, Mar 8, 2019 at 4:46 AM Samuel Taylor Liston  
wrote:
>
> Hello All,
> I have recently had 32 large omap objects appear in my
default.rgw.log pool.  Running Luminous 12.2.8.
>
> Not sure what to think about these. I've done a lot of reading
about how, when these normally occur, it is related to a bucket needing
resharding, but it doesn't look like my default.rgw.log pool has anything in
it, let alone buckets.  Here's some info on the system:
>
> [root@elm-rgw01 ~]# ceph versions
> {
> "mon": {
> "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) 
luminous (stable)": 5
> },
> "mgr": {
> "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) 
luminous (stable)": 1
> },
> "osd": {
> "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) 
luminous (stable)": 192
> },
> "mds": {},
> "rgw": {
> "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) 
luminous (stable)": 1
> },
> "overall": {
> "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) 
luminous (stable)": 199
> }
> }
> [root@elm-rgw01 ~]# ceph osd pool ls
> .rgw.root
> default.rgw.control
> default.rgw.meta
> default.rgw.log
> default.rgw.buckets.index
> default.rgw.buckets.non-ec
> default.rgw.buckets.data
> [root@elm-rgw01 ~]# ceph health detail
> HEALTH_WARN 32 large omap objects
> LARGE_OMAP_OBJECTS 32 large omap objects
> 32 large objects found in pool 'default.rgw.log'
> Search the cluster log for 'Large omap object found' for more
details.
>
> Looking closer at these objects, they are all of size 0.  Also, that pool
shows a capacity usage of 0:

The size here relates to data size. Object map (omap) data is metadata
so an object of size 0 can have considerable omap data associated with
it (the omap data is stored separately from the object in a key/value
database). The large omap warning in health detail output should tell
you to "Search the cluster log for 'Large omap object found' for more
details." If you do that you should get the names of the specific
objects involved. You can then use the rados commands listomapkeys and
listomapvals to see the specifics of the omap data. Someone more
familiar with rgw can then probably help you out on what purpose they
serve.

HTH.

>
> (just a sampling of the 236 objects at size 0)
>
> [root@elm-mon01 ceph]# for i in `rados ls -p default.rgw.log`; do echo 
${i}; rados stat -p default.rgw.log ${i};done
> obj_delete_at_hint.78
> default.rgw.log/obj_delete_at_hint.78 mtime 2019-03-07 
11:39:19.00, size 0
> obj_delete_at_hint.70
> default.rgw.log/obj_delete_at_hint.70 mtime 2019-03-07 
11:39:19.00, size 0
> obj_delete_at_hint.000104
> default.rgw.log/obj_delete_at_hint.000104 mtime 2019-03-07 
11:39:20.00, size 0
> obj_delete_at_hint.26
> default.rgw.log/obj_delete_at_hint.26 mtime 2019-03-07 
11:39:19.00, size 0
> obj_delete_at_hint.28
> default.rgw.log/obj_delete_at_hint.28 mtime 2019-03-07 
11:39:19.00, size 0
> obj_delete_at_hint.40
> default.rgw.log/obj_delete_at_hint.40 mtime 2019-03-07 
11:39:19.00, size 0
> obj_delete_at_hint.15
> default.rgw.log/obj_delete_at_hint.15 mtime 2019-03-07 
11:39:19.00, size 0
> obj_delete_at_hint.69
> default.rgw.log/obj_delete_at_hint.69 mtime 2019-03-07 
11:39:19.00, size 0
> obj_delete_at_hint.95
> default.rgw.log/obj_delete_at_hint.95 mtime 2019-03-07 
11:39:19.00, size 0
> obj_delete_at_hint.03
> default.rgw.log/obj_delete_at_hint.03 mtime 2019-03-07 
11:39:19.00, size 0
> obj_delete_at_hint.47
> default.rgw.log/obj_delete_at_hint.47 mtime 2019-03-07 
11:39:19.00, size 0
>
>
> [root@elm-mon01 ceph]# rados df
> POOL_NAME                   USED     OBJECTS    CLONES  COPIES      MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS     RD       WR_OPS     WR
> .rgw.root                   1.09KiB        4        0          12                   0        0         0      14853  9.67MiB          0      0B
> default.rgw.buckets.data     444TiB 166829939       0  1000979634                   0        0         0  362357590   859TiB  909188749  703TiB

Re: [ceph-users] Large OMAP Objects in default.rgw.log pool

2019-03-07 Thread Brad Hubbard
On Fri, Mar 8, 2019 at 4:46 AM Samuel Taylor Liston  wrote:
>
> Hello All,
> I have recently had 32 large omap objects appear in my default.rgw.log
> pool.  Running Luminous 12.2.8.
>
> Not sure what to think about these. I've done a lot of reading
> about how, when these normally occur, it is related to a bucket needing
> resharding, but it doesn't look like my default.rgw.log pool has anything in
> it, let alone buckets.  Here's some info on the system:
>
> [root@elm-rgw01 ~]# ceph versions
> {
> "mon": {
> "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) 
> luminous (stable)": 5
> },
> "mgr": {
> "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) 
> luminous (stable)": 1
> },
> "osd": {
> "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) 
> luminous (stable)": 192
> },
> "mds": {},
> "rgw": {
> "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) 
> luminous (stable)": 1
> },
> "overall": {
> "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) 
> luminous (stable)": 199
> }
> }
> [root@elm-rgw01 ~]# ceph osd pool ls
> .rgw.root
> default.rgw.control
> default.rgw.meta
> default.rgw.log
> default.rgw.buckets.index
> default.rgw.buckets.non-ec
> default.rgw.buckets.data
> [root@elm-rgw01 ~]# ceph health detail
> HEALTH_WARN 32 large omap objects
> LARGE_OMAP_OBJECTS 32 large omap objects
> 32 large objects found in pool 'default.rgw.log'
> Search the cluster log for 'Large omap object found' for more details.
>
> Looking closer at these objects, they are all of size 0.  Also, that pool shows
> a capacity usage of 0:

The size here relates to data size. Object map (omap) data is metadata
so an object of size 0 can have considerable omap data associated with
it (the omap data is stored separately from the object in a key/value
database). The large omap warning in health detail output should tell
you to "Search the cluster log for 'Large omap object found' for more
details." If you do that you should get the names of the specific
objects involved. You can then use the rados commands listomapkeys and
listomapvals to see the specifics of the omap data. Someone more
familiar with rgw can then probably help you out on what purpose they
serve.

HTH.
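
(To make that concrete, a minimal sequence might look like the following; the object
name is only an example taken from this thread, and the cluster log path assumes a
default mon host layout:)

# grep 'Large omap object found' /var/log/ceph/ceph.log
# rados -p default.rgw.log listomapkeys obj_delete_at_hint.26 | wc -l
# rados -p default.rgw.log listomapvals obj_delete_at_hint.26 | head -40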

>
> (just a sampling of the 236 objects at size 0)
>
> [root@elm-mon01 ceph]# for i in `rados ls -p default.rgw.log`; do echo ${i}; 
> rados stat -p default.rgw.log ${i};done
> obj_delete_at_hint.78
> default.rgw.log/obj_delete_at_hint.78 mtime 2019-03-07 
> 11:39:19.00, size 0
> obj_delete_at_hint.70
> default.rgw.log/obj_delete_at_hint.70 mtime 2019-03-07 
> 11:39:19.00, size 0
> obj_delete_at_hint.000104
> default.rgw.log/obj_delete_at_hint.000104 mtime 2019-03-07 
> 11:39:20.00, size 0
> obj_delete_at_hint.26
> default.rgw.log/obj_delete_at_hint.26 mtime 2019-03-07 
> 11:39:19.00, size 0
> obj_delete_at_hint.28
> default.rgw.log/obj_delete_at_hint.28 mtime 2019-03-07 
> 11:39:19.00, size 0
> obj_delete_at_hint.40
> default.rgw.log/obj_delete_at_hint.40 mtime 2019-03-07 
> 11:39:19.00, size 0
> obj_delete_at_hint.15
> default.rgw.log/obj_delete_at_hint.15 mtime 2019-03-07 
> 11:39:19.00, size 0
> obj_delete_at_hint.69
> default.rgw.log/obj_delete_at_hint.69 mtime 2019-03-07 
> 11:39:19.00, size 0
> obj_delete_at_hint.95
> default.rgw.log/obj_delete_at_hint.95 mtime 2019-03-07 
> 11:39:19.00, size 0
> obj_delete_at_hint.03
> default.rgw.log/obj_delete_at_hint.03 mtime 2019-03-07 
> 11:39:19.00, size 0
> obj_delete_at_hint.47
> default.rgw.log/obj_delete_at_hint.47 mtime 2019-03-07 
> 11:39:19.00, size 0
>
>
> [root@elm-mon01 ceph]# rados df
> POOL_NAME                   USED     OBJECTS    CLONES  COPIES      MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS     RD       WR_OPS     WR
> .rgw.root                   1.09KiB        4        0          12                   0        0         0      14853  9.67MiB          0      0B
> default.rgw.buckets.data     444TiB 166829939       0  1000979634                   0        0         0  362357590   859TiB  909188749  703TiB
> default.rgw.buckets.index        0B      358        0        1074                   0        0         0  729694496  1.04TiB  522654976      0B
> default.rgw.buckets.non-ec       0B      182        0         546                   0        0         0  194204616   148GiB   97962607      0B
> default.rgw.control              0B        8        0          24                   0        0         0          0       0B          0      0B
> default.rgw.log                  0B      236        0         708                   0        0         0   33268863  3.01TiB   18415356      0B
> default.rgw.meta             16.2KiB       67        0         201                   0        0