> 
> >
> >>
> >> What I also see is that I have three OSDs with quite a lot of OMAP
> >> data compared to the other OSDs (~20 times higher). I don't know if
> >> this is an issue:
> >
> > On my 2TB SSDs I have 2GB - 4GB of omap data, while on the 8TB HDDs
> > the omap data is only 53MB - 100MB.
> > Should I manually clean this? (how? :))
> 
> The amount of omap data depends on multiple things, especially the use-
> case.  If a given OSD is only used for RBD, it will have a different
> omap experience than if it were used for an RGW index pool.
> 

This cluster (mine) is mostly an RBD cluster.

Is it correct that compacting leveldb is what 'cleaning omap data' comes
down to? And can this only be done by setting leveldb_compact_on_mount = true
in ceph.conf and then restarting the OSD?
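
From what I understand (just a sketch on my side, assuming a leveldb/filestore
backed OSD; newer releases use rocksdb and different option names), the
on-mount compaction would look like this:

  # ceph.conf, [osd] section -- compact leveldb once when the OSD starts
  [osd]
  leveldb_compact_on_mount = true

  # then restart the OSD so the option takes effect
  systemctl restart ceph-osd@<id>

And, if the release supports it, an online compaction can apparently also be
triggered without a restart through the admin socket (I'm assuming the
'compact' command is available here):

  ceph daemon osd.<id> compact

Afterwards I would check the OMAP column of 'ceph osd df' again (on releases
that show it) to verify whether the compaction actually shrank anything.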