Re: [ceph-users] compacting omap doubles its size

2019-02-13 Thread David Turner
Sorry for the late response on this, but life has been really busy over the
holidays.

We compact our omaps offline with the ceph-kvstore-tool.  Here [1] is a
copy of the script that we use for our clusters.  You might need to modify
things a bit for your environment.  I don't remember which release added this
functionality to ceph-kvstore-tool, but it exists in 12.2.4.  We
need to do this because our OSDs get marked out when they try to compact
their own omaps online.  We run this script monthly and then ad-hoc as we
find OSDs compacting their own omaps live.


[1] https://gist.github.com/drakonstein/4391c0b268a35b64d4f26a12e5058ba9
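
The core of it is just the compact subcommand of ceph-kvstore-tool run against
a stopped OSD; a minimal sketch of that step (the OSD id and path are examples,
and the backend argument has to match the OSD's omap store):

  # sketch: compact one filestore OSD's omap offline
  ceph osd set noout
  systemctl stop ceph-osd@0
  # the first argument must match the omap backend (leveldb or rocksdb)
  ceph-kvstore-tool rocksdb /var/lib/ceph/osd/ceph-0/current/omap compact
  systemctl start ceph-osd@0
  ceph osd unset noout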

On Thu, Nov 29, 2018 at 6:15 PM Tomasz Płaza wrote:

> Hi,
>
> I have a ceph 12.2.8 cluster on filestore with rather large omap dirs
> (the average size is about 150G). Recently slow requests became a problem,
> so after some digging I decided to convert the omap from leveldb to rocksdb.
> The conversion went fine and the slow request rate dropped to an acceptable
> level. Unfortunately, the conversion did not shrink most of the omap dirs,
> so I tried online compaction:
>
> Before compaction:  50G   /var/lib/ceph/osd/ceph-0/current/omap/
> After compaction:   100G  /var/lib/ceph/osd/ceph-0/current/omap/
> Purge and recreate: 1.5G  /var/lib/ceph/osd/ceph-0/current/omap/
>
> Before compaction:  135G  /var/lib/ceph/osd/ceph-5/current/omap/
> After compaction:   260G  /var/lib/ceph/osd/ceph-5/current/omap/
> Purge and recreate: 2.5G  /var/lib/ceph/osd/ceph-5/current/omap/
>
>
> A compaction that makes the omap bigger seems quite weird and
> frustrating to me. Please help.
>
>
> P.S. My cluster suffered from ongoing index resharding (dynamic resharding
> is disabled now), and many buckets with 4M+ objects have a lot of stale
> index instances:
>
> 634   bucket1
> 651   bucket2
>
> ...
> 1231 bucket17
> 1363 bucket18
>
>


[ceph-users] compacting omap doubles its size

2018-11-28 Thread Tomasz Płaza

Hi,

I have a ceph 12.2.8 cluster on filestore with rather large omap dirs
(the average size is about 150G). Recently slow requests became a problem,
so after some digging I decided to convert the omap from leveldb to rocksdb.
The conversion went fine and the slow request rate dropped to an acceptable
level. Unfortunately, the conversion did not shrink most of the omap dirs,
so I tried online compaction:


Before compaction:  50G   /var/lib/ceph/osd/ceph-0/current/omap/
After compaction:   100G  /var/lib/ceph/osd/ceph-0/current/omap/
Purge and recreate: 1.5G  /var/lib/ceph/osd/ceph-0/current/omap/

Before compaction:  135G  /var/lib/ceph/osd/ceph-5/current/omap/
After compaction:   260G  /var/lib/ceph/osd/ceph-5/current/omap/
Purge and recreate: 2.5G  /var/lib/ceph/osd/ceph-5/current/omap/
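
(For reference: the sizes above look like plain du -sh output for the omap
directory, and a leveldb -> rocksdb conversion of a filestore omap is typically
done offline per OSD with ceph-kvstore-tool's store-copy, roughly as sketched
below. Paths and the key batch size are examples only; check the help output of
ceph-kvstore-tool on your build for the exact syntax.)

  # sketch: offline copy of one OSD's omap from leveldb to rocksdb
  systemctl stop ceph-osd@0
  ceph-kvstore-tool leveldb /var/lib/ceph/osd/ceph-0/current/omap \
    store-copy /var/lib/ceph/osd/ceph-0/current/omap.new 10000 rocksdb
  mv /var/lib/ceph/osd/ceph-0/current/omap /var/lib/ceph/osd/ceph-0/current/omap.old
  mv /var/lib/ceph/osd/ceph-0/current/omap.new /var/lib/ceph/osd/ceph-0/current/omap
  # the OSD must also pick up the new backend (filestore_omap_backend, or however
  # your release records it); keep omap.old until the OSD mounts cleanly
  systemctl start ceph-osd@0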


A compaction that makes the omap bigger seems quite weird and
frustrating to me. Please help.



P.S. My cluster suffered from ongoing index resharding (dynamic resharding is
disabled now), and many buckets with 4M+ objects have a lot of stale index
instances (per-bucket counts below; a way to list and clean them up is
sketched after the counts):


634   bucket1
651   bucket2

...
1231 bucket17
1363 bucket18
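
(The per-bucket counts above can be reproduced, assuming they come from
bucket.instance metadata, with something like the one-liner below; later
Luminous point releases also add radosgw-admin reshard stale-instances
list / rm for finding and removing instances left behind by resharding.)

  # count index instances per bucket (requires jq; assumes counts come from bucket.instance metadata)
  radosgw-admin metadata list bucket.instance | jq -r '.[]' | cut -d: -f1 | sort | uniq -c | sort -n

  # on releases that ship it, stale instances can be listed and removed directly
  radosgw-admin reshard stale-instances list
  radosgw-admin reshard stale-instances rm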

