[ceph-users] very high ram usage by OSDs on Nautilus

2019-10-30 Thread Philippe D'Anjou
Yes, you were right: somehow an unusually high memory target had been set, not sure where it came from. I've set it back to normal now; that should fix it, I guess.
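
For reference, assuming the override lived in the centralized config database (it could also have been set per daemon or in ceph.conf), a minimal sketch of dropping it so the 4GiB default applies again:

# remove the stray override; the compiled-in default takes over
ceph config rm osd osd_memory_target
# or pin it explicitly to the 4GiB default
ceph config set osd osd_memory_target 4294967296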
Thanks
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] very high ram usage by OSDs on Nautilus

2019-10-29 Thread Mark Nelson

Ok, assuming my math is right, you've got ~14G of data in the mempools:

~6.5GB bluestore data
~1.8GB bluestore onode
~5GB bluestore other
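
A quick way to pull those figures straight out of the dump (a sketch using jq against the same dump_mempools output; the field names are taken from the JSON you posted):

ceph daemon osd.NNN dump_mempools | jq '
  .mempool.by_pool.bluestore_cache_data.bytes,
  .mempool.by_pool.bluestore_cache_onode.bytes,
  .mempool.by_pool.bluestore_cache_other.bytes,
  .mempool.total.bytes'

With your numbers those three pools alone account for roughly 13.3GiB of the ~13.6GiB (14,613,649,978-byte) total.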


The rest is other misc stuff. That seems to be pretty much in line with the numbers you posted in your screenshot, i.e. this doesn't appear to be a leak; rather, the bluestore caches are all using significantly more data than is typical given the default 4GB osd_memory_target. You can check what an OSD's memory target is set to via the config show command (I'm using the admin socket here, but you don't have to):


ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep '"osd_memory_target"'

    "osd_memory_target": "4294967296",

Mark


On 10/29/19 8:07 AM, Philippe D'Anjou wrote:
Ok, looking at the mempool dump, what does it tell me? This affects multiple 
OSDs; I'm getting crashes almost every hour.


{
    "mempool": {
    "by_pool": {
    "bloom_filter": {
    "items": 0,
    "bytes": 0
    },
    "bluestore_alloc": {
    "items": 2545349,
    "bytes": 20362792
    },
    "bluestore_cache_data": {
    "items": 28759,
    "bytes": 6972870656
    },
    "bluestore_cache_onode": {
    "items": 2885255,
    "bytes": 1892727280
    },
    "bluestore_cache_other": {
    "items": 202831651,
    "bytes": 5403585971
    },
    "bluestore_fsck": {
    "items": 0,
    "bytes": 0
    },
    "bluestore_txc": {
    "items": 21,
    "bytes": 15792
    },
    "bluestore_writing_deferred": {
    "items": 77,
    "bytes": 7803168
    },
    "bluestore_writing": {
    "items": 4,
    "bytes": 5319827
    },
    "bluefs": {
    "items": 5242,
    "bytes": 175096
    },
    "buffer_anon": {
    "items": 726644,
    "bytes": 193214370
    },
    "buffer_meta": {
    "items": 754360,
    "bytes": 66383680
    },
    "osd": {
    "items": 29,
    "bytes": 377464
    },
    "osd_mapbl": {
    "items": 50,
    "bytes": 3492082
    },
    "osd_pglog": {
    "items": 99011,
    "bytes": 46170592
    },
    "osdmap": {
    "items": 48130,
    "bytes": 1151208
    },
    "osdmap_mapping": {
    "items": 0,
    "bytes": 0
    },
    "pgmap": {
    "items": 0,
    "bytes": 0
    },
    "mds_co": {
    "items": 0,
    "bytes": 0
    },
    "unittest_1": {
    "items": 0,
    "bytes": 0
    },
    "unittest_2": {
    "items": 0,
    "bytes": 0
    }
    },
    "total": {
    "items": 209924582,
    "bytes": 14613649978
    }
    }
}



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




[ceph-users] very high ram usage by OSDs on Nautilus

2019-10-29 Thread Philippe D'Anjou
Ok, looking at the mempool dump, what does it tell me? This affects multiple OSDs; 
I'm getting crashes almost every hour.

{
    "mempool": {
    "by_pool": {
    "bloom_filter": {
    "items": 0,
    "bytes": 0
    },
    "bluestore_alloc": {
    "items": 2545349,
    "bytes": 20362792
    },
    "bluestore_cache_data": {
    "items": 28759,
    "bytes": 6972870656
    },
    "bluestore_cache_onode": {
    "items": 2885255,
    "bytes": 1892727280
    },
    "bluestore_cache_other": {
    "items": 202831651,
    "bytes": 5403585971
    },
    "bluestore_fsck": {
    "items": 0,
    "bytes": 0
    },
    "bluestore_txc": {
    "items": 21,
    "bytes": 15792
    },
    "bluestore_writing_deferred": {
    "items": 77,
    "bytes": 7803168
    },
    "bluestore_writing": {
    "items": 4,
    "bytes": 5319827
    },
    "bluefs": {
    "items": 5242,
    "bytes": 175096
    },
    "buffer_anon": {
    "items": 726644,
    "bytes": 193214370
    },
    "buffer_meta": {
    "items": 754360,
    "bytes": 66383680
    },
    "osd": {
    "items": 29,
    "bytes": 377464
    },
    "osd_mapbl": {
    "items": 50,
    "bytes": 3492082
    },
    "osd_pglog": {
    "items": 99011,
    "bytes": 46170592
    },
    "osdmap": {
    "items": 48130,
    "bytes": 1151208
    },
    "osdmap_mapping": {
    "items": 0,
    "bytes": 0
    },
    "pgmap": {
    "items": 0,
    "bytes": 0
    },
    "mds_co": {
    "items": 0,
    "bytes": 0
    },
    "unittest_1": {
    "items": 0,
    "bytes": 0
    },
    "unittest_2": {
    "items": 0,
    "bytes": 0
    }
    },
    "total": {
    "items": 209924582,
    "bytes": 14613649978
    }
    }
}


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] very high ram usage by OSDs on Nautilus

2019-10-28 Thread Mark Nelson

Hi Philippe,


Have you looked at the mempool stats yet?


ceph daemon osd.NNN dump_mempools


You may also want to look at the heap stats, and potentially enable debug 
level 5 for bluestore to see what the priority cache manager is doing. 
Typically in these cases we end up seeing a ton of memory used by 
something, and the priority cache manager tries to compensate by 
shrinking the caches, but you won't really know until you start looking 
at the various statistics and logging.
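
For example, something along these lines (a sketch; osd.NNN is a placeholder, and these go through the admin socket like the dump_mempools call above):

# tcmalloc heap usage for the daemon
ceph daemon osd.NNN heap stats
# raise bluestore logging so the priority cache manager's decisions show up
ceph daemon osd.NNN config set debug_bluestore 5/5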



Mark


On 10/28/19 2:54 AM, Philippe D'Anjou wrote:

Hi,

we are seeing quite high memory usage by OSDs since Nautilus, 
averaging 10GB per OSD for 10TB HDDs. But I had OOM issues on 128GB 
systems because some single OSD processes used up to 32% of memory.


Here is an example of how they look on average: https://i.imgur.com/kXCtxMe.png

Is that normal? I never saw this on Luminous. Memory leaks?
We're using all default values; the OSDs have no special configuration. The use 
case is CephFS.


v14.2.4 on Ubuntu 18.04 LTS

Seems a bit high?

Thanks for the help

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




[ceph-users] very high ram usage by OSDs on Nautilus

2019-10-28 Thread Philippe D'Anjou
Hi,
we are seeing quite high memory usage by OSDs since Nautilus, averaging 
10GB per OSD for 10TB HDDs. But I had OOM issues on 128GB systems because some 
single OSD processes used up to 32% of memory.
Here is an example of how they look on average: https://i.imgur.com/kXCtxMe.png
Is that normal? I never saw this on Luminous. Memory leaks? We're using all default 
values; the OSDs have no special configuration. The use case is CephFS.

v14.2.4 on Ubuntu 18.04 LTS
Seems a bit high?
Thanks for the help
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com