On Fri, Jun 7, 2019 at 11:35 PM Sergei Genchev <sgenc...@gmail.com> wrote:

>  Hi,
>  My OSD processes are constantly getting killed by the OOM killer. My
> cluster has 5 servers, each with 18 spinning disks, running 18 OSD
> daemons in 48 GB of memory.
>  I was trying to limit the OSD cache according to
> http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/
>
> [osd]
> bluestore_cache_size_ssd = 1G
> bluestore_cache_size_hdd = 768M
>

These options are no longer used; set osd_memory_target (default: 4 GB)
instead.
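
With the 4 GB default, 18 OSDs can grow toward ~72 GB on a 48 GB box,
which is exactly the kind of pressure that gets them OOM-killed.

As a rough sketch (2 GB is just an illustrative value for 18 OSDs in
48 GB; pick whatever leaves headroom for the OS), in ceph.conf:

[osd]
# per-OSD memory target in bytes (2 GB here); this covers the caches
# plus the rest of the daemon's footprint, and it is a target rather
# than a hard cap, so leave some headroom
osd_memory_target = 2147483648

or at runtime via the config database:

ceph config set osd osd_memory_target 2147483648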

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


>
> Yet my OSDs are using far more memory than that; I have seen usage as
> high as 3.2 GB:
>
> KiB Mem : 47877604 total,   310172 free, 45532752 used,  2034680 buff/cache
> KiB Swap:  2097148 total,        0 free,  2097148 used.   950224 avail Mem
>
>     PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
>  352516 ceph      20   0 3962504   2.8g   4164 S   2.3  6.1   4:22.98 ceph-osd
>  350771 ceph      20   0 3668248   2.7g   4724 S   3.0  6.0   3:56.76 ceph-osd
>  352777 ceph      20   0 3659204   2.7g   4672 S   1.7  5.9   4:10.52 ceph-osd
>  353578 ceph      20   0 3589484   2.6g   4808 S   4.6  5.8   3:37.54 ceph-osd
>  352280 ceph      20   0 3577104   2.6g   4704 S   5.9  5.7   3:44.58 ceph-osd
>  350933 ceph      20   0 3421168   2.5g   4140 S   2.6  5.4   3:38.13 ceph-osd
>  353678 ceph      20   0 3368664   2.4g   4804 S   4.0  5.3  12:47.12 ceph-osd
>  350665 ceph      20   0 3364780   2.4g   4716 S   2.6  5.3   4:23.44 ceph-osd
>  353101 ceph      20   0 3304288   2.4g   4676 S   4.3  5.2   3:16.53 ceph-osd
>  .......
>
>
>  Is there any way for me to limit how much memory an OSD uses?
> Thank you!
>
> ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic
> (stable)
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
