Hi All,
I am testing Ceph Luminous (12.2.1-249-g42172a4,
42172a443183ffe6b36e85770e53fe678db293bf) on ARM servers.
Each ARM server has a two-core 1.4 GHz CPU and 2 GB of RAM, and I am running 2
OSDs per server on 2x8 TB (or 2x10 TB) HDDs.
Now I am constantly hitting OOM problems. I have tried upgrading Ceph (to pick
up the OSD memory leak fixes) and lowering the BlueStore cache settings. The
OOM problems got better but still occur regularly.

I am hoping someone can give me some advice on the following questions.

Is this hardware configuration simply too small to run Ceph, or is there some
tuning I can do to solve this problem (even at the cost of some performance, if
that is what it takes to avoid the OOM kills)?

Is it a good idea to use RAID0 to combine the two HDDs into one device, so that
I only need to run one OSD per server and can save some memory?
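For context, this is roughly what I mean (a sketch only, not something I have run yet; /dev/sdb and /dev/sdc are placeholders for my two data disks, and creating the array destroys any data on them):

```shell
# Combine the two data disks into one striped md device (RAID0).
# WARNING: destructive. Device names below are placeholders.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# Then deploy a single BlueStore OSD on the combined device
# instead of one OSD per disk.
ceph-volume lvm create --bluestore --data /dev/md0
```

My hope is that one OSD process on the striped device would need roughly half the memory of two OSDs, at the cost of losing both disks' data if either fails.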

How is an OSD's memory usage related to the size of its HDD?




PS: my ceph.conf BlueStore cache settings:
[osd]
        bluestore_cache_size = 104857600       # 100 MB BlueStore cache per OSD
        bluestore_cache_kv_max = 67108864      # cap RocksDB's share at 64 MB
        osd client message size cap = 67108864 # max 64 MB of in-flight client data
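On top of the above, these are additional knobs I am considering trying next (a sketch only; the option names exist as far as I know, but the values are my guesses and I would appreciate corrections):

```ini
[osd]
        # limit the number of in-flight client messages per OSD
        # (value below is a guess, not tested)
        osd_client_message_cap = 64
        # keep fewer OSD maps cached in memory (value is a guess)
        osd_map_cache_size = 20
```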



2017-12-10



lin.yunfan
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
