Hi Nicola,
I wrote the autotuning code in the OSD. Janne's response is absolutely
correct. Right now we just control the size of the caches in the OSD
and rocksdb to try to keep the OSD close to a certain memory limit. By
default this works down to around 2GB, but the smaller the limit, the
harder it is for the autotuner to keep the OSD within it.
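A rough illustration of that sizing constraint (all numbers below are hypothetical examples, not values from this cluster): split the usable host RAM across the OSD daemons and compare the result against the ~2GB floor mentioned above.

```shell
# Hypothetical sizing sketch: divide usable host RAM across OSDs and
# compare against the ~2 GB floor below which autotuning struggles.
host_ram_mb=4096     # total RAM on the host (example value)
reserve_mb=1024      # headroom for the OS and other daemons (assumption)
num_osds=2           # OSD daemons on this host (example value)
floor_mb=2048        # autotuning works by default down to ~2 GB

per_osd_mb=$(( (host_ram_mb - reserve_mb) / num_osds ))
echo "per-OSD budget: ${per_osd_mb} MiB"

if [ "$per_osd_mb" -lt "$floor_mb" ]; then
  echo "warning: ${per_osd_mb} MiB is below the ~${floor_mb} MiB autotuning floor"
fi
```

On a 4 GB host with two OSDs this comes out to 1536 MiB per OSD, well under the floor, which is consistent with the advice in this thread to add RAM.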
Thank you for the detailed answer. I'll buy more RAM for the machine.
Nicola
On 03/10/22 10:02, Janne Johansson wrote:
> There is definitely something wrong in how my cluster manages
> osd_memory_target. For example, this is the situation for OSD 16:
> The memory limit seems to be correctly set (I disabled the memory
> autotune on the host, set the limit manually with --force and rebooted
> the host) but
Hi,
384MB is far too low for a Ceph OSD. The warning is telling you that
it's below the min.
Cheers, Dan
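For reference, the steps Nicola described earlier (disabling the cephadm memory autotuning on a host and forcing a below-minimum target) would look roughly like the following under cephadm. The host name is a placeholder, and this is a sketch of the procedure rather than a recommendation, since the thread's conclusion is that such small targets don't work well:

```shell
# Sketch of the steps described above (host name is a placeholder).
# Tell cephadm not to autotune memory targets for OSDs on this host:
ceph orch host label add myhost _no_autotune_memory

# Set a 384 MiB target; --force is needed because Ceph otherwise
# rejects values below the built-in minimum for osd_memory_target:
ceph config set osd.16 osd_memory_target 402653184 --force
```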
On Sun, Oct 2, 2022 at 11:10 AM Nicola Mori wrote:
>
> Dear Ceph users,
>
> I put together a cluster by reusing some (very) old machines with low
> amounts of RAM, as low as 4 GB for the worst case. I'd need to set
There is definitely something wrong in how my cluster manages
osd_memory_target. For example, this is the situation for OSD 16:
# ceph config get osd.16 osd_memory_target
280584038
# ceph orch ps ogion --daemon_type osd --daemon_id 16
NAME    HOST   PORTS  STATUS  REFRESHED  AGE  MEM USE
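To make the configured value easier to compare with the MEM USE column, the byte count can be converted to MiB with plain shell arithmetic (shown here with the value from the output above):

```shell
# Convert the configured osd_memory_target (bytes) to MiB for
# comparison with the MEM USE column reported by `ceph orch ps`.
target_bytes=280584038
target_mib=$(( target_bytes / 1024 / 1024 ))
echo "osd_memory_target = ${target_mib} MiB"   # about 267 MiB
```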
I attach two files with the requested info. Some more details: the
cluster has been deployed with cephadm using a Ceph 17.2.3 image tweaked
with a small addition (a custom module for alerts). I guess the
configuration is almost vanilla, since I changed very few things.
Thanks for your help.
Can you share the output of `ceph daemon osd.8 config show` and `ceph config dump`?
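The two requested commands can be narrowed to the memory-related settings with grep; note that the `ceph daemon` form talks to the admin socket, so it must be run on the host where that OSD runs:

```shell
# Run on the host where osd.8 lives: the running daemon's own view
# of its memory settings, via the admin socket.
ceph daemon osd.8 config show | grep -i memory_target

# Run from any node with admin access: cluster-wide config overrides.
ceph config dump | grep -i osd_memory
```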
On Sun, Oct 2, 2022 at 5:10 AM Nicola Mori wrote:
> Dear Ceph users,
>
> I put together a cluster by reusing some (very) old machines with low
> amounts of RAM, as low as 4 GB for the worst case. I'd need to set
>