Hi Adam. Just tried extending the host's memory to 48 GB, and it stopped
throwing the error and set it to 3.something GB instead.
Thank you so much for your time and explanations.
On Tue, Apr 9, 2024 at 9:30 PM Adam King wrote:
> The same experiment with the mds daemons pulling 4GB instead of the
The same experiment with the mds daemons pulling 4GB instead of the 16GB,
and me fixing the starting total memory (I accidentally used the
memory_available_kb instead of memory_total_kb the first time) gives us
DEBUG cephadm.autotune:autotune.py:35 Autotuning OSD memory
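With the corrected field, the starting figure becomes:

32827840 * 1024 * 0.7   # memory_total_kb in bytes, times the 0.7 ratio = 23530995712 (~21.9 GiB)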
Hi Adam
Seems like the mds_cache_memory_limit, both set globally through cephadm
and on the host's mds daemons, is set to approx. 4GB:
root@my-ceph01:/# ceph config get mds mds_cache_memory_limit
4294967296
(i.e. exactly 4 GiB.) Same if I query the individual mds daemons running
on my-ceph01, or any of the other mds daemons.
Hi Adam
Let me just finish tucking in a devilish tyke here and I'll get to it
first thing.
On Tue, Apr 9, 2024 at 6:09 PM Adam King wrote:
> I did end up writing a unit test to see what we calculated here, as well
> as adding a bunch of debug logging (haven't created a PR yet, but probably
>
I did end up writing a unit test to see what we calculated here, as well as
adding a bunch of debug logging (haven't created a PR yet, but probably
will). The total memory was set to (19858056 * 1024 * 0.7) (total memory
in bytes * the autotune target ratio) = 14234254540. What ended up getting
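Checking that multiplication (and 19858056 is the memory_available_kb
figure, i.e. the mix-up corrected in the later message above):

19858056 * 1024 * 0.7   # = 14234254540.8 bytes, ~13.3 GiB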
Hi Adam
No problem, I really appreciate your input :)
The memory stats returned are as follows:
"memory_available_kb": 19858056,
"memory_free_kb": 277480,
"memory_total_kb": 32827840,
On Thu, Apr 4, 2024 at 10:14 PM Adam King wrote:
> Sorry to keep asking for more info, but can I also get
Sorry to keep asking for more info, but can I also get what `cephadm
gather-facts` on that host returns for "memory_total_kb". Might end up
creating a unit test out of this case if we have a calculation bug here.
On Thu, Apr 4, 2024 at 4:05 PM Mads Aasted wrote:
> sorry for the double send,
Sorry for the double send, forgot to hit reply-all so it would appear on
the page.
Hi Adam
If we multiply by 0.7 and work through the previous example from that
number, we would still arrive at roughly 2.5 GB for each OSD, and the host
in question is trying to set it to less than 500 MB.
I have
I missed a step in the calculation. The total_memory_kb I mentioned
earlier is also multiplied by the value of the
mgr/cephadm/autotune_memory_target_ratio before doing the subtractions for
all the daemons. That value defaults to 0.7. That might explain it seeming
like it's getting a value lower
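Spelled out, the steps described here amount to the following (my
paraphrase of the described behavior, not the literal cephadm code; the
names are placeholders):

total = memory_total_kb * 1024 * autotune_memory_target_ratio   # ratio defaults to 0.7
osd_memory_target = (total - sum_of_non_osd_daemon_min_sizes) / num_osds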
Hi Adam.
So, doing the calculations with what you are stating here, I arrive at a
total sum for all the listed processes of roughly 13.3 GB for everything
except the OSDs, leaving well in excess of 4 GB for each OSD.
Besides the mon daemon, which I can tell on my host has a limit of 2 GB,
none of
For context, the autotune takes the value from `cephadm gather-facts` on
the host (the "memory_total_kb" field) and then subtracts from that per
daemon on the host, according to
min_size_by_type = {
    'mds': 4096 * 1048576,
    'mgr': 4096 * 1048576,
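The mapping continues for other daemon types. Here is a minimal runnable
sketch of the whole subtraction; the 'mon' and 'crash' values and the
default are from my reading of cephadm's autotune.py and may differ
between releases, and the daemon inventory at the bottom is purely
hypothetical:

min_size_by_type = {
    'mds': 4096 * 1048576,
    'mgr': 4096 * 1048576,
    'mon': 1024 * 1048576,    # assumed from the cephadm source
    'crash': 128 * 1048576,   # assumed from the cephadm source
}
default_size = 1024 * 1048576  # assumed fallback for unlisted daemon types

def per_osd_target(memory_total_kb, daemon_types, num_osds, ratio=0.7):
    # Scale total memory by the autotune target ratio, subtract a minimum
    # footprint for every non-OSD daemon, and split the rest across OSDs.
    total = memory_total_kb * 1024 * ratio
    for d in daemon_types:
        total -= min_size_by_type.get(d, default_size)
    return int(total // num_osds)

# Hypothetical host: 4 mds, 1 mon, 1 mgr, 1 crash, and 4 OSDs. With the
# thread's memory_total_kb this lands well under 500 MB per OSD, which
# matches the symptom being discussed.
print(per_osd_target(32827840, ['mds'] * 4 + ['mon', 'mgr', 'crash'], 4))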