Hi,
I created a tracker issue: https://tracker.ceph.com/issues/57115
Thanks,
Eugen
Quoting Dhairya Parmar:
Hi there,
This thread contains some really insightful information. Thanks Eugen for
sharing the explanation by the SUSE team. Definitely the doc can be updated
with this, it might help a lot of people indeed.
Could you help create a tracker for this? I'd like to add the info to the docs and
push a PR
Hello Eugen,
thank you very much for the full explanation.
This fixed our cluster, and I am sure it will help a lot of people around
the world, since this problem occurs everywhere.
I think this should be added to the documentation:
Hi,
did you have some success with modifying the mentioned values?
yes, the SUSE team helped identify the issue, I can share the explanation:
---snip---
Every second (the mds_cache_trim_interval config param) the MDS runs a
"cache trim" procedure. One of the steps of this procedure is
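(For reference, a way to inspect the trim-related options mentioned above, assuming a running cluster with the ceph CLI available:)

```shell
# Inspect the current cache-trim settings applied to the MDS daemons.
# These commands only read values; they change nothing.
ceph config get mds mds_cache_trim_interval    # how often the trim procedure runs (seconds)
ceph config get mds mds_cache_trim_threshold   # how much the MDS may trim per run
ceph config get mds mds_cache_memory_limit     # soft memory target for the MDS cache (bytes)
```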
Hello Eugen,
did you have some success with modifying the mentioned values?
Or some others from:
https://docs.ceph.com/en/latest/cephfs/cache-configuration/
Best,
Malte
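(For anyone landing on this thread: the cache-configuration page linked above also documents the client cap-recall options, which are the usual knobs for persistent cache pressure warnings. A sketch only, with illustrative values rather than recommendations:)

```shell
# Illustrative example -- the values below are placeholders, not tuning advice.
# Raising mds_recall_max_caps lets the MDS ask clients to release more
# capabilities per recall; the decay rate throttles how fast recalls repeat.
ceph config set mds mds_recall_max_caps 30000
ceph config set mds mds_recall_max_decay_rate 1.5
```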
On 15.06.22 at 14:12, Eugen Block wrote:
Hi *,
I finally caught some debug logs during the cache pressure warnings.
In the meantime I had doubled the mds_cache_memory_limit to 128 GB,
which decreased the number of cache pressure messages significantly, but
they still appear a few times per day.
Turning on debug logs for a few
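(The two steps described above can be sketched as follows; the byte value is 128 * 2^30 = 137438953472, since mds_cache_memory_limit is specified in bytes. Assumes a running cluster and the ceph CLI.)

```shell
# Raise the MDS cache memory limit to 128 GiB (value is in bytes).
ceph config set mds mds_cache_memory_limit 137438953472

# Temporarily raise MDS debug logging to capture the cache pressure events;
# level 20 is very verbose, so restore the default (1/5) once logs are collected.
ceph config set mds debug_mds 20
ceph config set mds debug_mds 1/5
```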