Yeah, that is also my thread. It was created before I lowered the cache
size from 512MB to 8MB. I thought maybe it was my fault and I had made a
misconfiguration, so I ignored the problem until now.
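
For reference, this is roughly how the limit was changed (a sketch;
mds.a is only a placeholder for the daemon name, and the value is in
bytes):

    # at runtime, through the admin socket
    ceph daemon mds.a config set mds_cache_memory_limit 8388608   # 8MB

    # or persistently in ceph.conf, in the [mds] section
    mds_cache_memory_limit = 8388608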

Greetings!

On Tue, Jul 24, 2018, 1:00 Gregory Farnum <gfar...@redhat.com> wrote:

> On Mon, Jul 23, 2018 at 11:08 AM Patrick Donnelly <pdonn...@redhat.com>
> wrote:
>
>> On Mon, Jul 23, 2018 at 5:48 AM, Daniel Carrasco <d.carra...@i2tic.com>
>> wrote:
>> > Hi, thanks for your response.
>> >
>> > There are about 6 clients, and 4 of them are on standby most of the
>> > time. Only two are active servers serving the webpage. We also have
>> > a Varnish in front, so they don't get all the load (below 30% in
>> > PHP is not much). About the MDS cache, I now have
>> > mds_cache_memory_limit at 8MB.
>>
>> What! Please post `ceph daemon mds.<name> config diff`, `... perf
>> dump`, and `... dump_mempools` from the server the active MDS is on.
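>>
>> For example (mds.a below is only a placeholder for the MDS name):
>>
>>     ceph daemon mds.a config diff      # options changed from defaults
>>     ceph daemon mds.a perf dump        # performance counters
>>     ceph daemon mds.a dump_mempools    # per-pool memory accounting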
>>
>> > I've also tested
>> > 512MB, but the CPU usage is the same and the MDS RAM usage grows up
>> > to 15GB (on a 16GB server it starts to swap and everything fails).
>> > With 8MB, at least the memory usage stays stable below 6GB (right
>> > now it is using about 1GB of RAM).
>>
>> We've seen reports of possible memory leaks before, and the potential
>> fixes for those were in 12.2.6. How fast does your MDS reach 15GB?
>> Your MDS cache size should be configured to 1-8GB (depending on your
>> preference), so it's disturbing to see you set it so low.
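>>
>> For example (value illustrative, in bytes; 4GB is a reasonable
>> middle-of-the-road choice):
>>
>>     # ceph.conf, [mds] section
>>     mds_cache_memory_limit = 4294967296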
>>
>
> See also the thread "[ceph-users] Fwd: MDS memory usage is very high",
> which had more discussion of that. The MDS daemon seemingly had 9.5GB of
> allocated RSS but only believed 489MB was in use for the cache...
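>
> To compare those two numbers yourself, a quick sketch (mds.a again a
> placeholder for the daemon name):
>
>     ps -o rss= -p "$(pidof ceph-mds)"   # resident set size, in KB
>     ceph daemon mds.a dump_mempools     # check the mds_co pool's bytes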
> -Greg
>
>
>>
>> --
>> Patrick Donnelly
>>
>
