Hi,

I believe this is a function of the OS, not the application.  In modern OS
implementations (Linux variants, anyway), for performance reasons, memory
allocated from the kernel is typically not returned to it until the process
exits; the allocator holds onto freed memory for reuse.

The performance hit of context switching into the kernel to allocate more
memory is too expensive overall when the memory would probably be allocated
again anyway.
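
One way to verify this is to compare the RSS the kernel reports for the
process against Ignite's own allocation metric before and after clear() /
destroy().  A minimal, Linux-only sketch (RssProbe is my own illustration,
not an Ignite API):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class RssProbe {
        /** Resident set size of the current process, in kilobytes. */
        static long rssKb() throws IOException {
            for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
                if (line.startsWith("VmRSS:")) {
                    return Long.parseLong(line.replaceAll("\\D+", ""));
                }
            }
            return -1;
        }

        public static void main(String[] args) throws IOException {
            // Sample this before and after clearing/destroying the cache.
            System.out.println("VmRSS = " + rssKb() + " kB");
        }
    }

If VmRSS stays flat while TotalAllocatedSize drops, the memory is being held
below the application (by the allocator or the JVM) rather than leaked by it.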

Did you recently patch or upgrade the OS that the Ignite pods are running
on?

I'm not sure why Persistence would make a difference; my suspicion is that
it changes the application's memory usage pattern as the OS sees it.
When Persistence is enabled, Ignite's memory acts as just a caching layer,
rewriting data to the same memory blocks before syncing to disk.  Whereas,
when Persistence is not enabled, that memory is the data store itself.
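
For reference, that switch is a per-data-region setting.  A minimal sketch
of where it lives in Ignite 2.x (region name and size are assumptions, not
recommendations):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cluster.ClusterState;
    import org.apache.ignite.configuration.DataRegionConfiguration;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class PersistenceToggle {
        public static void main(String[] args) {
            DataStorageConfiguration storageCfg = new DataStorageConfiguration();
            storageCfg.setDefaultDataRegionConfiguration(
                new DataRegionConfiguration()
                    .setName("default")                // assumed region name
                    .setMaxSize(512L * 1024 * 1024)    // 512 MB region, assumed size
                    .setPersistenceEnabled(true));     // false = pure in-memory store

            IgniteConfiguration cfg = new IgniteConfiguration()
                .setDataStorageConfiguration(storageCfg);

            Ignite ignite = Ignition.start(cfg);

            // With persistence enabled the cluster starts inactive and must be
            // activated before caches can be used.
            ignite.cluster().state(ClusterState.ACTIVE);
        }
    }

Flipping setPersistenceEnabled(false) on the same region gives the pure
in-memory behavior you are comparing against.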

As you probably know, if you add spec.resources.requests.memory to your pod
specs, k8s will schedule the pods onto the large nodes that have enough
memory available.
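
Something like this fragment (pod name, image tag, and sizes are
placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: ignite-node
    spec:
      containers:
        - name: ignite
          image: apacheignite/ignite:2.14.0
          resources:
            requests:
              memory: "4Gi"   # scheduler places the pod only where this much is free
            limits:
              memory: "6Gi"   # optional hard cap; exceeding it OOM-kills the container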

Greg


On Fri, Dec 23, 2022 at 2:46 AM Humphrey Lopez <[email protected]> wrote:

> Hello,
>
> I've been doing some tests with Ignite 2.14: creating one cache, filling it
> with data, then clearing the cache, and eventually destroying the cache.
> I'm using the OpenCensus metrics to get the statistics:
> - TotalAllocatedSize
> When I test it with persistence enabled, I see that TotalAllocatedSize
> drops back to 0 when I invoke the destroy cache method.
> But when I do the same with persistence disabled, the TotalAllocatedSize
> doesn't drop back.
>
> I was hoping that the memory would drop when clearing the cache, or at
> least when I call destroy().  My question is: why is this memory given back
> when Persistence is enabled but not when Persistence is disabled?  I was
> hoping to see the same effect in both cases.
>
> I can create a reproducer for this, but maybe it's a known thing.
>
> Humphrey
>
