Yes, the system can thrash if the data consists only of unique keys being
added and the only parameter controlling cache size is the time-based
window. In that case the cache can grow indefinitely, because the TTL
thread cannot keep up with the load. If, in addition to TTL, you also
configure an eviction policy, e.g. FifoEvictionPolicy, then there is no
thrashing.
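For illustration, a minimal configuration sketch along these lines (a
fragment, not a complete program; the cache name, TTL of 30 seconds and cap
of 1M entries are made-up values, and it assumes Ignite's FifoEvictionPolicy
plus the standard JCache CreatedExpiryPolicy):

```java
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.cache.eviction.fifo.FifoEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<String, byte[]> cfg = new CacheConfiguration<>("streamCache");

// TTL alone: entries expire 30s after creation, but only as fast as
// the TTL thread can process them.
cfg.setExpiryPolicyFactory(
    CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 30)));

// Hard cap: FIFO eviction bounds the cache at 1M entries regardless
// of any TTL backlog, so unique-key streaming cannot grow it unboundedly.
FifoEvictionPolicy<String, byte[]> evictPlc = new FifoEvictionPolicy<>();
evictPlc.setMaxSize(1_000_000);
cfg.setEvictionPolicy(evictPlc);
```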

That's why I suggested that TTL expiration also be implemented through
eviction policies.
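A rough sketch of that idea (hypothetical code, not Ignite's actual
implementation): with a fixed TTL, insertion order equals expire-time
order, so the writing threads themselves can drain the expired head of the
queue on every put, the same way eviction policies amortize eviction work
across the threads that add entries:

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

/** Hypothetical sketch: TTL expiry piggy-backed on writer threads. */
class AmortizedTtlCache<K, V> {
    private static final class Entry<K> {
        final K key;
        final long expireAt;
        Entry(K key, long expireAt) { this.key = key; this.expireAt = expireAt; }
    }

    private final long ttlMillis;
    private final Map<K, V> data = new ConcurrentHashMap<>();

    // Entries are appended in insertion order; with a constant TTL this is
    // also expire-time order, so only the head ever needs checking.
    private final Queue<Entry<K>> expiryQueue = new ConcurrentLinkedQueue<>();

    AmortizedTtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    void put(K key, V val) {
        data.put(key, val);
        expiryQueue.add(new Entry<>(key, System.currentTimeMillis() + ttlMillis));
        drainExpired(); // the writing thread does the expiry work itself
    }

    V get(K key) {
        drainExpired();
        return data.get(key);
    }

    int size() { return data.size(); }

    private void drainExpired() {
        long now = System.currentTimeMillis();
        Entry<K> head;
        while ((head = expiryQueue.peek()) != null && head.expireAt <= now) {
            // remove() is identity-based here; only one thread wins the removal.
            if (expiryQueue.remove(head))
                data.remove(head.key);
        }
    }
}
```

Because expiry is done by the same threads that generate the load, the work
scales with the ingestion rate instead of being bottlenecked on a single
background thread.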

D.

On Sun, Apr 12, 2015 at 10:41 PM, Atri Sharma <[email protected]> wrote:

> Out of curiosity, did we never face thrashing with this approach? (assuming
> wild workloads that hit the same page again and again)
>
> On Mon, Apr 13, 2015 at 7:35 AM, Dmitriy Setrakyan <
> [email protected]>
> wrote:
>
> > Guys,
> >
> > I have been playing with TTL-based expirations for streaming and am
> > noticing that often the TTL thread cannot cope with the load a
> > data-streamer can generate.
> >
> > Our current approach for TTL is implemented with a thread that keeps all
> > cache entries in sorted-by-expire-time order and scans this ordered list
> > every time something expires. The scan is optimized, such that once an
> > entry that does not need to expire yet is touched, the thread goes into
> > wait mode until the TTL for that entry expires. However, even with this
> > optimization, the thread cannot expire entries fast enough.
> >
> > I think if we handle TTL expirations the same way we handle evictions,
> > through eviction policies, this problem would go away, because every
> > thread that adds entries to the eviction queue is also responsible for
> > evicting the excess entries.
> >
> > I have created a parent ticket for this issue:
> > https://issues.apache.org/jira/browse/IGNITE-729
> >
> > Would be nice to implement them in the next couple of weeks.
> >
> > D.
> >
>
>
>
> --
> Regards,
>
> Atri
> *l'apprenant*
>
