Atri, please take a look at
org.apache.ignite.internal.processors.cache.GridCacheEvictionManager. Pay
attention to
org.apache.ignite.internal.processors.cache.GridCacheEvictionManager#notifyPolicy.

Thanks!

--Yakov

2015-04-14 20:34 GMT+03:00 Atri Sharma <[email protected]>:

> I agree with you on the thrashing point; rethinking the idea, I agree.
> However, I am imagining a lot of threads blocked on data eviction, which
> might not be good for scalable performance (this is an unrelated problem,
> but it might be relevant).
>
> Can you please point me to the code which handles this part?
>
> I will add a comment.
> On 14 Apr 2015 22:53, "Dmitriy Setrakyan" <[email protected]> wrote:
>
> > Thanks Atri,
> >
> > Interesting algorithm; it reminds me of LIRS in some way. I still,
> > however, don't see how it fixes thrashing. The clock-sweep thread can
> > still fall behind the loading threads, and we will still end up with
> > the same problem.
> >
> > In my opinion, the only way to handle it is to have the loading threads
> > themselves do the cleanup. This way they won't move forward until the
> > data is evicted. The algorithm you suggested may be used in some ways by
> > the cleanup implementation, though. Can you please add a comment in
> > IGNITE-729?
> >
> > D.
> >
> > > On Mon, Apr 13, 2015 at 3:12 AM, Atri Sharma <[email protected]>
> > > wrote:
> >
> > > Clock-sweep is essentially a second-chance algorithm. A counter is
> > > attached to each buffer page and initialized to a suitable value
> > > (take 1, for example). When an eviction needs to happen, a scan is
> > > done through the cache looking for any page with a count of 0. The
> > > essential part is that while the scan happens, the counter for *each*
> > > page visited is decremented by one. The first page found with a
> > > counter of 0 is evicted.
> > >
> > > Each time a page is hit, its counter is incremented. We can put an
> > > upper limit on the counter.
> > >
> > > The idea is that popular pages are kept in the cache, and there is no
> > > need to sort by anything or manage expiration explicitly. All we need
> > > to do is scan and decrement, and evict the first page with a count
> > > of 0.
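
To make the sweep concrete, here is a minimal single-threaded sketch of the
clock-sweep idea described above. All names are illustrative; this is not
Ignite's eviction code:

```java
import java.util.ArrayList;
import java.util.List;

/** Toy clock-sweep (second-chance) cache sketch; names are illustrative. */
class ClockSweep<K> {
    private static final int MAX_COUNT = 3; // upper limit on the hit counter

    private final List<K> keys = new ArrayList<>();
    private final List<Integer> counts = new ArrayList<>();
    private final int capacity;
    private int hand; // current scan ("clock hand") position

    ClockSweep(int capacity) { this.capacity = capacity; }

    /** Record a hit: bump the entry's counter, capped at MAX_COUNT. */
    void touch(K key) {
        int i = keys.indexOf(key);
        if (i >= 0)
            counts.set(i, Math.min(counts.get(i) + 1, MAX_COUNT));
    }

    /** Add a key, evicting via clock-sweep if full; returns the evicted key or null. */
    K add(K key) {
        K evicted = null;
        if (keys.size() >= capacity)
            evicted = evictOne();
        keys.add(key);
        counts.add(1); // new entries start with one "life"
        return evicted;
    }

    /** Sweep: decrement each counter visited; evict the first entry at 0. */
    private K evictOne() {
        while (true) {
            hand %= keys.size();
            int c = counts.get(hand);
            if (c == 0) {
                counts.remove(hand);
                return keys.remove(hand);
            }
            counts.set(hand, c - 1); // second chance: decrement and move on
            hand++;
        }
    }
}
```

With a capacity of two, touching "A" makes it survive the sweep, so adding a
third key evicts the less popular "B".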
> > >
> > > On Mon, Apr 13, 2015 at 3:33 PM, Dmitriy Setrakyan
> > > <[email protected]> wrote:
> > >
> > > > On Mon, Apr 13, 2015 at 2:52 AM, Atri Sharma <[email protected]>
> > > > wrote:
> > > >
> > > > > I would think that this problem might also be mitigated a bit if
> > > > > we used a different algorithm than simple LRU (I assume that is
> > > > > what we use). Something like clock-sweep...
> > > > >
> > > >
> > > > I am not too familiar with clock-sweep. Is it something like this?
> > > >
> > > > https://www.usenix.org/legacy/event/usenix05/tech/general/full_papers/jiang/jiang_html/html.html
> > > >
> > > > >
> > > > > Thoughts?
> > > > >
> > > > > On Mon, Apr 13, 2015 at 3:16 PM, Dmitriy Setrakyan
> > > > > <[email protected]> wrote:
> > > > >
> > > > > > Yes, it is possible for the system to thrash if the data is
> > > > > > composed only of unique keys being added and the only parameter
> > > > > > controlling cache size is the time-based window. In this case the
> > > > > > cache can grow indefinitely, because the TTL thread cannot cope
> > > > > > with the load. If, in addition to TTL, you also configure an
> > > > > > eviction policy, e.g. FifoEvictionPolicy, then there is no
> > > > > > thrashing.
> > > > > >
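
As a concrete illustration of the combination described above, here is a
configuration sketch. It assumes the Ignite 1.x-era API, where
FifoEvictionPolicy lives in org.apache.ignite.cache.eviction.fifo; the cache
name and maximum size are made up for the example:

```java
import org.apache.ignite.cache.eviction.fifo.FifoEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

// Hypothetical cache name; any streaming cache would do.
CacheConfiguration<String, String> ccfg = new CacheConfiguration<>("streamCache");

// Cap the cache size so that loading threads evict excess entries
// themselves, instead of relying solely on the TTL thread to keep up.
FifoEvictionPolicy<String, String> plc = new FifoEvictionPolicy<>();
plc.setMaxSize(1_000_000); // illustrative bound
ccfg.setEvictionPolicy(plc);
```

With both a TTL window and a bounded eviction policy configured, the cache
size stays capped even when unique keys arrive faster than the TTL thread
can expire them.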
> > > > > > That's why I suggested that TTL expiration also be implemented
> > > > > > through eviction policies.
> > > > > >
> > > > > > D.
> > > > > >
> > > > > > On Sun, Apr 12, 2015 at 10:41 PM, Atri Sharma
> > > > > > <[email protected]> wrote:
> > > > > >
> > > > > > > Out of curiosity, did we never face thrashing with this
> > > > > > > approach? (Assuming wild workloads that hit the same page
> > > > > > > again and again.)
> > > > > > >
> > > > > > > On Mon, Apr 13, 2015 at 7:35 AM, Dmitriy Setrakyan
> > > > > > > <[email protected]> wrote:
> > > > > > >
> > > > > > > > Guys,
> > > > > > > >
> > > > > > > > I have been playing with TTL-based expirations for streaming
> > > > > > > > and am noticing that often the TTL thread cannot cope with
> > > > > > > > the load a data-streamer can generate.
> > > > > > > >
> > > > > > > > Our current approach to TTL is implemented with a thread that
> > > > > > > > keeps all cache entries in sorted-by-expire-time order and
> > > > > > > > scans this ordered list every time something expires. The
> > > > > > > > scan is optimized such that once an entry that does not need
> > > > > > > > to expire yet is touched, the thread goes into wait mode
> > > > > > > > until the TTL for that entry expires. However, even with this
> > > > > > > > optimization, the thread cannot expire entries fast enough.
> > > > > > > >
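
The sorted-by-expire-time approach described above can be sketched
synchronously as follows. This is an illustrative toy, not Ignite's actual
TTL manager, and all names are made up:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

/** Toy TTL queue: entries ordered by expire time; a scan stops at the
 *  first entry that is not yet due (illustrative sketch only). */
class TtlQueue<K> {
    private static final class Entry<K> {
        final K key;
        final long expireAt;
        Entry(K key, long expireAt) { this.key = key; this.expireAt = expireAt; }
    }

    private final PriorityQueue<Entry<K>> q =
        new PriorityQueue<>(Comparator.comparingLong((Entry<K> e) -> e.expireAt));

    void add(K key, long expireAt) { q.add(new Entry<>(key, expireAt)); }

    /** Remove and return all entries due at time 'now'; the scan stops at
     *  the first entry whose TTL has not expired yet. */
    List<K> expireDue(long now) {
        List<K> expired = new ArrayList<>();
        while (!q.isEmpty() && q.peek().expireAt <= now)
            expired.add(q.poll().key);
        return expired;
    }

    /** How long the TTL thread could sleep before the next entry is due. */
    long nextWakeDelay(long now) {
        return q.isEmpty() ? Long.MAX_VALUE : Math.max(0, q.peek().expireAt - now);
    }
}
```

The thrashing problem in the thread above is visible here: a single consumer
of this queue can fall behind many producers calling add(), which is why the
follow-up suggests making the loading threads themselves do the expiring.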
> > > > > > > > I think that if we handle TTL expirations the same way we
> > > > > > > > handle evictions, through eviction policies, this problem
> > > > > > > > would go away, because the threads that add entries to the
> > > > > > > > eviction queue are also responsible for evicting the excess
> > > > > > > > entries.
> > > > > > > >
> > > > > > > > I have created a parent ticket for this issue:
> > > > > > > > https://issues.apache.org/jira/browse/IGNITE-729
> > > > > > > >
> > > > > > > > It would be nice to implement this in the next couple of
> > > > > > > > weeks.
> > > > > > > >
> > > > > > > > D.
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > Regards,
> > > > > > >
> > > > > > > Atri
> > > > > > > *l'apprenant*
