> On May 25, 2017, at 5:17 AM, Ivan V. <iveselovs...@gridgain.com> wrote:
> 
> I think, we should answer the following questions.
> 1) do the interface org.apache.ignite.cache.eviction.EvictionPolicy and
> *all* its implementations now de facto become deprecated? (I mean, the
> question appears to be wider than just the IGFS eviction policy).

Now this interface is used for the “on-heap caching” scenario:
https://apacheignite.readme.io/docs/page-memory#on-heap-caching

Here is where we clarify what this interface is used for:
https://apacheignite.readme.io/docs/evictions#on-heap-cache-entries-based-eviction

> 2) is on-heap data access faster than off-heap? If yes, how large is the
> on-heap vs. off-heap difference?

It’s comparable. Hope the guys can share precise numbers.

In general, on-heap caching is for scenarios where you do a lot of cache 
reads on server nodes that work with cache entries in binary form or that 
trigger deserialization of cache entries. For instance, this might happen when 
a distributed computation or a deployed service gets some data from caches for 
further processing.
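
As an illustration only (the cache name "data" and the key are hypothetical, not taken from IGFS internals), such a read pattern might look like a compute job collocated with the data:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class CollocatedReadSketch {
    public static void main(String[] args) {
        // Hypothetical cache name and key, just to illustrate the read pattern.
        String cacheName = "data";
        long key = 42L;

        try (Ignite ignite = Ignition.start()) {
            // Populate the cache so the collocated read below has something to fetch.
            ignite.<Long, byte[]>getOrCreateCache(cacheName).put(key, new byte[128]);

            // Send the job to the server node that owns the key, so the read
            // hits that node's local cache (and its on-heap layer, if enabled).
            ignite.compute().affinityRun(cacheName, key, () -> {
                byte[] block = Ignition.localIgnite().<Long, byte[]>cache(cacheName).get(key);
                System.out.println("Read " + (block == null ? 0 : block.length) + " bytes locally");
            });
        }
    }
}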

> 3) does the ability to evict from the on-heap layer make any sense at all if
> the same data is anyway backed by the off-heap cache layer, which has
> different eviction policies?

As soon as you enable on-heap caching, you should set up an eviction policy. 
Otherwise, the Java heap can grow endlessly.
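
For reference, a minimal sketch of such a configuration could look like the following (the cache name, value type and the 100k cap are placeholders, not recommendations):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class OnHeapCacheSketch {
    public static void main(String[] args) {
        // Placeholder cache name and types.
        CacheConfiguration<Long, byte[]> cacheCfg = new CacheConfiguration<>("data");

        // Enable the on-heap layer on top of the off-heap page memory.
        cacheCfg.setOnheapCacheEnabled(true);

        // Cap the on-heap layer with an eviction policy; without one the
        // Java heap can grow unbounded.
        LruEvictionPolicy<Long, byte[]> plc = new LruEvictionPolicy<>();
        plc.setMaxSize(100_000);
        cacheCfg.setEvictionPolicy(plc);

        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache(cacheCfg).put(1L, new byte[128]);
        }
    }
}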

—
Denis

> 
> 
> On Thu, May 25, 2017 at 2:42 PM, Alexey Goncharuk <alexey.goncha...@gmail.com> wrote:
> 
>> Guys, I think it makes little or no sense to keep blocks on-heap. If I
>> understand correctly, the eviction policy was used in combined modes where
>> partial data eviction is allowed. To make this work in the new PageMemory
>> architecture, we only need to make sure that the IGFS block size equals the
>> data page free space size. In this case, our standard offheap eviction
>> policy will be semantically equal to the old per-block eviction policy. No
>> need to go on-heap at all.
>> 
>> Thoughts?
>> 
>> 2017-05-25 4:21 GMT+03:00 Denis Magda <dma...@apache.org>:
>> 
>>> Hi Ivan,
>>> 
>>> I’m for this approach
>>> 
>>>> 2) leave it as is, but explain in javadocs that it only works for the
>>>> on-heap layer and does not in fact evict blocks from the underlying
>>>> offheap layer.
>>> 
>>> because it should be feasible to enable on-heap caching for IGFS, right?
>>> Using the memory policies. So, I would reimplement the tests with
>>> on-heap caching enabled, checking that data is pushed out of the
>>> heap.
>>> 
>>> —
>>> Denis
>>> 
>>>> On May 24, 2017, at 9:57 AM, Ivan V. <iveselovs...@gridgain.com> wrote:
>>>> 
>>>> Hi, colleagues,
>>>> 
>>>> as Ignite caches moved to paged offheap memory, the
>>>> IgfsPerBlockLruEvictionPolicy does not seem to work as expected any more,
>>>> because
>>>> 1) interface org.apache.ignite.cache.eviction.EvictionPolicy now only
>>>> defines "eviction from on-heap", not a real eviction, because each
>>>> on-heap cache is now accompanied by an underlying off-heap cache. It can
>>>> become "real eviction" only for "on-heap-only" mode caches, once they get
>>>> implemented.
>>>> 2) for off-heap eviction an entire page is evicted, not a specific k-v
>>>> pair, and the LRU policy is not exactly LRU any more (see
>>>> org.apache.ignite.configuration.DataPageEvictionMode#RANDOM_LRU). So, it
>>>> appears to be impossible to re-implement this policy for the off-heap
>>>> layer.
>>>> 
>>>> Thus, IgfsPerBlockLruEvictionPolicy is not quite valid now, and some of
>>>> the corresponding tests fail
>>>> (org.apache.ignite.internal.processors.igfs.IgfsCachePerBlockLruEvictionPolicySelfTest#testDataSizeEviction,
>>>> org.apache.ignite.internal.processors.igfs.IgfsCachePerBlockLruEvictionPolicySelfTest#testBlockCountEviction)
>>>> 
>>>> So, the options I see are:
>>>> 1) deprecate/remove IgfsPerBlockLruEvictionPolicy;
>>>> 2) leave it as is, but explain in javadocs that it only works for the
>>>> on-heap layer and does not in fact evict blocks from the underlying
>>>> offheap layer.
>>>> 
>>>> Please share your opinions.
>>> 
>>> 
>> 
