Hello!

1. I'm not sure, but this is likely due to the server nodes needing to track
which entries are held in clients' near caches so they can keep them in sync.

2. I'm not sure there is a single best way. A near cache is a cache, and
"cache" usually means "hot subset". The only way to guarantee a complete,
always in-sync copy is to run a server node with a REPLICATED cache using
FULL_SYNC write synchronization.
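For illustration, such a cache configuration might look like the sketch below
(the cache name "myCache" and the key/value types are placeholders, not from
your setup):

```java
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class ReplicatedCacheConfig {
    public static CacheConfiguration<Integer, String> create() {
        // REPLICATED: every server node keeps a full copy of the data.
        // FULL_SYNC: a write completes only after all copies are updated,
        // so a read on any server node always sees the latest value.
        return new CacheConfiguration<Integer, String>("myCache")
            .setCacheMode(CacheMode.REPLICATED)
            .setWriteSynchronizationMode(
                CacheWriteSynchronizationMode.FULL_SYNC);
    }
}
```

Pass this configuration to Ignite#getOrCreateCache() on a server node that
should hold the full, synchronously updated copy.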

3. You may open it in a singleton service's init(), for example. Even then
there is no guarantee that the query is always up, only that it is never down
for long. If you really need to catch every write, consider a cache store
instead.
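A minimal sketch of that approach follows. The class name, the cache name
"myCache", and the key/value types are assumptions for illustration; the
Service, ContinuousQuery, and deployClusterSingleton() APIs are standard
Ignite:

```java
import javax.cache.event.EventType;
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.resources.IgniteInstanceResource;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class CacheListenerService implements Service {
    @IgniteInstanceResource
    private Ignite ignite; // injected by Ignite on deployment

    private QueryCursor<?> cursor;

    @Override public void init(ServiceContext ctx) {
        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

        // React only to CREATED events, as in your question 2.
        qry.setLocalListener(events -> {
            for (var evt : events) {
                if (evt.getEventType() == EventType.CREATED)
                    System.out.println("Created key: " + evt.getKey());
            }
        });

        // Keep the cursor: the query stays open until the cursor is closed.
        cursor = ignite.<Integer, String>cache("myCache").query(qry);
    }

    @Override public void execute(ServiceContext ctx) {
        // Nothing to do; the continuous query runs in the background.
    }

    @Override public void cancel(ServiceContext ctx) {
        if (cursor != null)
            cursor.close(); // stop the query when the service is undeployed
    }
}
```

Deploy it with ignite.services().deployClusterSingleton("cacheListener",
new CacheListenerService()); Ignite then redeploys the service (and thus
reopens the query) on another node if its current node fails, which is why
it can be briefly down but never down for long.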

Regards,
-- 
Ilya Kasnacheev


Tue, 23 Feb 2021 at 01:32, bhlewka <bhle...@invidi.com>:

> I have 3 questions:
>
> 1. Why do the server nodes use on heap storage when a client initializes a
> dynamic near cache? We have never noticed any on-heap storage usage before
> using a near cache on one of our clients.
>
>     Observe no caches initialized on server
>
>     Initialize a normal cache and load 10 entries using client A, then
> disconnect client A
>
>     Observe loaded data, with 0 on-heap entries and no clients connected
>
>     Connect client B with a near cache and get 2 records, populating the
> near cache with 2 records.
>
>     Observe 4 on-heap entries while Client B is connected. Observe Client B
> has 2 records in its near cache.
>
>     Disconnect client B; observe 2 on-heap entries and no clients
> connected.
>
>     Why do we have some on-heap entries?
>
> 2. What is the best way to ensure the near cache is kept completely
> in-sync with the server cache? Would it be sufficient to use a continuous
> query that makes a cache.get() call whenever the continuous query gets a
> CREATED event? The cache will keep a small enough amount of data that we
> do not need an eviction policy.
>
> 3. What is the best way to keep a continuous query open indefinitely?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>