Re: CacheContinuousQuery memory use and best practices

2019-04-23 Thread Ilya Kasnacheev
Hello!

Since nobody is chiming in, my opinion is that:

1) Please don't throw exceptions out of this handler!
2) I don't think you can switch it off; this is how it is implemented in other
parts of JCache as well.
3) I think it still makes sense. Avoid blocking in the continuous query
handler (see the sketch below).
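
Here is a minimal sketch of that hand-off pattern. The cache name "myCache", the
queue capacity and the worker-thread setup are illustrative only, not from this
thread: the local listener only enqueues events, and a separate worker thread
does the deserialization and keeps its exceptions to itself.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

import javax.cache.Cache;
import javax.cache.event.CacheEntryEvent;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class NonBlockingCqListener {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start(); // node configuration omitted for brevity

        IgniteCache<String, BinaryObject> cache =
            ignite.getOrCreateCache("myCache").withKeepBinary();

        // Bounded hand-off queue: the listener only enqueues, the worker does the real work.
        BlockingQueue<CacheEntryEvent<? extends String, ? extends BinaryObject>> queue =
            new ArrayBlockingQueue<>(10_000);

        ContinuousQuery<String, BinaryObject> qry = new ContinuousQuery<>();

        qry.setLocalListener(events -> {
            for (CacheEntryEvent<? extends String, ? extends BinaryObject> evt : events) {
                try {
                    queue.put(evt); // cheap: no deserialization, no exception leaves the listener
                }
                catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    CacheEntryEvent<? extends String, ? extends BinaryObject> evt = queue.take();
                    // Deserialize and process here; a bad event is logged, not rethrown.
                    process(evt);
                }
                catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }, "cq-consumer");
        worker.start();

        // Keep the cursor around; closing it cancels the continuous query.
        QueryCursor<Cache.Entry<String, BinaryObject>> cur = cache.query(qry);
    }

    private static void process(CacheEntryEvent<? extends String, ? extends BinaryObject> evt) {
        // Application-specific handling (hypothetical).
    }
}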

As for your client-server scenario, I'm not sure what to do. If you have a lot
of small updates, you could try increasing pageSize; it's 1024 by default.
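
A larger page size batches more events into each message sent to the local
listener. A minimal sketch, reusing the ContinuousQuery object from the example
above (4096 and 500 ms are just example values):

// Batch more events per page before they are delivered to the local listener.
qry.setPageSize(4096);        // default is 1024

// Optionally flush partially filled pages on a timer instead of waiting for a full page.
qry.setTimeInterval(500);     // milliseconds; default is 0 (no time-based flush)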

Regards,
-- 
Ilya Kasnacheev


Tue, 16 Apr 2019 at 14:23, johnny_rotten:

> Hi, I'm looking into an issue where we have an Ignite (2.6) client node
> running a CacheContinuousQuery on a cache full of binary objects, and
> eventually the server node runs out of memory. Usually the server node is
> happy to run with 2-3 GB of heap; however, while this client is running a
> CacheContinuousQuery on a cache, heap usage can grow beyond 20 GB until the
> client is stopped, and then, about 20 minutes later, the objects are garbage
> collected on the server node and usage drops back to 2 GB. In heap dumps I
> see the heap is full of CacheContinuousQueryEvents.
>
> Some questions:
>
> 1) In my client continuous query handler code, what happens when an error
> is thrown:
>
> private void handleCacheEvent(CacheEntryEvent<String, MyBinaryObject> event) {
>     // tries to deserialize the event but fails with an error
> }
>
> I don't see any exception thrown. Will this event stay in the cache as an
> unconsumed event, potentially causing a leak for events that have not been
> handled correctly?
>
> 2) Why does CacheContinuousQueryEvent keep a reference to 'oldVal', i.e.
> the old value in the cache? This could be causing a problem, as we don't
> care about old values in the cache. Can we switch that off? Why isn't that
> the default?
>
> 3) In the method that handles cache events, is it best practice to put the
> cache event straight onto a blocking queue to make sure there is no
> slow-consumer problem? It makes sense to me, but I don't see it recommended
> anywhere. If we don't, I can imagine the outbound queue of the server node
> growing...
>
> thanks for any pointers!
>
>
>


Re: CacheContinuousQuery memory use and best practices

2019-04-17 Thread johnny_rotten
Also some details on the cache config:

atomicityMode=TRANSACTIONAL
cacheMode = PARTITIONED
backups=1

.. the rest are defaults. Cache value types are binary objects.
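
For reference, a programmatic equivalent of that configuration might look
roughly like this (the cache name is a placeholder, not from this thread):

import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class CacheConfigSketch {
    public static CacheConfiguration<String, Object> config() {
        // "myCache" is a placeholder; values are read as BinaryObject via withKeepBinary().
        CacheConfiguration<String, Object> ccfg = new CacheConfiguration<>("myCache");

        ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
        ccfg.setCacheMode(CacheMode.PARTITIONED);
        ccfg.setBackups(1);

        return ccfg; // everything else stays at defaults
    }
}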

So here's a simpler question: if my client node does a cache.put(key=String,
value=BinaryObject) 100 times on the same key, when do the previous 99 values
become eligible for garbage collection?

Is it when the backup node's queue has all of the events that have taken
place?





CacheContinuousQuery memory use and best practices

2019-04-16 Thread johnny_rotten
Hi, I'm looking into an issue where we have an Ignite (2.6) client node
running a CacheContinuousQuery on a cache full of binary objects, and
eventually the server node runs out of memory. Usually the server node is
happy to run with 2-3 GB of heap; however, while this client is running a
CacheContinuousQuery on a cache, heap usage can grow beyond 20 GB until the
client is stopped, and then, about 20 minutes later, the objects are garbage
collected on the server node and usage drops back to 2 GB. In heap dumps I
see the heap is full of CacheContinuousQueryEvents.

Some questions:

1) In my client continuous query handler code, what happens when an error is
thrown:

private void handleCacheEvent(CacheEntryEvent<String, MyBinaryObject> event) {
    // tries to deserialize the event but fails with an error
}

I don't see any exception thrown. Will this event stay in the cache as an
unconsumed event, potentially causing a leak for events that have not been
handled correctly?

2) Why does CacheContinuousQueryEvent keep a reference to 'oldVal', i.e. the
old value in the cache? This could be causing a problem, as we don't care
about old values in the cache. Can we switch that off? Why isn't that the
default?

3) In the method that handles cache events, is it best practice to put the
cache event straight onto a blocking queue to make sure there is no
slow-consumer problem? It makes sense to me, but I don't see it recommended
anywhere. If we don't, I can imagine the outbound queue of the server node
growing...

thanks for any pointers!


