Hi Mikhail, Pavel,

Indeed, the two pieces of information Pavel provided do not seem to match my 
situation.

I am using gridgain-community-8.8.9 version. 

The server is configured in ignite-server-config1.xml, and the thin client and 
the continuous query are set up in ThinClientExample.java (both files are attached).

The scenario you described is accurate. One addition: in the second step, 40 
thin clients are used to register 40 listeners. I had previously tried with up 
to 10 listeners and the problem did not occur, so it seems to be related to the 
number of listeners.
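
For reference, each listener is registered roughly as in the minimal sketch 
below. This is only a sketch: the server address and cache name are 
placeholders, and it assumes the thin client continuous query API 
(ClientCache#query(ContinuousQuery, ClientDisconnectListener)) available in 
recent Ignite/GridGain releases; the actual code is in the attached 
ThinClientExample.java.

import javax.cache.Cache;

import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ListenerClientSketch {
    public static void main(String[] args) {
        // One thin client per listener; in the real test 40 of these run at once.
        IgniteClient client = Ignition.startClient(new ClientConfiguration()
            .setAddresses("127.0.0.1:10800")); // placeholder address

        ClientCache<Integer, String> cache = client.getOrCreateCache("testCache"); // placeholder cache name

        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
        qry.setLocalListener(evts ->
            evts.forEach(e -> System.out.println("Updated key: " + e.getKey())));

        // Registers the listener; the second argument is notified on thin client disconnect.
        QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry, reason -> {});

        // Keep 'client' and 'cur' open for as long as the listener must stay active.
    }
}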

Sent from Mail for Windows

From: Mikhail Petrov
Sent: October 13, 2023, 18:42
To: [email protected]
Subject: Re: On the issue of continuous query function monitoring in thinclients

Pavel, in fact, I don't think that the mentioned ticket has anything to do with 
the described problem.

Mengyu Jing, what version of Apache Ignite are you using? Could you please 
attach the Ignite server node configuration, the thin client configuration, 
and the continuous query you are running?

Do I understand your scenario correctly:

1. Start 12 server nodes
2. Register a Continuous Query via thin clients
3. Start loading the cache using thin clients
4. Kill one of the server nodes
5. Restart the disconnected node so that it rejoins the cluster
6. CacheContinuousQueryEventBuffer#backupQ is no longer being cleared on the 
restarted node
On 13.10.2023 07:30, Pavel Tupitsyn wrote:
Hello, 

We are working on this issue [1] [2]

[1] https://issues.apache.org/jira/browse/IGNITE-20424
[2] https://github.com/apache/ignite/pull/10980

On Thu, Oct 12, 2023 at 6:27 PM Mengyu Jing <[email protected]> wrote:
Hello, Igniters!
 
I started an Ignite cluster with 12 nodes; persistence was not enabled. 
 
I used 40 thin clients to listen for cache changes through continuous queries, 
while some other thin clients were constantly inserting data into the cache. 
When I killed one node in the cluster with the kill command, the node stopped 
quickly and was brought back up by K8S, but after that it kept restarting.
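
Roughly, each loading client does something like the following sketch (again 
only a sketch; the address and cache name are placeholders, not the real setup):

import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class LoaderClientSketch {
    public static void main(String[] args) {
        // A loading thin client that keeps inserting entries while the 40 listeners are active.
        try (IgniteClient client = Ignition.startClient(new ClientConfiguration()
            .setAddresses("127.0.0.1:10800"))) { // placeholder address

            ClientCache<Integer, String> cache = client.getOrCreateCache("testCache"); // placeholder cache name

            for (int i = 0; ; i++)
                cache.put(i % 1_000_000, "value-" + i); // each put produces a continuous query event
        }
    }
}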
 
Analyzing the logs, I found that the node was kicked out of the cluster because 
of continuously growing heap memory. A heap dump showed that the memory is 
occupied by the backupQ field of the CacheContinuousQueryEventBuffer class. It 
seems that entries in this queue are no longer removed, so it grows without bound. 
 
Could the restart cause this node to stop receiving CacheContinuousQueryBatchAck 
messages?
Has anyone encountered this situation before?
 
Sent from Mail for Windows
 

Attachment: ignite-server-config1.xml
Description: XML document

Attachment: ThinClientExample.java
Description: Binary data
