Shivakumar's system configuration and mine could be different, but I feel we 
are seeing the same issue here.

Deleting tables from a single thick client causes other thick clients to run out 
of memory. This OOM issue was reported here:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-2-7-0-Ignite-client-memory-leak-td28938.html
That thread has the server and client configs and a client JVM heap dump 
attached. Please go through it.

Steps to reproduce the problem:
- Take Ignite 2.7.6.
- Allocate about -Xmx 1GB for each thick client and connect them to an 
Ignite cluster.
- Let the Ignite cluster have about 500 dummy tables. Keep deleting them.
Eventually, you will see the thick clients failing with OOM. A rough sketch of 
the create/drop loop is below.
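
Here is a minimal sketch of what the deleting client does, assuming tables 
named PERSON0..PERSON499 in the PUBLIC schema and a plain client-mode 
IgniteConfiguration (the table names, the "dummy" entry-point cache and the 
counts are placeholders, not our exact setup). A second, mostly idle thick 
client connected to the same cluster is the one that eventually hits OOM:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.IgniteConfiguration;

public class DropTablesRepro {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration().setClientMode(true);
        try (Ignite client = Ignition.start(cfg)) {
            // Any cache can serve as the entry point for SQL; "dummy" is a placeholder.
            IgniteCache<?, ?> sql = client.getOrCreateCache("dummy");

            // Create ~500 dummy tables.
            for (int i = 0; i < 500; i++) {
                sql.query(new SqlFieldsQuery(
                    "CREATE TABLE PERSON" + i + " (id INT PRIMARY KEY, name VARCHAR)")
                    .setSchema("PUBLIC")).getAll();
            }

            // Keep deleting them; other thick clients eventually fail with OOM.
            for (int i = 0; i < 500; i++) {
                sql.query(new SqlFieldsQuery("DROP TABLE PERSON" + i)
                    .setSchema("PUBLIC")).getAll();
            }
        }
    }
}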

Now, coming to your questions:
1. Does the same occur if IgniteCache.destroy() is called instead of DROP TABLE?
All the caches we destroy are SQL caches, so we use DROP TABLE. 
IgniteCache.destroy() gives an exception:
Exception in thread "main" class org.apache.ignite.IgniteException: Only cache 
created with cache API may be removed with direct call to destroyCache 
[cacheName=SQL_PUBLIC_PERSON1000]
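
To make the difference concrete, here is a sketch of the two paths, assuming a 
thick client instance "client" and a table whose underlying cache is 
SQL_PUBLIC_PERSON1000 as in the exception above (the "dummy" entry-point cache 
is the same placeholder as in the earlier sketch):

IgniteCache<?, ?> sqlCache = client.cache("SQL_PUBLIC_PERSON1000");

// Direct cache API call fails for SQL-created caches:
// sqlCache.destroy(); // -> IgniteException: Only cache created with cache API
//                     //    may be removed with direct call to destroyCache

// So we drop the table through SQL instead (any cache works as the query entry point):
client.cache("dummy")
      .query(new SqlFieldsQuery("DROP TABLE PERSON1000").setSchema("PUBLIC"))
      .getAll();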

2. Does the same occur if SQL is not enabled for a cache?
We did not check this, and it is not a use case we have. We primarily use SQL 
caches.

3. It would be nice to see IgniteConfiguration and CacheConfiguration
causing problems.
Attached in the other thread, linked above.

4. Need to figure out why almost all pages are dirty. It might be a clue.
That is probably specific to the scenario Shivakumar sent. In my case, all the 
data is in memory; we have about 100GB of data, and the data regions together 
add up to about 128GB.

I don't want to confuse this thread, as Shivakumar's scenario could be different.
I don't mind discussing this on the other thread I opened (linked above, about 
the memory leaks).
The bottom line is: deleting tables from one thick client causes other thick 
clients to go OOM. This can be seen on 2.7.6 too.

