Hello,

I am running into strange problems with Ignite Native Persistence when the data 
exceeds the size of the cache (memory).

I have 3 nodes with 8 GB of RAM each, one cache configured with 1 backup. I am 
using SQL queries, and the cache has 3 indexes.
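For reference, the cache setup is roughly the following (a minimal Java sketch; 
the cache name and the MyObject value class are placeholders, and the 3 indexes 
are assumed to come from annotations on the value class):

    import org.apache.ignite.configuration.CacheConfiguration;

    // Sketch of the cache configuration; "myCache" and MyObject are placeholder names.
    CacheConfiguration<Long, MyObject> cacheCfg = new CacheConfiguration<>("myCache");
    cacheCfg.setBackups(1);                                // 1 backup copy per partition
    cacheCfg.setIndexedTypes(Long.class, MyObject.class);  // enables SQL; the indexes come
                                                           // from @QuerySqlField(index = true)
                                                           // fields on the value class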

I am loading 9.6M objects, using IgniteDataStreamer in a client. The work/db 
directory is about 13 GB on each node. Everything is fine.
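The loading client is essentially the standard IgniteDataStreamer pattern, 
something like the sketch below (the value class, cache name, and config file 
are placeholders):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteDataStreamer;
    import org.apache.ignite.Ignition;

    public class Loader {
        public static void main(String[] args) {
            Ignition.setClientMode(true);  // run as a client node

            try (Ignite ignite = Ignition.start("client-config.xml");
                 IgniteDataStreamer<Long, MyObject> stmr = ignite.dataStreamer("myCache")) {
                for (long i = 0; i < 9_600_000; i++)
                    stmr.addData(i, new MyObject(i));  // buffered, sent to servers in batches
            }  // close() flushes the remaining batches and waits on their futures;
               // this is the wait that never completes in my case
        }
    }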

Then I am trying to load another 4.8M objects. Here is where the problems start:
1) Loading becomes slow, less than 1000 objects per second.
2) Objects are missing (not inserted). There is no indication of this, neither 
in the logs nor in any feedback to the client that does the loading.
3) The client that does the loading eventually hangs, waiting on a future. It 
never returns to my code.
4) Another client that does SQL queries eventually hangs as well.
5) Restarting the clients does not help; they hang again. As a result, 
deactivating the cluster to allow a server restart is not possible.

There are no logs to indicate any problems. No errors or warnings. Only 
periodic metrics logs.

The first time I ran into this, each server had 16 GB of RAM, and the problem 
appeared when the work/db directory was about 35 GB.

This leads me to believe that the problem is related to the data set growing 
larger than what the in-memory cache can hold.

I am using simple native persistence, no special configuration.

        <property name="persistentStoreConfiguration">
            <bean class="org.apache.ignite.configuration.PersistentStoreConfiguration"/>
        </property>
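
For completeness, the programmatic equivalent of that snippet would be 
something like this sketch (all defaults, nothing tuned):

    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.configuration.PersistentStoreConfiguration;

    // Same as the XML above: enable native persistence with default settings.
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());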

ver. 2.1.0#20170720-sha1:a6ca5c8a

OS: Linux 3.10.0-514.el7.x86_64 amd64

VirtualBox VMs (all on the same host), with dynamically allocated disks (a 
single physical drive on the host)

Roger
