Hi, I am using Ignite 2.7.6. I have 2 server nodes with one partitioned cache and the following configuration:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="cacheConfiguration">
            <bean class="org.apache.ignite.configuration.CacheConfiguration">
                <property name="name" value="cache1"/>
                <property name="cacheMode" value="PARTITIONED"/>
                <property name="statisticsEnabled" value="true"/>
                <property name="backups" value="1"/>
            </bean>
        </property>
        <property name="communicationSpi">
            <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
                <property name="localPort" value="47500"/>
            </bean>
        </property>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="localPort" value="47100"/>
                <property name="localPortRange" value="100"/>
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>ignite1:47100..47200</value>
                                <value>ignite2:47100..47200</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
        <property name="clientConnectorConfiguration">
            <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
                <property name="port" value="10800"/>
            </bean>
        </property>
        <property name="dataStorageConfiguration">
            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
                <property name="defaultDataRegionConfiguration">
                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                        <property name="persistenceEnabled" value="true"/>
                        <property name="metricsEnabled" value="true"/>
                    </bean>
                </property>
                <property name="metricsEnabled" value="true"/>
            </bean>
        </property>
        <property name="consistentId" value="{{hostname}}"/>
        <property name="systemThreadPoolSize" value="{{ignite_system_thread_pool_size}}"/>
        <property name="dataStreamerThreadPoolSize" value="{{ignite_cluster_data_streamer_thread_pool_size}}"/>
    </bean>
</beans>

I loaded 1.5 million entries into the cluster via the data streamer. I tested this topology without a near cache and everything was fine, but when I tried to add a near cache on my client nodes, the server nodes started to keep data on heap and read throughput dropped dramatically (from 150k rps to 10k rps). My clients' configuration:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="clientMode" value="true"/>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>ignite1:47100..47200</value>
                                <value>ignite2:47100..47200</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
        <property name="dataStreamerThreadPoolSize" value="8"/>
        <property name="systemThreadPoolSize" value="8"/>
        <property name="cacheConfiguration">
            <bean class="org.apache.ignite.configuration.CacheConfiguration">
                <!-- Cache configuration has to be the same as in the server config -->
                <property name="name" value="cache1"/>
                <property name="cacheMode" value="PARTITIONED"/>
                <property name="statisticsEnabled" value="true"/>
                <property name="backups" value="1"/>
                <property name="nearConfiguration">
                    <bean class="org.apache.ignite.configuration.NearCacheConfiguration">
                        <property name="nearEvictionPolicyFactory">
                            <bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
                                <property name="maxSize" value="100000"/>
                            </bean>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>

On visor I see:

Nodes for: cache1(@c0)
+=================================================================================================================================+
| Node ID8(@), IP             | CPUs | Heap Used | CPU Load | Up Time      | Size (Primary / Backup)               | Hi/Mi/Rd/Wr  |
+=================================================================================================================================+
| BCA8F378(@n2), 10.100.0.239 | 4    | 32.32 %   | 2.17 %   | 00:38:33.071 | Total: 55204 (55204 / 0)              | Hi: 1671212  |
|                             |      |           |          |              | Heap: 55204 (55204 / <n/a>)           | Mi: 35034768 |
|                             |      |           |          |              | Off-Heap: 0 (0 / 0)                   | Rd: 36705980 |
|                             |      |           |          |              | Off-Heap Memory: 0                    | Wr: 0        |
+-----------------------------+------+-----------+----------+--------------+---------------------------------------+--------------+
| 905F83EE(@n3), 10.100.0.230 | 4    | 52.56 %   | 6.67 %   | 00:38:33.401 | Total: 54051 (54051 / 0)              | Hi: 1766495  |
|                             |      |           |          |              | Heap: 54051 (54051 / <n/a>)           | Mi: 34283753 |
|                             |      |           |          |              | Off-Heap: 0 (0 / 0)                   | Rd: 36050248 |
|                             |      |           |          |              | Off-Heap Memory: 0                    | Wr: 0        |
+-----------------------------+------+-----------+----------+--------------+---------------------------------------+--------------+
| 793E1BC9(@n1), 10.100.0.206 | 4    | 99.33 %   | 38.43 %  | 00:51:11.877 | Total: 2999836 (2230060 / 769776)     | Hi: 17323596 |
|                             |      |           |          |              | Heap: 1499836 (1499836 / <n/a>)       | Mi: 0        |
|                             |      |           |          |              | Off-Heap: 1500000 (730224 / 769776)   | Rd: 17323596 |
|                             |      |           |          |              | Off-Heap Memory: <n/a>                | Wr: 0        |
+-----------------------------+------+-----------+----------+--------------+---------------------------------------+--------------+
| 0147FB02(@n0), 10.100.0.205 | 4    | 96.48 %   | 40.33 %  | 00:51:11.820 | Total: 2999814 (2269590 / 730224)     | Hi: 17335702 |
|                             |      |           |          |              | Heap: 1499814 (1499814 / <n/a>)       | Mi: 0        |
|                             |      |           |          |              | Off-Heap: 1500000 (769776 / 730224)   | Rd: 17335702 |
|                             |      |           |          |              | Off-Heap Memory: <n/a>                | Wr: 0        |
+---------------------------------------------------------------------------------------------------------------------------------+

The 1st and 2nd entries are client nodes; the 3rd and 4th are server nodes.

What is wrong with my near cache configuration? Do I have to mirror the whole server-side cache configuration in the client nodes' configuration? (For example, when I omitted the backups parameter, I received the exception "Affinity key backups mismatch".)

--
Pozdrawiam / Regards,
Dominik Przybysz
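PS: In case it is relevant, here is a minimal sketch of the alternative I am considering: creating the near cache programmatically on the client via `Ignite.getOrCreateNearCache(...)` instead of mirroring the server-side `CacheConfiguration` in the client XML. The class and method names are from the public Ignite API; the config path `client-config.xml` and the Integer/String key and value types are placeholders for illustration only.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory;
import org.apache.ignite.configuration.NearCacheConfiguration;

public class NearCacheClient {
    public static void main(String[] args) {
        // Start a client node from Spring XML that contains no
        // cacheConfiguration section at all (placeholder path).
        Ignite ignite = Ignition.start("client-config.xml");

        // Near cache configured in code instead of duplicating the
        // server-side CacheConfiguration in the client XML.
        NearCacheConfiguration<Integer, String> nearCfg = new NearCacheConfiguration<>();
        nearCfg.setNearEvictionPolicyFactory(new LruEvictionPolicyFactory<>(100_000));

        // Attach a near cache to the existing server-side cache "cache1".
        IgniteCache<Integer, String> cache = ignite.getOrCreateNearCache("cache1", nearCfg);

        System.out.println(cache.get(1));
    }
}
```

This avoids keeping two copies of the cache configuration in sync, assuming the dynamic route behaves the same as the XML one.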