Using SQL cache with the javax.cache API
Hi,
I created an SQL cache, but sometimes I want to use it via the javax.cache API and it doesn't work. I created the cache with SQL:

    CREATE TABLE IF NOT EXISTS Person6 (
        id varchar primary key,
        city_id int,
        name varchar,
        age int,
        company varchar
    ) WITH "template=replicated,backups=1,wrap_key=false,value_type=java.util.HashMap,cache_name=Person6";

    insert into PERSON6(ID, CITY_ID, NAME, AGE, COMPANY) values ('1', 1, 'TEST', 20, 'Bla');
    insert into PERSON6(ID, CITY_ID, NAME, AGE, COMPANY) values ('2', 1, 'TEST2', 20, 'Bla 1');

Next I created a client, and fetching data with SqlFieldsQuery works without problems:

    SqlFieldsQuery query = new SqlFieldsQuery("SELECT * from Person6");
    FieldsQueryCursor<List<?>> cursor = cache.query(query);

but when I tried to query the cache with get:

    IgniteCache cache = ignite.cache("Person6");
    System.out.println("CacheSize: " + cache.size(CachePeekMode.PRIMARY));
    System.out.println("1: " + cache.get("1"));

I received:

    CacheSize: 2
    1: null

To solve it I added withKeepBinary():

    IgniteCache cache = ignite.cache("Person6").withKeepBinary();

and then I received valid data:

    CacheSize: 2
    1: java.util.HashMap [idHash=660595570, hash=-1179353910, CITY_ID=1, ID=null, NAME=TEST, AGE=20, COMPANY=Bla]

but now I cannot add a HashMap to the cache:

    Map value = new HashMap<>();
    value.put("ID", uuid);
    value.put("CITY_ID", 1);
    value.put("NAME", "c");
    value.put("AGE", 90);
    value.put("COMPANY", "A");
    cache.put(uuid, value);

which throws:

    Exception in thread "main" javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: Unexpected binary object class [type=class org.apache.ignite.internal.processors.cacheobject.UserCacheObjectImpl]

Adding works only with objects created with BinaryObjectBuilder. Is this the expected behaviour for String -> HashMap caches?

-- 
Pozdrawiam / Regards,
Dominik Przybysz
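[Editor's note] A minimal sketch of the BinaryObjectBuilder workaround described above, assuming an already-started Ignite instance and an existing "Person6" cache; the key "3" and the field values are illustrative. The binary type name passed to builder() must match the value_type from the CREATE TABLE statement:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;

public class Person6Put {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // withKeepBinary() keeps values in binary form instead of
            // trying to deserialize them into java.util.HashMap.
            IgniteCache<String, BinaryObject> cache =
                ignite.<String, BinaryObject>cache("Person6").withKeepBinary();

            // Build a value whose binary type name matches the
            // value_type used in the CREATE TABLE statement.
            BinaryObjectBuilder builder = ignite.binary().builder("java.util.HashMap");
            builder.setField("CITY_ID", 1);
            builder.setField("NAME", "c");
            builder.setField("AGE", 90);
            builder.setField("COMPANY", "A");

            cache.put("3", builder.build());
            System.out.println("3: " + cache.get("3"));
        }
    }
}
```

This mirrors what reportedly worked in the end: binary objects built this way carry the SQL-visible field metadata, while a plain HashMap put through the keep-binary view does not.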
Re: Near cache configuration for partitioned cache
Hi,
between the first test with the near cache configured in XML and configuring the near cache via Java code I cleaned the data. But why should I have to clean data on the server nodes when I add a near cache only on the client nodes to gain performance? I am only testing Ignite now, so I could do this, but it won't be acceptable in production.

On Tue, 24 Mar 2020 at 16:47, Evgenii Zhuravlev wrote:
> Hi,
>
> I see that you have persistence, did you clean the persistence directory
> before changing the configuration?
>
> Evgenii
>
> On Tue, 24 Mar 2020 at 02:33, Dominik Przybysz wrote:
>> Hi,
>> I configured the client node as you described in your email and heap usage on
>> the server nodes does not look as expected:
>>
>> [quoted Visor node statistics; the same output appears in full in the next
>> message in this thread]
>>
>> The 1st and 2nd entries are clients and the 3rd and 4th are server nodes.
>> My client nodes have an LRU near cache with size 10 and I am querying the
>> cache with 15 random data.
>> But why are there heap entries on the server nodes?
>>
>> On Tue, 24 Mar 2020 at 08:40, Dominik Przybysz wrote:
>>> Hi,
>>> exactly, I want to have the near cache only on the client nodes. I will check
>>> your advice with the dynamic cache.
>>> I have two server nodes which keep the data and I want to get the data from
>>> them via my client nodes.
>>> I am also curious what happened with the heap on the server nodes.
>>>
>>> On Mon, 23 Mar 2020 at 23:13, Evgenii Zhuravlev wrote:
>>>> Hi,
>>>>
>>>> Near cache configuration in XML creates near caches for all nodes, [truncated]
Re: Near cache configuration for partitioned cache
Hi,
I configured the client node as you described in your email and heap usage on the server nodes does not look as expected:

| Node ID8(@), IP             | CPUs | Heap Used | CPU Load | Up Time      | Size (Primary / Backup)                | Hi/Mi/Rd/Wr  |
| 112132F5(@n3), 10.100.0.230 | 4    | 36.49 %   | 76.50 %  | 00:13:10.385 | Total: 75069 (75069 / 0)               | Hi: 19636833 |
|                             |      |           |          |              | Heap: 75069 (75069 / )                 | Mi: 39403166 |
|                             |      |           |          |              | Off-Heap: 0 (0 / 0)                    | Rd: 5903     |
|                             |      |           |          |              | Off-Heap Memory: 0                     | Wr: 0        |
| 74786280(@n2), 10.100.0.239 | 4    | 33.94 %   | 81.07 %  | 01:06:23.896 | Total: 74817 (74817 / 0)               | Hi: 22447160 |
|                             |      |           |          |              | Heap: 74817 (74817 / )                 | Mi: 44987105 |
|                             |      |           |          |              | Off-Heap: 0 (0 / 0)                    | Rd: 67434265 |
|                             |      |           |          |              | Off-Heap Memory: 0                     | Wr: 0        |
| 5AB7B5FD(@n0), 10.100.0.205 | 4    | 69.39 %   | 15.50 %  | 00:52:54.529 | Total: 2706142 (1460736 / 1245406)     | Hi: 43629857 |
|                             |      |           |          |              | Heap: 15 (15 / )                       | Mi: 0        |
|                             |      |           |          |              | Off-Heap: 2556142 (1310736 / 1245406)  | Rd: 43629857 |
|                             |      |           |          |              | Off-Heap Memory:                       | Wr: 52347667 |
| 0608CF95(@n1), 10.100.0.206 | 4    | 42.24 %   | 17.07 %  | 00:52:39.093 | Total: 2706142 (1395406 / 1310736)     | Hi: 43644401 |
|                             |      |           |          |              | Heap: 15 (15 / )                       | Mi: 0        |
|                             |      |           |          |              | Off-Heap: 2556142 (1245406 / 1310736)  | Rd: 43644401 |
|                             |      |           |          |              | Off-Heap Memory:                       | Wr: 52347791 |

The 1st and 2nd entries are clients and the 3rd and 4th are server nodes.
My client nodes have an LRU near cache with size 10 and I am querying the cache with 15 random data.
But why are there heap entries on the server nodes?

On Tue, 24 Mar 2020 at 08:40, Dominik Przybysz wrote:
> Hi,
> exactly, I want to have the near cache only on the client nodes. I will check
> your advice with the dynamic cache.
> I have two server nodes which keep the data and I want to get the data from
> them via my client nodes.
> I am also curious what happened with the heap on the server nodes.
>
> On Mon, 23 Mar 2020 at 23:13, Evgenii Zhuravlev wrote:
>> Hi,
>>
>> Near cache configuration in XML creates near caches for all nodes,
>> including server nodes. As far as I understand, you want to have them on
>> the client side only, right? If so, I'd recommend creating them dynamically:
>> https://www.gridgain.com/docs/latest/developers-guide/near-cache#creating-near-cache-dynamically-on-client-nodes
>>
>> What kind of operations are you running? Are you trying to access data on
>> a server from another server node? In any case, so many entries in heap on
>> the server nodes looks strange.
>>
>> Evgenii
>>
>> On Mon, 23 Mar 2020 at 07:08, Dominik Przybysz wrote:
>>> Hi,
>>> I am using Ignite 2.7.6 and I have 2 server nodes with one partitioned
>>> cache and this configuration:
>>>
>>> [Spring XML configuration; the markup was stripped by the mail archive — truncated]
Re: Near cache configuration for partitioned cache
Hi,
exactly, I want to have the near cache only on the client nodes. I will check your advice with the dynamic cache.
I have two server nodes which keep the data and I want to get the data from them via my client nodes.
I am also curious what happened with the heap on the server nodes.

On Mon, 23 Mar 2020 at 23:13, Evgenii Zhuravlev wrote:
> Hi,
>
> Near cache configuration in XML creates near caches for all nodes,
> including server nodes. As far as I understand, you want to have them on
> the client side only, right? If so, I'd recommend creating them dynamically:
> https://www.gridgain.com/docs/latest/developers-guide/near-cache#creating-near-cache-dynamically-on-client-nodes
>
> What kind of operations are you running? Are you trying to access data on
> a server from another server node? In any case, so many entries in heap on
> the server nodes looks strange.
>
> Evgenii
>
> On Mon, 23 Mar 2020 at 07:08, Dominik Przybysz wrote:
>> Hi,
>> I am using Ignite 2.7.6 and I have 2 server nodes with one partitioned
>> cache and this configuration:
>>
>> [Spring XML server configuration; the markup was stripped by the mail archive.
>> The surviving fragment references org.apache.ignite.configuration.IgniteConfiguration,
>> CacheConfiguration, TcpCommunicationSpi, TcpDiscoverySpi with a
>> TcpDiscoveryVmIpFinder pointing at ignite1:47100..47200 and ignite2:47100..47200,
>> ClientConnectorConfiguration, and DataStorageConfiguration with a
>> DataRegionConfiguration and templated values {{ignite_system_thread_pool_size}}
>> and {{ignite_cluster_data_streamer_thread_pool_size}}.]
>>
>> I loaded 1.5 mln entries into the cluster via the data streamer.
>> I tested this topology without the near cache and everything was fine, but
>> when I tried to add a near cache to my client nodes, the server nodes started
>> to keep data on heap and read rps dramatically fell (from 150k rps to 10k rps).
>>
>> My clients' configuration:
>>
>> [Spring XML client configuration; markup stripped as above. The fragment
>> references IgniteConfiguration, TcpDiscoverySpi with a TcpDiscoveryVmIpFinder
>> and the address ignite1:47100..47200 — truncated]
Near cache configuration for partitioned cache
[Visor output, truncated at the top]
|  |  |  |  |  | Heap: 1499814 (1499814 / )      | Mi: 0        |
|  |  |  |  |  | Off-Heap: 150 (769776 / 730224) | Rd: 17335702 |
|  |  |  |  |  | Off-Heap Memory:                | Wr: 0        |

The 1st and 2nd entries are client nodes, the 3rd and 4th are server nodes.
What is wrong with my near cache configuration? Do I have to mirror the whole cache configuration from the server nodes into the client nodes' configuration? (For example, when I omit the backups parameter, I receive the exception "Affinity key backups mismatch".)

-- 
Pozdrawiam / Regards,
Dominik Przybysz
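[Editor's note] The alternative suggested earlier in the thread — creating the near cache dynamically on the client node only, so that the server-side cache configuration stays untouched — can be sketched as follows. The cache name "myCache", the key/value types, and the eviction size are illustrative assumptions:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.NearCacheConfiguration;

public class ClientNearCache {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true); // the near cache will exist on this client only

        try (Ignite ignite = Ignition.start(cfg)) {
            NearCacheConfiguration<String, String> nearCfg = new NearCacheConfiguration<>();
            // Bound the near cache with an LRU eviction policy (size is illustrative).
            nearCfg.setNearEvictionPolicyFactory(new LruEvictionPolicyFactory<>(100_000));

            // Attach a near cache to an existing server-side cache by name;
            // no near cache configuration is needed in the server XML.
            IgniteCache<String, String> cache =
                ignite.getOrCreateNearCache("myCache", nearCfg);

            System.out.println(cache.get("someKey"));
        }
    }
}
```

With this approach the XML on the server nodes carries no NearCacheConfiguration at all, which avoids the server-side heap entries discussed above.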
Ignite thread pool configuration
Hi,
I read the thread pool documentation (https://apacheignite.readme.io/docs/thread-pools), but I need more information about which thread pools I should configure/tune for my setup. I have 2 server nodes (partitioned cache) and some clients (without near or local caches) which add cache entries via the data streamer and get data via the getAsync operation.

As far as I understand the documentation:
- on the server nodes I should configure especially publicThreadPoolSize, on which the put and get operations occur?
- on the client nodes I should configure dataStreamerThreadPoolSize (for the data streamer) and publicThreadPoolSize (on which put and get occur)?

-- 
Pozdrawiam / Regards,
Dominik Przybysz
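[Editor's note] For reference, the pools in question are set on IgniteConfiguration. A minimal sketch follows; the sizes are illustrative assumptions, not recommendations. Note that in Ignite 2.x cache put/get messages are processed by the striped pool rather than the public pool, which handles compute tasks:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ThreadPoolConfig {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Processes cache put/get messages (defaults to the CPU count).
        cfg.setStripedPoolSize(16);
        // Runs compute tasks; not the pool cache operations go through.
        cfg.setPublicThreadPoolSize(16);
        // Internal operations such as rebalancing.
        cfg.setSystemThreadPoolSize(16);
        // Processes incoming data streamer batches on the receiving node.
        cfg.setDataStreamerThreadPoolSize(16);

        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("Started node: " + ignite.name());
        }
    }
}
```

The dataStreamerThreadPoolSize matters mostly on the nodes that receive the streamed batches (the servers), while the client side of the streamer is driven by its own per-node parallel operations setting.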
Re: Cache metrics on server nodes do not update correctly
Hi,
I used Ignite in version 2.7.6 (but I have also seen this behaviour on other 2.7.x versions) and there isn't any near or local cache. I expect that if I ask the distributed cache about a key which does not exist, then the miss metric will be incremented.

On Wed, 11 Mar 2020 at 11:35, Andrey Gura wrote:
> Denis,
>
> I'm not sure that I understand what the expected behavior should be.
> There are local and aggregated cluster-wide metrics. I don't know
> which one is used by Visor because I never used it :)
>
> Also it would be great to know what version of Apache Ignite is used in
> the described case. I remember some bug with metrics aggregation during
> the discovery metrics message round trip.
>
> On Wed, Mar 11, 2020 at 12:05 AM Denis Magda wrote:
> >
> > @Nikolay Izhikov, @Andrey Gura,
> > could you folks check out this thread?
> >
> > I have a feeling that what Dominik is describing was talked out before
> > and is rather some sort of a limitation than an issue with the current
> > implementation.
> >
> > -
> > Denis
> >
> > On Tue, Mar 3, 2020 at 11:41 PM Dominik Przybysz wrote:
> >
> > > Hi,
> > > I am trying to use a partitioned cache on server nodes to which I connect
> > > with a client node. Statistics of the cache in the cluster are updated, but
> > > only for the hits metric - the misses metric is always 0.
> > >
> > > To reproduce this problem I created a cluster of two nodes:
> > >
> > > Server node 1 adds 100 random test cases and prints cache statistics
> > > continuously:
> > >
> > > public class IgniteClusterNode1 {
> > >     public static void main(String[] args) throws InterruptedException {
> > >         IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
> > >
> > >         CacheConfiguration cacheConfiguration = new CacheConfiguration();
> > >         cacheConfiguration.setName("test");
> > >         cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
> > >         cacheConfiguration.setAtomicityMode(CacheAtomicityMode.ATOMIC);
> > >         cacheConfiguration.setStatisticsEnabled(true);
> > >         igniteConfiguration.setCacheConfiguration(cacheConfiguration);
> > >
> > >         TcpCommunicationSpi communicationSpi = new TcpCommunicationSpi();
> > >         communicationSpi.setLocalPort(47500);
> > >         igniteConfiguration.setCommunicationSpi(communicationSpi);
> > >
> > >         TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
> > >         discoverySpi.setLocalPort(47100);
> > >         discoverySpi.setLocalPortRange(100);
> > >         TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
> > >         ipFinder.setAddresses(Arrays.asList("127.0.0.1:47100..47200",
> > >                 "127.0.0.1:48100..48200"));
> > >         discoverySpi.setIpFinder(ipFinder); // [editor's note: missing in the original post, needed for the ipFinder to take effect]
> > >         igniteConfiguration.setDiscoverySpi(discoverySpi);
> > >
> > >         try (Ignite ignite = Ignition.start(igniteConfiguration)) {
> > >             try (IgniteCache<String, String> cache = ignite.getOrCreateCache("test")) {
> > >                 new Random().ints(1000).map(i -> Math.abs(i % 1000))
> > >                         .distinct().limit(100).forEach(i -> {
> > >                     String key = "data_" + i;
> > >                     String value = UUID.randomUUID().toString();
> > >                     cache.put(key, value);
> > >                 });
> > >             }
> > >             while (true) {
> > >                 System.out.println(ignite.cache("test").metrics());
> > >                 Thread.sleep(5000);
> > >             }
> > >         }
> > >     }
> > > }
> > >
> > > Server node 2 only prints cache statistics continuously:
> > >
> > > public class IgniteClusterNode2 {
> > >     public static void main(String[] args) throws InterruptedException {
> > >         IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
> > >
> > >         CacheConfiguration cacheConfiguration = new CacheConfiguration();
> > >         cacheConfiguration.setName("test");
> > >         cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
> > >         cacheConfiguration.setStatisticsEnabled(true);
> > >         igniteConfiguration.setCacheConfiguration(cacheConfiguration)
Zookeeper discovery with Ignite 2.8.0 - NoClassDefFoundError
Hi,
the Ignite cluster fails to start after an upgrade to Ignite 2.8.0 because of a missing class. I copied the optional jars:

    cp -r libs/optional/ignite-zookeeper/* libs/

and received during node start:

    [15:16:13] Security status [authentication=off, tls/ssl=off]
    [15:16:14,514][SEVERE][main][IgniteKernal] Got exception while starting (will rollback startup routine).
    java.lang.NoClassDefFoundError: org/apache/zookeeper/data/Id
        at org.apache.zookeeper.ZooDefs$Ids.<clinit>(ZooDefs.java:111)
        at org.apache.ignite.spi.discovery.zk.internal.ZookeeperClient.<clinit>(ZookeeperClient.java:68)
        at org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.joinTopology(ZookeeperDiscoveryImpl.java:783)
        at org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.startJoinAndWait(ZookeeperDiscoveryImpl.java:714)
        at org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi.spiStart(ZookeeperDiscoverySpi.java:483)
        at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
        at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:943)
        at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1960)
        at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1276)
        at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
        at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1703)
        at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1117)
        at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1035)
        at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:921)
        at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:820)
        at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:690)
        at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:659)
        at org.apache.ignite.Ignition.start(Ignition.java:346)
        at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:300)
    Caused by: java.lang.ClassNotFoundException: org.apache.zookeeper.data.Id
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
        ... 19 more
    [15:16:14] Ignite node stopped OK [uptime=00:00:02.201]
    [the same NoClassDefFoundError stack trace is printed a second time on shutdown]
    Failed to start grid: org/apache/zookeeper/data/Id
    Note! You may use 'USER_LIBS' environment variable to specify your classpath.

-- 
Pozdrawiam / Regards,
Dominik Przybysz
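[Editor's note] In ZooKeeper 3.5.x the generated record classes, including org.apache.zookeeper.data.Id, were moved into a separate zookeeper-jute jar, so a classpath that contains only the main zookeeper jar can fail in exactly this way. A possible check and workaround, using the USER_LIBS mechanism mentioned in the startup log (the jar version and paths are illustrative assumptions):

```shell
# Check whether the missing class is present in any jar on the classpath.
for jar in libs/*.jar; do
  unzip -l "$jar" 2>/dev/null | grep -q 'org/apache/zookeeper/data/Id.class' \
    && echo "found in $jar"
done

# If it is absent, add the zookeeper-jute jar via the USER_LIBS
# environment variable suggested by the startup log.
export USER_LIBS=/path/to/zookeeper-jute-3.5.5.jar
bin/ignite.sh
```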
Cache metrics on server nodes do not update correctly
alOverflowCnt=-1, writeBehindCriticalOverflowCnt=-1, writeBehindErrorRetryCnt=-1, writeBehindBufSize=-1, totalPartitionsCnt=1024, rebalancingPartitionsCnt=0, rebalancedKeys=47, estimatedRebalancingKeys=47, keysToRebalanceLeft=0, rebalancingKeysRate=0, rebalancingBytesRate=0, rebalanceStartTime=0, rebalanceFinishTime=0, rebalanceClearingPartitionsLeft=0, keyType=java.lang.Object, valType=java.lang.Object, isStoreByVal=true, isStatisticsEnabled=true, isManagementEnabled=false, isReadThrough=false, isWriteThrough=false, isValidForReading=true, isValidForWriting=true]

Visor confirms my observation:

Nodes for: test(@c0)
| Node ID8(@), IP            | CPUs | Heap Used | CPU Load | Up Time      | Size (Primary / Backup) | Hi/Mi/Rd/Wr |
| D09713CD(@n1), 10.10.10.14 | 8    | 2.81 %    | 0.53 %   | 00:15:00.291 | Total: 47 (47 / 0)      | Hi: 0       |
|                            |      |           |          |              | Heap: 0 (0 / )          | Mi: 0       |
|                            |      |           |          |              | Off-Heap: 47 (47 / 0)   | Rd: 0       |
|                            |      |           |          |              | Off-Heap Memory:        | Wr: 0       |
| B8CCBCAB(@n0), 10.10.10.14 | 8    | 1.15 %    | 0.50 %   | 00:15:04.122 | Total: 53 (53 / 0)      | Hi: 3       |
|                            |      |           |          |              | Heap: 0 (0 / )          | Mi: 48      |
|                            |      |           |          |              | Off-Heap: 53 (53 / 0)   | Rd: 51      |
|                            |      |           |          |              | Off-Heap Memory:        | Wr: 100     |

I have seen the same behaviour with the REST API as with the thin client, but I don't have a prepared test case for this.
Is there a good way to correctly configure collecting statistics on server nodes? Should I create an issue for this problem?

-- 
Pozdrawiam / Regards,
Dominik Przybysz
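[Editor's note] One thing worth separating when debugging this is local versus cluster-wide metrics, since a reply in this thread notes that both exist and may differ. A minimal sketch; the cache name "test" matches the reproducer earlier in the thread, and statistics are assumed to be enabled in the cache configuration:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMetrics;

public class MetricsCheck {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<String, String> cache = ignite.cache("test");

            cache.get("missing-key"); // should register as a miss

            // Metrics accumulated on this node only.
            CacheMetrics local = cache.localMetrics();
            System.out.println("local hits=" + local.getCacheHits()
                + " misses=" + local.getCacheMisses());

            // Cluster-wide metrics, aggregated over the discovery
            // metrics message round trip (may lag behind local values).
            CacheMetrics cluster = cache.metrics();
            System.out.println("cluster hits=" + cluster.getCacheHits()
                + " misses=" + cluster.getCacheMisses());
        }
    }
}
```

Comparing the two outputs on each node can show whether the misses are being counted locally but lost in the cluster-wide aggregation, which is the kind of aggregation bug mentioned in the reply above.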