Hi Tolga,

GridDhtPartitionTopologyImpl contains the list of partitions that belong to a specific node. In the case of offheap caches, each partition (a concurrent map) contains a set of wrappers around the key->value pairs stored offheap. A wrapper holds the information needed to unswap a key or a value from offheap to the Java heap when required by a user application. So Ignite requires extra on-heap space for internal needs even when offheap mode is used.
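
To illustrate the point, here is a minimal sketch (assuming the Ignite 1.5 Java API; the cache name and values are just placeholders, not taken from your setup):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CacheMemoryMode;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class OffheapWrapperSketch {
        public static void main(String[] args) {
            // "DEFAULT" matches the cache name from your config; any name works.
            CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("DEFAULT");

            // Keys and values themselves are stored off-heap...
            ccfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
            ccfg.setOffHeapMaxMemory(0); // 0 = unlimited off-heap memory

            try (Ignite ignite = Ignition.start()) {
                // ...but every put still creates a small on-heap wrapper inside
                // the partition map that references the off-heap entry, so
                // on-heap usage grows with the number of entries.
                ignite.getOrCreateCache(ccfg).put(1, "value");
            }
        }
    }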

I would recommend trying to reduce IgniteSystemProperties.IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE. This is the size of the queue that keeps recently deleted entries, which is also maintained for internal needs.
https://apacheignite.readme.io/v1.5/docs/capacity-planning
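
For example, the property can be lowered before the node starts, either via a JVM argument or programmatically (a sketch; 100000 is just an illustrative value, tune it for your workload):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.IgniteSystemProperties;

    public class DeleteHistorySketch {
        public static void main(String[] args) {
            // Must be set before the node starts. The equivalent JVM argument is:
            //   -DIGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE=100000
            System.setProperty(
                IgniteSystemProperties.IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE,
                "100000"); // illustrative value; tune for your update/delete rate

            Ignition.start();
        }
    }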

BTW, could you explain what exactly the columns in your screenshot mean? What tool did you use to create the memory snapshot?

--
Denis


On 4/6/2016 3:02 PM, Tolga Kavukcu wrote:
Hi everyone,

I use a partitioned Ignite cache for very dynamic data, meaning there are many updates, deletes, and puts across roughly 5M rows.

So, to avoid GC pauses, I use off-heap mode. But when I analyze the heap, I see that the instance count and heap size of org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl are increasing constantly.

Please see the attached screenshots taken from a MAT (Eclipse Memory Analyzer) heap dump.

<bean class="org.apache.ignite.configuration.CacheConfiguration" name="DEFAULT">
    <property name="atomicityMode" value="ATOMIC" />
    <property name="cacheMode" value="PARTITIONED" />
    <property name="memoryMode" value="OFFHEAP_TIERED" />
    <property name="backups" value="1" />
    <property name="affinity">
        <bean class="org.apache.ignite.cache.affinity.fair.FairAffinityFunction">
            <constructor-arg index="0" type="int" value="6"/>
        </bean>
    </property>
    <property name="writeThrough" value="false" />
    <property name="writeBehindEnabled" value="false" />
</bean>
Thanks for helping out.
In total, 1.2 GB of heap is used by GridDhtPartitionTopologyImpl, which is almost equal to my data size. Do you think there is a problem with the configuration?

Tolga KAVUKÇU
