Thanks for the response, we are following the issue. Once it's done we will
keep in touch.

Regards.

On Fri, Apr 8, 2016 at 12:56 PM, Yakov Zhdanov <yzhda...@apache.org> wrote:

> Guys, I like the idea of a separate pool. But please note that the write-behind
> pool may be slower and may not be able to flush all cache updates to the DB. We
> will have to force system threads to help with this.
>
> Tolga, I know that Nick is currently working on async cache callbacks and
> he will be introducing a new pool to the system. I think you will be able to
> use it. The ticket to track is
> https://issues.apache.org/jira/browse/IGNITE-2004
>
> For now you can start by reviewing
> org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.
> I think we will need to refactor it so that the flusher logic executes in a
> separate pool instead of the dedicated threads we have now.
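>
> To sketch the direction (just an illustration, not the actual Ignite code; the
> pool size and the flush method below are placeholders), the flusher logic would
> run as tasks submitted to a shared pool instead of one dedicated thread per cache:
>
>     import java.util.concurrent.ExecutorService;
>     import java.util.concurrent.Executors;
>
>     public class WriteBehindPoolSketch {
>         public static void main(String[] args) {
>             // One shared pool instead of a dedicated flusher thread per cache.
>             ExecutorService writeBehindPool = Executors.newFixedThreadPool(4);
>
>             // Each cache would submit its flush cycle as a task rather than own a thread.
>             writeBehindPool.submit(() -> {
>                 // Placeholder for the flush logic that currently lives in
>                 // GridCacheWriteBehindStore's flusher threads.
>                 System.out.println("flush pending write-behind updates to the DB");
>             });
>
>             writeBehindPool.shutdown();
>         }
>     }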
>
> --Yakov
>
> 2016-04-08 11:11 GMT+03:00 Tolga Kavukcu <kavukcu.to...@gmail.com>:
>
>> Hi Denis,
>>
>> Yes, we don't need an expiration policy, so setting
>> CacheConfiguration.setEagerTtl to false solved this problem.
>>
>> While testing the whole system, we also found out that each cache with
>> writeBehind enabled causes a new thread to be created. Thinking about future
>> plans and possibilities, it's not a best practice to have an ever-increasing
>> number of threads within the JVM.
>>
>> We wonder if there is an option to use a thread pool for writeBehind
>> jobs. If there is not, we could implement it for the community. If you
>> can guide us on where to start, I would be glad :)
>>
>> Thanks.
>>
>> On Fri, Apr 8, 2016 at 1:54 AM, Denis Magda <dma...@gridgain.com> wrote:
>>
>>> Tolga,
>>>
>>> The "ttl-cleanup-worker" threads are used to eagerly remove expired cache
>>> entries. An expiration policy can be set either cache-wide in
>>> CacheConfiguration or applied later with cache.withExpiryPolicy(...) calls.
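>>>
>>> For example, per-operation expiry can be applied roughly like this (just a
>>> sketch; the cache name and TTL are arbitrary, and "ignite" is a started node):
>>>
>>>     import java.util.concurrent.TimeUnit;
>>>     import javax.cache.expiry.CreatedExpiryPolicy;
>>>     import javax.cache.expiry.Duration;
>>>     import org.apache.ignite.IgniteCache;
>>>
>>>     IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
>>>
>>>     // Entries written through this proxy expire 5 minutes after creation.
>>>     IgniteCache<Integer, String> withTtl =
>>>         cache.withExpiryPolicy(new CreatedExpiryPolicy(new Duration(TimeUnit.MINUTES, 5)));
>>>
>>>     withTtl.put(1, "expires in 5 minutes");
>>>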
>>> I failed to reproduce your case. What I did was start 30 caches and destroy
>>> all of them later. VisualVM showed that all "ttl-cleanup-worker" threads were
>>> stopped successfully.
>>> What Ignite version do you use?
>>>
>>> In any case, if you are not planning to use an expiration policy you can set
>>> CacheConfiguration.setEagerTtl to false and the TTL worker threads won't be
>>> created at all.
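>>>
>>> In code that is just (a sketch; the cache name matches your config):
>>>
>>>     import org.apache.ignite.configuration.CacheConfiguration;
>>>
>>>     CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("DEFAULT");
>>>     // With eager TTL disabled, no ttl-cleanup-worker thread is started for this cache.
>>>     ccfg.setEagerTtl(false);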
>>>
>>> Regards,
>>> Denis
>>>
>>>
>>> On 4/7/2016 3:43 PM, Tolga Kavukcu wrote:
>>>
>>> Hi Denis,
>>>
>>> The IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE parameter seems to have decreased
>>> heap usage. I will run longer tests to check heap behaviour.
>>>
>>> I also need help with the threads created by Ignite. I found out that Ignite
>>> creates a cleanup thread named "ttl-cleanup-worker" for each cache. But when
>>> a cache is destroyed, its cleanup thread is not removed; instead it just stays
>>> in the sleeping state.
>>>
>>> My first question is: is it possible to decrease the thread count with a
>>> configuration, like a "thread pool with x threads" shared by all caches?
>>> Secondly, are these "unremoved threads" expected behaviour?
>>>
>>> Thanks.
>>>
>>> On Thu, Apr 7, 2016 at 2:40 PM, Denis Magda <dma...@gridgain.com> wrote:
>>>
>>>> Hi Tolga,
>>>>
>>>> GridDhtPartitionTopologyImpl is created per cache. If you destroy a cache,
>>>> this object should be GCed. However, you should use cache.destroy() for that.
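>>>>
>>>> In other words, something along these lines (a sketch; "tempCache" is just an
>>>> example name and "ignite" is your started node):
>>>>
>>>>     import org.apache.ignite.IgniteCache;
>>>>
>>>>     IgniteCache<Integer, String> cache = ignite.getOrCreateCache("tempCache");
>>>>     // ... work with the cache ...
>>>>
>>>>     // Removes the cache cluster-wide, so its per-cache structures
>>>>     // (including GridDhtPartitionTopologyImpl) become eligible for GC.
>>>>     cache.destroy();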
>>>>
>>>> Please also make sure that you take "live set" heap dumps only. Try to trigger
>>>> a GC explicitly before taking the dump, because the collector may clean dead
>>>> objects much later depending on its heuristics.
>>>>
>>>> --
>>>> Denis
>>>>
>>>> On 4/7/2016 8:27 AM, Tolga Kavukcu wrote:
>>>>
>>>> Hi Denis,
>>>>
>>>> Thanks for the response. I will try the
>>>> IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE parameter.
>>>> The screenshots were taken from Eclipse Memory Analyzer, which opens and
>>>> analyses the heap dump. I understand the heap requirement for wrapping and
>>>> indexing off-heap entry positions, but I also found out that the number of
>>>> instances of
>>>> *org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl*
>>>> is constantly increasing within the JVM.
>>>>
>>>>
>>>> I also create and destroy many small caches during the lifecycle. Do you
>>>> think it is possible that destroyed caches leave a footprint in the heap?
>>>>
>>>> The previous screenshots were the dominator tree view of Memory Analyzer. I
>>>> attached them again with headers.
>>>>
>>>> You can see that each GridDhtPartitionTopologyImpl uses ~20 MB of heap, and
>>>> there are 72 live instances of GridDhtPartitionTopologyImpl.
>>>>
>>>> I also attached screenshots of the Memory Analyzer leak-suspect report, which
>>>> is taken periodically. You can see that the number of instances of
>>>> *GridDhtPartitionTopologyImpl* keeps increasing.
>>>>
>>>> Any ideas or suggestions?
>>>>
>>>> On Wed, Apr 6, 2016 at 6:00 PM, Denis Magda <dma...@gridgain.com> wrote:
>>>>
>>>>> Hi Tolga,
>>>>>
>>>>> GridDhtPartitionTopologyImpl contains the list of partitions that belong to a
>>>>> specific node. In the case of offheap caches, each partition (a concurrent map)
>>>>> contains a set of wrappers around the keys->values stored offheap. The wrapper
>>>>> holds the information needed to unswap a value or a key from offheap to the
>>>>> Java heap when required by a user application.
>>>>> So Ignite requires extra heap space for internal needs even when offheap mode
>>>>> is used.
>>>>>
>>>>> I would recommend trying to reduce
>>>>> IgniteSystemProperties.IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE. This is the
>>>>> size of the queue that keeps deleted entries for internal needs as well.
>>>>> https://apacheignite.readme.io/v1.5/docs/capacity-planning
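>>>>>
>>>>> It is a JVM system property, so you can pass it with -D on the command line or
>>>>> set it before starting the node. A sketch (the value and config path below are
>>>>> just placeholders; pick a value that suits your workload):
>>>>>
>>>>>     import org.apache.ignite.Ignite;
>>>>>     import org.apache.ignite.IgniteSystemProperties;
>>>>>     import org.apache.ignite.Ignition;
>>>>>
>>>>>     // Must be set before the node starts.
>>>>>     System.setProperty(
>>>>>         IgniteSystemProperties.IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE, "20000");
>>>>>
>>>>>     Ignite ignite = Ignition.start("ignite-config.xml");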
>>>>>
>>>>> BTW, could you explain what the columns in your screenshot mean exactly?
>>>>> What tool did you use to create the memory snapshot?
>>>>>
>>>>> --
>>>>> Denis
>>>>>
>>>>>
>>>>>
>>>>> On 4/6/2016 3:02 PM, Tolga Kavukcu wrote:
>>>>>
>>>>> Hi everyone,
>>>>>
>>>>> I use a partitioned Ignite cache for very dynamic data, meaning there are
>>>>> many updates, deletes and puts over around 5M rows.
>>>>>
>>>>> To avoid GC pauses I use off-heap mode. But when I analyse the heap, I see
>>>>> that the count and heap size of
>>>>> *org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl*
>>>>> are constantly increasing.
>>>>>
>>>>> Please see the attached screenshots taken from the MAT heap dump.
>>>>>
>>>>> <bean class="org.apache.ignite.configuration.CacheConfiguration" name="DEFAULT">
>>>>>     <property name="atomicityMode" value="ATOMIC" />
>>>>>     <property name="cacheMode" value="PARTITIONED" />
>>>>>     <property name="memoryMode" value="OFFHEAP_TIERED" />
>>>>>     <property name="backups" value="1" />
>>>>>     <property name="affinity">
>>>>>         <bean class="org.apache.ignite.cache.affinity.fair.FairAffinityFunction">
>>>>>             <constructor-arg index="0" type="int" value="6"/>
>>>>>         </bean>
>>>>>     </property>
>>>>>     <property name="writeThrough" value="false" />
>>>>>     <property name="writeBehindEnabled" value="false" />
>>>>> </bean>
>>>>>
>>>>> Thanks for helping out.
>>>>>
>>>>> In total, about 1.2 GB of heap is used by GridDhtPartitionTopologyImpl, which
>>>>> almost equals my data size. Do you think there is a problem with the
>>>>> configuration?
>>>>>
>>>>>
>>>>> *Tolga KAVUKÇU *
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> *Tolga KAVUKÇU *
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> *Tolga KAVUKÇU *
>>>
>>>
>>>
>>
>>
>> --
>>
>> *Tolga KAVUKÇU*
>>
>
>


-- 

*Tolga KAVUKÇU*
