Hi Yakov,

I am trying to process data based on the primary-node calculation from the
mapKeyToNode function of the cache's affinity function, so I expect no
remote access. I will try to summarize the problem into a reproducible code
piece.
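
In the meantime, here is a rough sketch of the pattern I am describing, not
my actual code; the cache name "myCache", the Long keys and the class name
are just placeholders:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

public class LocalPrimaryProcessing {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Long, String> cache = ignite.getOrCreateCache("myCache");

        Affinity<Long> affinity = ignite.affinity("myCache");
        ClusterNode localNode = ignite.cluster().localNode();

        for (long key = 0; key < 1000; key++) {
            // Only touch keys whose primary copy is mapped to this node,
            // so the subsequent cache.get() should not go to a remote node.
            if (localNode.equals(affinity.mapKeyToNode(key))) {
                String value = cache.get(key);
                // ... process value locally ...
            }
        }
    }
}

If filtering keys this way still shows remote gets, affinityRun() on
IgniteCompute is the other option I know of to collocate the processing with
the primary node instead of pulling the value over.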

Thanks for your help.
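
P.S. For reference, the write-behind tuning Val suggests further down the
thread would look roughly like this in CacheConfiguration; the values are
illustrative only, not recommendations:

import org.apache.ignite.configuration.CacheConfiguration;

// Hypothetical configuration; cache name and numbers are placeholders.
CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("myCache");
ccfg.setWriteThrough(true);              // write-behind builds on write-through
ccfg.setWriteBehindEnabled(true);
// ccfg.setCacheStoreFactory(...);       // a CacheStore factory is also needed
ccfg.setWriteBehindFlushSize(102400);    // larger flush size relaxes back-pressure
ccfg.setWriteBehindFlushFrequency(5000); // flush interval, in milliseconds
ccfg.setWriteBehindBatchSize(1024);      // entries passed to the store per batch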

On Tue, Feb 21, 2017 at 11:09 AM, Yakov Zhdanov <yzhda...@apache.org> wrote:

> Tolga, this looks like you do cache.get() and the key resides on a remote
> node. So, yes, the local node waits for a response from the remote node.
>
> --Yakov
>
> 2017-02-21 10:23 GMT+03:00 Tolga Kavukcu <kavukcu.to...@gmail.com>:
>
>> Hi Val, everyone,
>>
>> I was able to overcome the write-behind issue and can process extremely
>> fast on a single node. But when I switched to multi-node with partitioned
>> mode, my threads started waiting on some condition. There are 16 threads
>> processing data, and all of them wait at the same trace. Adding the thread
>> dump.
>>
>>  java.lang.Thread.State: WAITING (parking)
>> at sun.misc.Unsafe.park(Native Method)
>> - parking to wait for  <0x0000000711093898> (a org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture)
>> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>> at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>> at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:161)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:119)
>> at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.get0(GridDhtAtomicCache.java:487)
>> at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4629)
>> at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1386)
>> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:1118)
>> at com.intellica.evam.engine.cache.dao.ScenarioCacheDao.getCurrentScenarioRecord(ScenarioCacheDao.java:35)
>>
>> What might be the reason for this problem? Is it waiting for a response
>> from the other node?
>>
>> -Regards.
>>
>> On Fri, Feb 10, 2017 at 7:31 AM, Tolga Kavukcu <kavukcu.to...@gmail.com>
>> wrote:
>>
>>> Hi Val,
>>>
>>> Thanks for your tip. With enough memory I believe the write-behind queue
>>> can handle peak times.
>>>
>>> Thanks.
>>>
>>> Regards.
>>>
>>> On Thu, Feb 9, 2017 at 10:44 PM, vkulichenko <
>>> valentin.kuliche...@gmail.com> wrote:
>>>
>>>> Hi Tolga,
>>>>
>>>> There is a back-pressure mechanism to ensure that the node doesn't run
>>>> out of memory because of a too-long write-behind queue. You can try
>>>> increasing the writeBehindFlushSize property to relax it.
>>>>
>>>> -Val
>>>>
>>>>
>>>>
>>>> --
>>>> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Cache-write-behind-optimization-tp10527p10531.html
>>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>>
>>>
>>>
>>>
>>> --
>>>
>>> *Tolga KAVUKÇU*
>>>
>>
>>
>>
>> --
>>
>> *Tolga KAVUKÇU*
>>
>
>


-- 

*Tolga KAVUKÇU*
