[ https://issues.apache.org/jira/browse/IGNITE-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367301#comment-16367301 ]
ASF GitHub Bot commented on IGNITE-7717:
----------------------------------------

GitHub user Jokser opened a pull request:

    https://github.com/apache/ignite/pull/3536

    IGNITE-7717 Update discovery caches on topologies after exchanges merge

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/gridgain/apache-ignite ignite-7717

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/ignite/pull/3536.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #3536

----
commit 054a71827b0329793afd48720caa084a530a49f9
Author: Pavel Kovalenko <jokserfn@...>
Date:   2018-02-16T13:36:10Z

    IGNITE-7717 Update discovery caches on topologies after exchanges merge.

----

> testAssignmentAfterRestarts is flaky on TC
> ------------------------------------------
>
>                 Key: IGNITE-7717
>                 URL: https://issues.apache.org/jira/browse/IGNITE-7717
>             Project: Ignite
>          Issue Type: Bug
>          Components: cache
>    Affects Versions: 2.5
>            Reporter: Pavel Kovalenko
>            Assignee: Pavel Kovalenko
>            Priority: Major
>              Labels: MakeTeamcityGreenAgain
>
> The test hangs in an endless wait for partition map exchange:
> {noformat}
> [2018-02-15 13:41:46,180][WARN
> ][test-runner-#1%persistence.IgnitePdsCacheAssignmentNodeRestartsTest%][root]
> Waiting for topology map update
> [igniteInstanceName=persistence.IgnitePdsCacheAssignmentNodeRestartsTest0,
> cache=ignite-sys-cache, cacheId=-2100569601,
> topVer=AffinityTopologyVersion [topVer=11, minorTopVer=0],
> p=0, affNodesCnt=5, ownersCnt=3,
> affNodes=[126cbc54-1b9f-46b8-a978-b6c61ee00001,
> 0971749e-e313-4c57-99ab-40400b100000, 84f71ca6-6213-43a0-91ea-42eca5100002,
> 3d781b31-ed38-49c8-8875-bdfa2fa00003, 8f4bdf1c-a2c8-45e8-acd7-64bb45600004],
> owners=[0971749e-e313-4c57-99ab-40400b100000,
> 126cbc54-1b9f-46b8-a978-b6c61ee00001, 3d781b31-ed38-49c8-8875-bdfa2fa00003],
> topFut=GridDhtPartitionsExchangeFuture [firstDiscoEvt=DiscoveryEvent
> [evtNode=TcpDiscoveryNode [id=3d781b31-ed38-49c8-8875-bdfa2fa00003,
> addrs=[127.0.0.1], sockAddrs=[/127.0.0.1:47502], discPort=47502, order=9,
> intOrder=6, lastExchangeTime=1518691298151, loc=false,
> ver=2.5.0#19700101-sha1:00000000, isClient=false], topVer=9,
> nodeId8=0971749e, msg=Node joined: TcpDiscoveryNode
> [id=3d781b31-ed38-49c8-8875-bdfa2fa00003, addrs=[127.0.0.1],
> sockAddrs=[/127.0.0.1:47502], discPort=47502, order=9, intOrder=6,
> lastExchangeTime=1518691298151, loc=false, ver=2.5.0#19700101-sha1:00000000,
> isClient=false], type=NODE_JOINED, tstamp=1518691298244],
> crd=TcpDiscoveryNode [id=0971749e-e313-4c57-99ab-40400b100000,
> addrs=[127.0.0.1], sockAddrs=[/127.0.0.1:47500], discPort=47500, order=1,
> intOrder=1, lastExchangeTime=1518691306165, loc=true,
> ver=2.5.0#19700101-sha1:00000000, isClient=false],
> exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=9,
> minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
> [id=3d781b31-ed38-49c8-8875-bdfa2fa00003, addrs=[127.0.0.1],
> sockAddrs=[/127.0.0.1:47502], discPort=47502, order=9, intOrder=6,
> lastExchangeTime=1518691298151, loc=false, ver=2.5.0#19700101-sha1:00000000,
> isClient=false], topVer=9, nodeId8=0971749e, msg=Node joined:
> TcpDiscoveryNode [id=3d781b31-ed38-49c8-8875-bdfa2fa00003, addrs=[127.0.0.1],
> sockAddrs=[/127.0.0.1:47502], discPort=47502, order=9, intOrder=6,
> lastExchangeTime=1518691298151, loc=false, ver=2.5.0#19700101-sha1:00000000,
> isClient=false], type=NODE_JOINED, tstamp=1518691298244], nodeId=3d781b31,
> evt=NODE_JOINED], added=true, initFut=GridFutureAdapter
> [ignoreInterrupts=false, state=DONE, res=true, hash=2121252210], init=true,
> lastVer=GridCacheVersion [topVer=0, order=1518691297806, nodeOrder=0],
> partReleaseFut=PartitionReleaseFuture [topVer=AffinityTopologyVersion
> [topVer=9, minorTopVer=0], futures=[ExplicitLockReleaseFuture
> [topVer=AffinityTopologyVersion [topVer=9, minorTopVer=0], futures=[]],
> TxReleaseFuture [topVer=AffinityTopologyVersion [topVer=9, minorTopVer=0],
> futures=[]], AtomicUpdateReleaseFuture [topVer=AffinityTopologyVersion
> [topVer=9, minorTopVer=0], futures=[]], DataStreamerReleaseFuture
> [topVer=AffinityTopologyVersion [topVer=9, minorTopVer=0], futures=[]]]],
> exchActions=null, affChangeMsg=null, initTs=1518691298244,
> centralizedAff=false, forceAffReassignment=false, changeGlobalStateE=null,
> done=true, state=DONE, evtLatch=0, remaining=[], super=GridFutureAdapter
> [ignoreInterrupts=false, state=DONE, res=AffinityTopologyVersion [topVer=11,
> minorTopVer=0], hash=1135515588]], locNode=TcpDiscoveryNode
> [id=0971749e-e313-4c57-99ab-40400b100000, addrs=[127.0.0.1],
> sockAddrs=[/127.0.0.1:47500], discPort=47500, order=1, intOrder=1,
> lastExchangeTime=1518691306165, loc=true, ver=2.5.0#19700101-sha1:00000000,
> isClient=false]]
> {noformat}
> Note affNodesCnt=5 vs. ownersCnt=3 above: the exchange future is already
> DONE, yet two affinity nodes never become partition owners, so the wait
> loops forever. This happens because the discoCache (specifically its
> cacheGrpAffNodes map) ends up inconsistent on different nodes after the
> restart.
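To make the failing scenario concrete, below is a minimal sketch of the restart-and-wait pattern the test exercises, written against the Ignite test harness (GridCommonAbstractTest and its awaitPartitionMapExchange() helper). The class name, cache name, backup count, and node counts here are illustrative, not copied from the actual IgnitePdsCacheAssignmentNodeRestartsTest:

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;

/** Illustrative sketch only; not the actual test code. */
public class CacheAssignmentRestartSketch extends GridCommonAbstractTest {
    /** Enable native persistence so restarted nodes rejoin with their old data. */
    @Override protected IgniteConfiguration getConfiguration(String name) throws Exception {
        IgniteConfiguration cfg = super.getConfiguration(name);

        DataStorageConfiguration dsCfg = new DataStorageConfiguration();

        dsCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        cfg.setDataStorageConfiguration(dsCfg);

        return cfg;
    }

    /** Restart several nodes close together so their join exchanges may be merged. */
    public void testRestartSketch() throws Exception {
        Ignite ignite = startGrids(5);

        ignite.cluster().active(true);

        ignite.createCache(new CacheConfiguration<>("test-cache").setBackups(2));

        stopGrid(3);
        stopGrid(4);

        startGrid(3);
        startGrid(4);

        // With the bug present this wait may never return for some partitions:
        // owners stay at 3 while affinity expects 5 (as in the log above),
        // because cacheGrpAffNodes in the discovery cache disagrees between nodes.
        awaitPartitionMapExchange();
    }
}
{code}

The pull request above targets the merge path accordingly: once several exchanges are merged into one, the discovery caches attached to the resulting topology versions are updated, so every node derives the same affinity-node view for each cache group.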