Does anyone have any advice for this situation?
Thanks.

On Mon, May 21, 2018, 21:29 Vadym Vasiuk <[email protected]> wrote:

> Hi All,
>
> I also noticed that if I bounce one of the two nodes and the data ends up
> distributed as I described (one node holds all the PRIMARY entries and the
> second one holds only BACKUP entries), then after I restart the second
> node the data is distributed normally again.
> But what if I don't want to restart all nodes to make the data
> distribution normal?
>
> Thanks.
>
>
> On Mon, May 21, 2018 at 2:49 PM, Vadym Vasiuk <[email protected]> wrote:
>
>> Hi Stanislav,
>>
>> Yes, persistence is enabled and the version is 2.4.
>>
>> I have tried to update the topology after a node restart with the code
>> below, run from the client:
>>
>> Collection<ClusterNode> nodes = ignite.cluster().forServers().nodes();
>> ignite.cluster().setBaselineTopology(nodes);
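>>
>> A minimal alternative sketch, assuming all currently alive server nodes
>> should form the new baseline, is the topology-version overload of the
>> same method:
>>
>> // Set the baseline to the current cluster topology version.
>> ignite.cluster().setBaselineTopology(ignite.cluster().topologyVersion());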
>>
>> And after that I still have all primary entries stored on one node and
>> all backup entries on the node which I restarted.
>>
>> Maybe that way of updating the topology is incomplete?
>>
>> Thanks in advance for your reply.
>>
>> On Mon, May 21, 2018, 14:29 Stanislav Lukyanov <[email protected]>
>> wrote:
>>
>>> Hi,
>>>
>>>
>>>
>>> Do you have native persistence enabled?
>>>
>>> What is your Ignite version?
>>>
>>>
>>>
>>> If the Ignite version is 2.4+ and you have persistence, the problem is
>>> most likely with baseline topology.
>>>
>>> You need to make sure that the restarted node is in the baseline for the
>>> rebalance to happen, either by keeping its old consistentId or by updating
>>> the baseline.
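>>>
>>> For example, a fixed consistent ID can be set in the node configuration
>>> before start (a minimal sketch; "node1" is a placeholder value):
>>>
>>> IgniteConfiguration nodeCfg = new IgniteConfiguration();
>>> // Keep the same ID across restarts so the node stays in the baseline.
>>> nodeCfg.setConsistentId("node1");
>>> Ignite ignite = Ignition.start(nodeCfg);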
>>>
>>>
>>>
>>> Check out the documentation here:
>>> https://apacheignite.readme.io/docs/baseline-topology
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Stan
>>>
>>>
>>>
>>> From: Вадим Васюк <[email protected]>
>>> Sent: 20 May 2018 17:39
>>> To: [email protected]
>>> Subject: Cache not rebalanced after one node is restarted
>>>
>>>
>>>
>>> Hi All,
>>>
>>>
>>>
>>> I have two server nodes (with persistence enabled) and one client node
>>> started on my PC.
>>>
>>> From the client I activate the cluster, as shown below, then create a
>>> simple cache with the following configuration and add 10 entries to it:
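>>>
>>> For reference, the activation call from the client (a sketch using the
>>> Ignite 2.4 API):
>>>
>>> ignite.cluster().active(true); // activate the persistent cluster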
>>>
>>>
>>>
>>> CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>();
>>> cfg.setName(C); // C is a String constant holding the cache name
>>> cfg.setBackups(1);
>>> cfg.setRebalanceDelay(1000L);
>>> cfg.setCacheMode(CacheMode.PARTITIONED);
>>> cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>>> cfg.setRebalanceMode(CacheRebalanceMode.SYNC);
>>> IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
>>>
>>> // Utils.getRandonString(int) is the author's own helper.
>>> int base = cache.size(CachePeekMode.ALL) + 1;
>>> IntStream.range(base, base + 10)
>>>         .forEach(i -> cache.put(i, Utils.getRandonString(2)));
>>>
>>> I have a simple computation task that checks which entry went to which
>>> server (a sketch of it follows); here is its output after I inserted
>>> data into the cache:
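>>>
>>> A minimal sketch of such a check, assuming the same cache name constant
>>> C and using localEntries() to list the keys held on each server node:
>>>
>>> ignite.compute(ignite.cluster().forServers()).broadcast(() -> {
>>>     Ignite local = Ignition.localIgnite();
>>>     IgniteCache<Integer, String> c = local.cache(C);
>>>     StringBuilder primary = new StringBuilder();
>>>     StringBuilder backup = new StringBuilder();
>>>     // Collect the keys this node holds as primary and as backup.
>>>     for (Cache.Entry<Integer, String> e : c.localEntries(CachePeekMode.PRIMARY))
>>>         primary.append(e.getKey()).append(',');
>>>     for (Cache.Entry<Integer, String> e : c.localEntries(CachePeekMode.BACKUP))
>>>         backup.append(e.getKey()).append(',');
>>>     System.out.println("server name: " + local.cluster().localNode().id()
>>>             + "    cache entries: " + primary
>>>             + "    backup entries: " + backup);
>>> });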
>>>
>>> server name: 544a56b3-1364-420e-bdbb-380a1460df72    cache entries: 1,2,4,5,7,8    backup entries: 3,6,9,10
>>> server name: eb630559-c6b4-46a4-a98b-3ba2abfefce9    cache entries: 3,6,9,10    backup entries: 1,2,4,5,7,8
>>>
>>>
>>>
>>> As you can see, all entries are saved and each node holds backups of
>>> the other node's primary entries.
>>>
>>>
>>>
>>> However, after I restart one of these server nodes, I see the following
>>> data distribution:
>>>
>>> server name: eb630559-c6b4-46a4-a98b-3ba2abfefce9    cache entries: 1,2,3,4,5,6,7,8,9,10    backup entries: (none)
>>> server name: 544a56b3-1364-420e-bdbb-380a1460df72    cache entries: (none)    backup entries: 1,2,3,4,5,6,7,8,9,10
>>>
>>>
>>>
>>> As you can see, after one node is restarted the data is no longer
>>> distributed evenly.
>>>
>>> And from this moment I cannot make it redistribute.
>>>
>>>
>>>
>>> Could you please advise what I may be doing wrong?
>>>
>>>
>>>
>>> Thanks for your reply.
>>>
>>>
>>>
>>>
>>>
>>> --
>>>
>>> Sincerely Yours
>>> Vadim Vasyuk
>>>
>>>
>>>
>>
>
>
> --
> Sincerely Yours
> Vadim Vasyuk
>
