Hi Naveen,

I think just stopping updates is not enough to make a consistent snapshot
of the partition stores.
You must also make sure that all updates have been checkpointed to disk;
otherwise, to restore a valid snapshot you must copy the WAL as well as the
partition stores.
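(With default settings the partition stores live under $IGNITE_HOME/work/db
and the WAL under $IGNITE_HOME/work/db/wal plus wal/archive; the exact layout
depends on your storagePath and walPath configuration, so double-check the
paths on your nodes.)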
You can try deactivating the source cluster, making a copy of the partition
stores, and then activating it again: deactivation blocks updates and
checkpoints dirty pages, so the files on disk are consistent while you copy.
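Something like this, as a rough sketch for 2.8.x (the config path and class
name below are placeholders):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;

    public class ClusterSnapshotSketch {
        public static void main(String[] args) {
            // Start a client node against the source cluster
            // ("client-config.xml" is just an example path).
            try (Ignite ignite = Ignition.start("client-config.xml")) {
                // Deactivation blocks updates and flushes dirty pages,
                // leaving the partition stores on disk in a consistent state.
                ignite.cluster().active(false);

                // ... copy $IGNITE_HOME/work/db (and the binary_meta /
                // marshaller directories) from every node while the
                // cluster is inactive ...

                // Bring the source cluster back once the copy is done.
                ignite.cluster().active(true);
            }
        }
    }

The same can be done without code via the control script:
./control.sh --deactivate before the copy and ./control.sh --activate after.
You can also run ./control.sh --cache idle_verify on the source cluster first
to see whether primaries and backups already disagree before you copy
anything, and compare per-cache counts on the target with
IgniteCache#size(CachePeekMode.PRIMARY) after the restore.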


Thu, Sep 9, 2021 at 15:42, Naveen Kumar <naveen.band...@gmail.com>:

> Any pointers or clues on this issue?
>
> Is it an issue with the source cluster, or something to do with the target
> cluster?
> Does a clean restart of the source cluster help here in any
> way, e.g. inconsistent partitions becoming consistent?
>
> Thanks
>
> On Wed, Sep 8, 2021 at 12:12 PM Naveen Kumar <naveen.band...@gmail.com>
> wrote:
>
>> Hi
>>
>> We are using Ignite 2.8.1
>>
>> We are trying to build a new cluster by restoring the datastore from
>> another working cluster.
>> Steps followed:
>>
>> 1. Stopped the updates on the source cluster
>> 2. Took a copy of the datastore on each node and transferred it to the
>> destination node
>> 3. Started the nodes on the destination cluster
>>
>> After the cluster is activated, we could see a count mismatch for 2 caches
>> (around 15K records), and we found some warnings for these 2 caches.
>> The exact warning is attached:
>>
>> [GridDhtPartitionsExchangeFuture] Partition states validation has failed
>> for group: CL_CUSTOMER_KV, msg: Partitions cache sizes are inconsistent for
>> part 310: [lvign002b.xxxx.com=874, lvign001b.xxxx.com=875] etc..
>>
>> What could be the reason for this count mismatch?
>>
>> Thanks
>>
>> --
>> Thanks & Regards,
>> Naveen Bandaru
>>
>
>
> --
> Thanks & Regards,
> Naveen Bandaru
>
