> While iterating over the cache, data is removed from the cache

Sumit, as I understand it, you read data while you are also removing it, so it
is not clear what the expected result is.

On Wed, Feb 2, 2022 at 10:28 AM Sumit Deshinge <sumit.deshi...@gmail.com>
wrote:

> Thank you Surinder and Pavel. I will give this approach a try.
> But even with the iterator approach, when I refresh the iterator after it
> reaches the last record (i.e., obtain a new iterator), it does not return all
> the entries, as described in the steps in my first email.
>
> On Fri, Jan 28, 2022 at 4:08 PM Pavel Tupitsyn <ptupit...@apache.org>
> wrote:
>
>> The cache iterator does not guarantee that you'll see all entries if there
>> are concurrent updates; I think you are facing a race condition.
>> Please try ContinuousQuery as Surinder suggests; it will catch all data
>> changes.
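A minimal sketch of the approach Pavel and Surinder describe, assuming two Integer/String caches (the names `source` and `target`, and the key/value types, are placeholders, not from the original thread):

```java
import javax.cache.Cache;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryUpdatedListener;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class CacheDrainSketch {
    static void drainWithContinuousQuery(IgniteCache<Integer, String> source,
                                         IgniteCache<Integer, String> target) {
        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

        // Fired for every create/update in the source cache.
        qry.setLocalListener((CacheEntryUpdatedListener<Integer, String>) events -> {
            for (CacheEntryEvent<? extends Integer, ? extends String> e : events) {
                target.put(e.getKey(), e.getValue()); // 1. write entry to the other cache
                source.remove(e.getKey());            // 2. remove entry from the source cache
            }
        });

        // Deliver the entries that already existed when the listener was
        // registered; without this, only future updates are observed.
        qry.setInitialQuery(new ScanQuery<>());

        QueryCursor<Cache.Entry<Integer, String>> cur = source.query(qry);
        for (Cache.Entry<Integer, String> e : cur) { // results of the initial query
            target.put(e.getKey(), e.getValue());
            source.remove(e.getKey());
        }
        // Keep 'cur' open for as long as events should be received;
        // closing the cursor deregisters the listener.
    }
}
```

Unlike a plain iterator, the continuous query delivers every update event, so concurrent putAll calls from the thin clients cannot slip past the reader.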
>>
>>
>> On Fri, Jan 28, 2022 at 1:32 PM Surinder Mehra <redni...@gmail.com>
>> wrote:
>>
>>> Just curious, why can't we use a continuous query here with an appropriate
>>> event type to write to another cache? Your listener would then do two things:
>>> 1. Write the entry to the other cache
>>> 2. Remove the entry from the source cache
>>>
>>> Just an idea, please correct me if I am wrong.
>>>
>>> On Fri, Jan 28, 2022 at 3:49 PM Sumit Deshinge <sumit.deshi...@gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> We are running Apache Ignite 2.11 with cache write synchronization mode
>>>> FULL_SYNC and cache mode REPLICATED.
>>>>
>>>> Our use case is :
>>>>
>>>> 1. *Multiple thin clients are adding data* into a cache using putAll
>>>> operations.
>>>> 2. *Simultaneously, the server is reading the data* using a server-side
>>>> cache iterator.
>>>> 3. *While iterating over the cache, each entry is removed from the cache
>>>> and added to a new cache inside a transaction*, i.e. a transaction with
>>>> remove and put operations. The transaction uses *PESSIMISTIC concurrency
>>>> and REPEATABLE_READ isolation*.
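The transactional move in step 3 can be sketched as follows (a simplified sketch; the cache names `source` and `target` and the Integer/String types are placeholders, not from the original report):

```java
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;

import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;

public class TransactionalMoveSketch {
    static void moveEntries(Ignite ignite,
                            IgniteCache<Integer, String> source,
                            IgniteCache<Integer, String> target) {
        // Server-side iteration; IgniteCache is Iterable over its entries.
        for (Cache.Entry<Integer, String> e : source) {
            try (Transaction tx = ignite.transactions()
                    .txStart(PESSIMISTIC, REPEATABLE_READ)) {
                target.put(e.getKey(), e.getValue()); // add to the new cache
                source.remove(e.getKey());            // remove from the source cache
                tx.commit();
            }
        }
    }
}
```

With PESSIMISTIC/REPEATABLE_READ, the first read or write of each key inside the transaction acquires an entry lock; the alternative observed to work is simply passing OPTIMISTIC and SERIALIZABLE to txStart instead.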
>>>>
>>>> But we are seeing a few missing entries on the server side, i.e. the
>>>> server is not able to read all the data put by the clients. E.g. in one
>>>> run, the thin clients put 5000 entries in total, but the server was able
>>>> to read only 4999 entries, so 1 entry was missed.
>>>>
>>>> *Another observation: if we remove the transaction in the second step
>>>> above, or use an optimistic transaction with serializable isolation, the
>>>> issue is not observed*.
>>>>
>>>> What could be the problem in this use case with pessimistic concurrency
>>>> and the repeatable_read isolation level? This is particularly important,
>>>> as this configuration is resulting in data loss.
>>>>
>>>> --
>>>> Regards,
>>>> Sumit Deshinge
>>>>
>>>>
>
> --
> Regards,
> Sumit Deshinge
>
>
