So we have the following situation:
* Put 5000 unique keys with putAll
* Use the cache iterator, observe fewer than 5000 keys

Is that correct?
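
If so, a minimal sketch along these lines should reproduce it (thin client
API; the cache name and value type here are placeholders, not from your
setup):

import java.util.HashMap;
import java.util.Map;
import javax.cache.Cache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;
import org.apache.ignite.lang.IgniteUuid;

public class PutAllRepro {
    public static void main(String[] args) {
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("127.0.0.1:10800");

        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<IgniteUuid, String> cache =
                client.getOrCreateCache("sourceCache");

            // 5000 unique keys in a single putAll batch.
            Map<IgniteUuid, String> batch = new HashMap<>();
            for (int i = 0; i < 5000; i++)
                batch.put(IgniteUuid.randomUuid(), "value-" + i);

            cache.putAll(batch);

            // Count the entries; with no concurrent writers or removers
            // this should print 5000.
            int count = 0;
            try (QueryCursor<Cache.Entry<IgniteUuid, String>> cur =
                     cache.query(new ScanQuery<IgniteUuid, String>())) {
                for (Cache.Entry<IgniteUuid, String> ignored : cur)
                    count++;
            }

            System.out.println("Entries seen: " + count);
        }
    }
}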

On Thu, Feb 3, 2022 at 7:53 PM Sumit Deshinge <sumit.deshi...@gmail.com>
wrote:

> Yes, I am sure of that, because the keys are generated using Ignite UUID,
> which is internally based on the hostname, and all the clients are hosted
> on machines with unique hostnames.
>
> On Wed, Feb 2, 2022 at 3:23 PM Pavel Tupitsyn <ptupit...@apache.org>
> wrote:
>
>> Are you sure that all entry keys are unique?
>> E.g. if you do 5000 puts but some keys are the same, the result will be
>> fewer than 5000 entries.
>>
>> On Wed, Feb 2, 2022 at 12:27 PM Sumit Deshinge <sumit.deshi...@gmail.com>
>> wrote:
>>
>>> No, the cache does not have those entries. Somehow the number of entries
>>> returned is less than the number of entries put by all the thin clients.
>>>
>>> On Wed, Feb 2, 2022 at 1:33 PM Pavel Tupitsyn <ptupit...@apache.org>
>>> wrote:
>>>
>>>> Do you mean that the cache has some entries, but the iterator does not
>>>> return them?
>>>>
>>>> On Wed, Feb 2, 2022 at 10:38 AM Sumit Deshinge <
>>>> sumit.deshi...@broadcom.com> wrote:
>>>>
>>>>> Hi Pavel,
>>>>>
>>>>> I am trying to move the data from one cache (which I am iterating
>>>>> over) to another cache in a transaction.
>>>>> When the iterator reports no further elements, I get a new iterator
>>>>> after a few seconds, to check whether any new data is available.
>>>>>
>>>>> In this process, I am missing one or two entries. But if I remove the
>>>>> transaction, or use optimistic+serializable instead of
>>>>> pessimistic+repeatable_read as the transaction type, then this data
>>>>> loss is not observed with the same steps.
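>>>>>
>>>>> The read loop looks roughly like this (a simplified sketch, not the
>>>>> exact code; moveInTx() stands for the transactional put+remove
>>>>> described in my first email below, and the cache name is a
>>>>> placeholder):
>>>>>
>>>>> IgniteCache<IgniteUuid, String> source = ignite.cache("sourceCache");
>>>>>
>>>>> while (!Thread.currentThread().isInterrupted()) {
>>>>>     // Each pass of this loop takes a fresh iterator.
>>>>>     for (Cache.Entry<IgniteUuid, String> entry : source)
>>>>>         moveInTx(entry); // put into target cache, remove from source
>>>>>
>>>>>     // Iterator exhausted: wait a few seconds before the next pass
>>>>>     // to pick up anything written in the meantime.
>>>>>     try {
>>>>>         Thread.sleep(5000);
>>>>>     } catch (InterruptedException e) {
>>>>>         Thread.currentThread().interrupt();
>>>>>     }
>>>>> }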
>>>>>
>>>>>
>>>>> On Wed, Feb 2, 2022 at 1:00 PM Pavel Tupitsyn <ptupit...@apache.org>
>>>>> wrote:
>>>>>
>>>>>> > While iterating over the cache, data is removed from the cache
>>>>>>
>>>>>> Sumit, as I understand it, you read data while you also remove it, so
>>>>>> it is not clear what the expectation is.
>>>>>>
>>>>>> On Wed, Feb 2, 2022 at 10:28 AM Sumit Deshinge <
>>>>>> sumit.deshi...@gmail.com> wrote:
>>>>>>
>>>>>>> Thank you Surinder and Pavel. I will give this approach a try.
>>>>>>> But even in the case of the iterator, when I refresh the iterator
>>>>>>> once it reaches the last record, i.e. get a new iterator, it does not
>>>>>>> return all the entries, as described in the steps in my first email.
>>>>>>>
>>>>>>> On Fri, Jan 28, 2022 at 4:08 PM Pavel Tupitsyn <ptupit...@apache.org>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> The cache iterator does not guarantee that you'll see all entries if
>>>>>>>> there are concurrent updates; I think you are facing a race condition.
>>>>>>>> Please try ContinuousQuery as Surinder suggests; it will catch all
>>>>>>>> data changes.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Jan 28, 2022 at 1:32 PM Surinder Mehra <redni...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Just curious: why can't we use a continuous query here, with the
>>>>>>>>> appropriate event type, to write to another cache? Your listener
>>>>>>>>> would then do two things:
>>>>>>>>> 1. Write the entry to another cache
>>>>>>>>> 2. Remove the entry from the source cache
>>>>>>>>>
>>>>>>>>> Just an idea, please correct me if I am wrong. A rough sketch below.
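>>>>>>>>>
>>>>>>>>> Something along these lines, perhaps (an untested sketch; cache
>>>>>>>>> names and value types are placeholders, and 'ignite' is the
>>>>>>>>> server node's Ignite instance):
>>>>>>>>>
>>>>>>>>> import javax.cache.Cache;
>>>>>>>>> import javax.cache.event.CacheEntryEvent;
>>>>>>>>> import javax.cache.event.EventType;
>>>>>>>>> import org.apache.ignite.IgniteCache;
>>>>>>>>> import org.apache.ignite.cache.query.ContinuousQuery;
>>>>>>>>> import org.apache.ignite.cache.query.QueryCursor;
>>>>>>>>> import org.apache.ignite.cache.query.ScanQuery;
>>>>>>>>> import org.apache.ignite.lang.IgniteUuid;
>>>>>>>>>
>>>>>>>>> IgniteCache<IgniteUuid, String> source = ignite.cache("sourceCache");
>>>>>>>>> IgniteCache<IgniteUuid, String> target = ignite.cache("targetCache");
>>>>>>>>>
>>>>>>>>> ContinuousQuery<IgniteUuid, String> qry = new ContinuousQuery<>();
>>>>>>>>>
>>>>>>>>> // Initial query returns entries that existed before we subscribed.
>>>>>>>>> qry.setInitialQuery(new ScanQuery<IgniteUuid, String>());
>>>>>>>>>
>>>>>>>>> // Move every newly created entry to the target cache.
>>>>>>>>> qry.setLocalListener(events -> {
>>>>>>>>>     for (CacheEntryEvent<? extends IgniteUuid, ? extends String> e : events) {
>>>>>>>>>         if (e.getEventType() == EventType.CREATED) {
>>>>>>>>>             target.put(e.getKey(), e.getValue());
>>>>>>>>>             source.remove(e.getKey());
>>>>>>>>>         }
>>>>>>>>>     }
>>>>>>>>> });
>>>>>>>>>
>>>>>>>>> QueryCursor<Cache.Entry<IgniteUuid, String>> cur = source.query(qry);
>>>>>>>>>
>>>>>>>>> // Drain the pre-existing entries from the initial query; keep the
>>>>>>>>> // cursor open afterwards, since closing it stops the listener.
>>>>>>>>> for (Cache.Entry<IgniteUuid, String> e : cur) {
>>>>>>>>>     target.put(e.getKey(), e.getValue());
>>>>>>>>>     source.remove(e.getKey());
>>>>>>>>> }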
>>>>>>>>>
>>>>>>>>> On Fri, Jan 28, 2022 at 3:49 PM Sumit Deshinge <
>>>>>>>>> sumit.deshi...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> We are running Apache Ignite 2.11 with the cache configured with
>>>>>>>>>> FULL_SYNC write synchronization and REPLICATED mode.
>>>>>>>>>>
>>>>>>>>>> Our use case is:
>>>>>>>>>>
>>>>>>>>>> 1. *Multiple thin clients are adding data* into a cache using the
>>>>>>>>>> putAll operation.
>>>>>>>>>> 2. *Simultaneously, the server is reading the data* using the
>>>>>>>>>> server-side cache iterator.
>>>>>>>>>> 3. *While iterating over the cache, data is removed from the cache
>>>>>>>>>> and added into a new cache using a transaction*, i.e. a transaction
>>>>>>>>>> with remove and put operations, sketched below. We use transaction
>>>>>>>>>> *concurrency - pessimistic, and isolation level - repeatable_read*.
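>>>>>>>>>>
>>>>>>>>>> A simplified sketch of step 3 (cache names and value types are
>>>>>>>>>> placeholders, exception handling omitted, and 'ignite' is the
>>>>>>>>>> server node's Ignite instance):
>>>>>>>>>>
>>>>>>>>>> import javax.cache.Cache;
>>>>>>>>>> import org.apache.ignite.IgniteCache;
>>>>>>>>>> import org.apache.ignite.lang.IgniteUuid;
>>>>>>>>>> import org.apache.ignite.transactions.Transaction;
>>>>>>>>>> import org.apache.ignite.transactions.TransactionConcurrency;
>>>>>>>>>> import org.apache.ignite.transactions.TransactionIsolation;
>>>>>>>>>>
>>>>>>>>>> IgniteCache<IgniteUuid, String> source = ignite.cache("sourceCache");
>>>>>>>>>> IgniteCache<IgniteUuid, String> target = ignite.cache("targetCache");
>>>>>>>>>>
>>>>>>>>>> for (Cache.Entry<IgniteUuid, String> entry : source) {
>>>>>>>>>>     // Move each entry atomically: put into the target cache and
>>>>>>>>>>     // remove from the source cache in one transaction.
>>>>>>>>>>     try (Transaction tx = ignite.transactions().txStart(
>>>>>>>>>>             TransactionConcurrency.PESSIMISTIC,
>>>>>>>>>>             TransactionIsolation.REPEATABLE_READ)) {
>>>>>>>>>>         target.put(entry.getKey(), entry.getValue());
>>>>>>>>>>         source.remove(entry.getKey());
>>>>>>>>>>         tx.commit();
>>>>>>>>>>     }
>>>>>>>>>> }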
>>>>>>>>>>
>>>>>>>>>> But we are seeing a few missing entries on the server side, i.e.
>>>>>>>>>> the server is not able to read all the data put by the clients.
>>>>>>>>>> E.g. in one of the runs, the thin clients together put 5000
>>>>>>>>>> entries, but the server was able to read only 4999 entries: 1
>>>>>>>>>> entry was never seen by the server.
>>>>>>>>>>
>>>>>>>>>> *Another observation is that if we remove the transaction in the
>>>>>>>>>> third step above, or use an optimistic transaction with the
>>>>>>>>>> serializable isolation level, then this issue is not observed*.
>>>>>>>>>>
>>>>>>>>>> What could be the problem in this use case with pessimistic
>>>>>>>>>> concurrency and the repeatable_read isolation level? This is
>>>>>>>>>> particularly important, as this configuration is resulting in
>>>>>>>>>> data loss.
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Regards,
>>>>>>>>>> Sumit Deshinge
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Regards,
>>>>>>> Sumit Deshinge
>>>>>>>
>>>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> Sumit Deshinge
>>>>>
>>>>> R&D Engineer | Symantec Enterprise Division
>>>>>
>>>>> Broadcom Software
>>>>>
>>>>> Email: Sumit Deshinge <sumit.deshi...@broadcom.com>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>> --
>>> Regards,
>>> Sumit Deshinge
>>>
>>>
>
> --
> Regards,
> Sumit Deshinge
>
>
