[
https://issues.apache.org/jira/browse/GEODE-5518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16570927#comment-16570927
]
Jason Huynh commented on GEODE-5518:
------------------------------------
How is the special process getting these records and how is it iterating over
the records and deleting them?
Shouldn't this discussion be taking place on the user list instead of in a
ticket? There isn't much information here for a developer to act on at the
moment...
> some records in the region are not fetched when executing fetch query
> ---------------------------------------------------------------------
>
> Key: GEODE-5518
> URL: https://issues.apache.org/jira/browse/GEODE-5518
> Project: Geode
> Issue Type: Bug
> Components: core, querying
> Reporter: yossi reginiano
> Priority: Major
>
> hi all,
> we are using Geode 1.4 and facing the following:
> we have started to adopt the putAll function, which accepts a bulk of
> records and persists them into the region.
> we have noticed that the process that fetches the records from the region
> (executing a fetch query in bulks of 1000) occasionally misses a record or
> two, which leaves those records behind in the region as "zombies", because
> the current index is now greater than those records' indexes.
> this started to happen only when we began using the putAll function; prior
> to that we did not face any such issue.
> also, when we use putAll with only one record at a time, it works fine.
> has anybody faced this?
> is there some constraint on the number of records that can be passed to the
> putAll function?
> thanks in advance
>
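One way to narrow the problem down would be to split each large putAll into smaller, fixed-size batches and check whether the missing records correlate with batch size. A minimal sketch of that batching idea, assuming the entries arrive as a plain java.util.Map; the `partition` helper and the batch size of 1000 are hypothetical, while `Region.putAll(Map)` itself is the standard Geode API call (shown only in a comment, since it needs a running cluster):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PutAllBatcher {

    // Split a large map into batches of at most batchSize entries,
    // preserving insertion order, so each batch can be passed to
    // Region.putAll separately.
    static <K, V> List<Map<K, V>> partition(Map<K, V> entries, int batchSize) {
        List<Map<K, V>> batches = new ArrayList<>();
        Map<K, V> current = new LinkedHashMap<>();
        for (Map.Entry<K, V> e : entries.entrySet()) {
            current.put(e.getKey(), e.getValue());
            if (current.size() == batchSize) {
                batches.add(current);
                current = new LinkedHashMap<>();
            }
        }
        if (!current.isEmpty()) {
            batches.add(current); // final, possibly smaller, batch
        }
        return batches;
    }

    public static void main(String[] args) {
        // Hypothetical sample data standing in for the real records.
        Map<String, String> data = new LinkedHashMap<>();
        for (int i = 0; i < 2500; i++) {
            data.put("key" + i, "value" + i);
        }
        List<Map<String, String>> batches = partition(data, 1000);
        System.out.println(batches.size());        // 3
        System.out.println(batches.get(2).size()); // 500
        // Against a live cluster, each batch would then be applied with:
        //   region.putAll(batch);
    }
}
```

If the problem disappears at some batch size, that would at least bound where the records are being dropped; otherwise, as the comment above suggests, details of the fetching/deleting process are needed to go further.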
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)