Hi!
Tried, but the issue still exists.
https://monosnap.com/file/UUbYX4RUyXPZyPxKx97hTwSUSvNtye
On 01/10/2018 13:06, Ilya Kasnacheev wrote:
Hello!
Setting BufferSize to 10000 seems to fix your reproducer's problem:
new ContinuousQuery<string, byte[]>(new CacheListener(skippedItems, doubleDelete), true) { BufferSize = 10000 }
but the general recommendation is to avoid doing anything slow or
synchronous in a Continuous Query listener. It is better to offload any
non-trivial processing to other threads that run asynchronously.
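The offloading advice above can be sketched as follows. This is a minimal, Ignite-free illustration of the pattern, written in Java for brevity (the reproducer in this thread is C#, but the idea is identical): the listener callback only enqueues the event and returns immediately, while a dedicated worker thread drains the queue and does the actual processing, so Ignite's notification thread is never blocked. The `Event` record and the method names here are hypothetical stand-ins, not Ignite API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class OffloadedListener {
    // Hypothetical stand-in for a cache update notification (key + value).
    record Event(String key, byte[] value) {}

    private final BlockingQueue<Event> queue = new LinkedBlockingQueue<>();
    private final Thread worker;
    final List<String> processedKeys = new ArrayList<>();

    OffloadedListener() {
        worker = new Thread(() -> {
            try {
                while (true) {
                    Event e = queue.take();       // blocks until an event arrives
                    if (e.key() == null) return;  // poison pill: stop the worker
                    process(e);                   // slow work happens here, off the callback thread
                }
            } catch (InterruptedException ignored) {
            }
        });
        worker.start();
    }

    // This is all the Continuous Query callback should do: enqueue and return.
    void onUpdated(Event e) {
        queue.add(e);
    }

    private void process(Event e) {
        // Simulate non-trivial processing (logging, I/O, etc.).
        processedKeys.add(e.key());
    }

    void shutdown() throws InterruptedException {
        queue.add(new Event(null, null)); // poison pill
        worker.join();                    // join() makes processedKeys safely visible
    }

    public static void main(String[] args) throws InterruptedException {
        OffloadedListener l = new OffloadedListener();
        l.onUpdated(new Event("a", new byte[0]));
        l.onUpdated(new Event("b", new byte[0]));
        l.shutdown();
        System.out.println(l.processedKeys); // prints [a, b]
    }
}
```

With this shape the callback's cost is one queue insertion regardless of how slow the downstream processing (or logging) is, which is exactly what avoids backing up the Continuous Query buffer.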
Regards,
--
Ilya Kasnacheev
Sat, 29 Sep 2018 at 5:18, Alew <ale...@gmail.com>:
Hi, attached a reproducer.
Turning off logging fixes the issue, but slow logging is not the only
cause: adding more nodes to the cluster leads to the same behaviour.
Who is responsible for the behaviour? Is it .NET, Java, bad docs or me?
On 24/09/2018 20:03, Alew wrote:
> Hi!
>
> I need a way to consistently get all entries in a replicated cache
> and then all updates to them while the application is running.
>
> I use ContinuousQuery for it.
>
> var cursor = cache.QueryContinuous(
>         new ContinuousQuery<string, byte[]>(new CacheListener(), true),
>         new ScanQuery<string, byte[]>())
>     .GetInitialQueryCursor();
>
> But I have some issues with it.
>
> Sometimes the cursor returns only part of the entries in the cache,
> and the cache listener does not deliver the missing ones either.
>
> Sometimes the cursor and the cache listener both return the same entry.
>
> The issue is somehow related to the amount of work the nodes have to
> do and to the amount of time between the start of the publisher node
> and the start of the subscriber node.
>
> There are more problems if nodes start at the same time.
>
> Is there a reliable way to do it without controlling the order of
> node startup and the pauses between them?
>
>