Thanks for reporting this issue!

I don't understand why we should disallow calling clear().

One way it could be re-implemented (see the sketch after the list) is:
1. acquire write locks on all segments;
2. clear them;
3. reset size to 0;
4. release locks.
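
A minimal sketch of that approach, where SegmentedMap, Segment, entries
and the lock layout are all assumptions for illustration, not the actual
Ignite internals:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.atomic.AtomicLong;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    /** Hypothetical stand-in for the real map; all names are assumptions. */
    final class SegmentedMap<K, V> {
        /** Per-segment state: a plain map guarded by a read-write lock. */
        static final class Segment<K, V> {
            final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
            final Map<K, V> entries = new HashMap<>();
        }

        private final Segment<K, V>[] segments;

        /** Shared size counter (AtomicLong so it can be reset atomically). */
        private final AtomicLong size = new AtomicLong();

        @SuppressWarnings("unchecked")
        SegmentedMap(int concurrency) {
            segments = new Segment[concurrency];

            for (int i = 0; i < concurrency; i++)
                segments[i] = new Segment<>();
        }

        /** clear() re-implemented per the four steps above. */
        public void clear() {
            // 1. Acquire write locks on all segments, in a fixed order
            //    to avoid deadlocks with other whole-map operations.
            for (Segment<K, V> s : segments)
                s.lock.writeLock().lock();

            try {
                // 2. Clear every segment.
                for (Segment<K, V> s : segments)
                    s.entries.clear();

                // 3. Reset the size counter while all writers are blocked.
                size.set(0);
            }
            finally {
                // 4. Release the locks.
                for (Segment<K, V> s : segments)
                    s.lock.writeLock().unlock();
            }
        }
    }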

Another approach is to count, inside
ConcurrentLinkedHashMap.Segment.clear(), how many entries were actually
removed and then call size.addAndGet(...) with the negated count (see
the sketch below).
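
Building on the hypothetical Segment above, the second approach could
look like this (again just a sketch, not the real code):

    // In Segment: clear() reports how many entries it actually removed,
    // holding only its own write lock.
    int clear() {
        lock.writeLock().lock();

        try {
            int removed = entries.size();

            entries.clear();

            return removed;
        }
        finally {
            lock.writeLock().unlock();
        }
    }

    // In the enclosing map: subtract what each segment removed.
    public void clear() {
        for (Segment<K, V> s : segments)
            size.addAndGet(-s.clear());
    }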

In both cases you'll have to replace LongAdder with AtomicLong.
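
For reference (this is standard java.util.concurrent.atomic behavior):
LongAdder only offers add() plus a reset() that is safe only when there
are no concurrent updates, while AtomicLong gives you both the atomic
set() needed by the first approach and the addAndGet() needed by the
second:

    import java.util.concurrent.atomic.AtomicLong;
    import java.util.concurrent.atomic.LongAdder;

    public class CounterDemo {
        public static void main(String[] args) {
            LongAdder adder = new LongAdder();
            adder.add(-5);          // supported
            // adder.addAndGet(-5); // no such method on LongAdder
            // adder.reset() is only safe with no concurrent updates

            AtomicLong size = new AtomicLong();
            size.set(0);            // approach 1: reset under segment locks
            size.addAndGet(-5);     // approach 2: subtract the cleared count

            System.out.println(size.get()); // prints -5
        }
    }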

On Tue, Jul 24, 2018 at 4:03 PM, Ilya Kasnacheev <ilya.kasnach...@gmail.com>
wrote:

> Hello igniters!
>
> So I was working on a fix for
> https://issues.apache.org/jira/browse/IGNITE-9056
> The test flakiness turned out to be caused by our ConcurrentLinkedHashMap
> (and its tautological cousin GridBoundedConcurrentLinkedHashMap) being
> broken :(
>
> When you call clear(), its size counter is not updated. So sizex() will
> return the old size after clear(), and if maxCnt is set, after several
> clear()s the map will immediately evict entries as soon as they are
> inserted, keeping the map size at 0.
>
> This is scary since indexing internals make intense use of
> ConcurrentLinkedHashMaps.
>
> My suggestion for the fix is to avoid ever calling clear(), making it
> throw UnsupportedOperationException, and to recreate/replace the map
> instead of clear()ing it. Unless somebody is going to stand up and fix
> ConcurrentLinkedHashMap.clear() properly. Frankly speaking, I'm afraid
> of touching this code in any non-trivial way.
>
> --
> Ilya Kasnacheev
>



-- 
Best regards,
Ilya
