Hello, Anton. Thanks for the PoC.
> finds correct values according to LWW strategy

Can you please clarify what the LWW strategy is?

On Wed, 03/04/2019 at 17:19 +0300, Anton Vinogradov wrote:
> Ilya,
>
> This is impossible due to a conflict between some isolation levels and
> get-with-consistency expectations.
> Basically, it's impossible to perform get-with-consistency after another
> get in a !READ_COMMITTED transaction.
> The problem here is that the value should be cached according to the
> isolation level, so get-with-consistency is restricted in this case.
> We have the same problem in the get-with-consistency-after-put case, so
> there is a restriction there too.
> So, the order matters. :)
>
> See OperationRestrictionsCacheConsistencyTest [1] for details.
>
> [1]
> https://github.com/apache/ignite/blob/8b0b0c3e1bde93ff9c4eb5667d794dd64a8b06f0/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/consistency/OperationRestrictionsCacheConsistencyTest.java
>
> On Wed, Apr 3, 2019 at 4:54 PM Ilya Kasnacheev <ilya.kasnach...@gmail.com>
> wrote:
>
> > Hello!
> >
> > Sounds useful, especially for new feature development.
> >
> > Can you do a run of all tests with cache.forConsistency() to see if
> > there are cases that fail?
> >
> > Regards,
> > --
> > Ilya Kasnacheev
> >
> > On Wed, Apr 3, 2019 at 16:17, Anton Vinogradov <a...@apache.org> wrote:
> >
> > > Igniters,
> > >
> > > Sometimes, in real deployments, we face an inconsistent state across
> > > the topology.
> > > This means that somehow we have different values for the same key on
> > > different nodes.
> > > This is an extremely rare situation, but when you have thousands of
> > > terabytes of data, it can be a real problem.
> > >
> > > Apache Ignite provides a consistency guarantee: each affinity node
> > > should contain the same value for the same key, at least eventually.
> > > But this guarantee can be violated because of bugs; see IEP-31 [1]
> > > for details.
> > >
> > > So, I created the issue [2] to handle such situations.
> > > The main idea is to have a special cache.withConsistency() proxy that
> > > allows checking and fixing inconsistency on get operations.
> > >
> > > I've created PR [3] with the following improvements (when the
> > > cache.withConsistency() proxy is used):
> > >
> > > - PESSIMISTIC && !READ_COMMITTED transaction
> > > -- checks values across the topology (under locks),
> > > -- finds correct values according to the LWW strategy,
> > > -- records a special event in case a consistency violation is found
> > > (contains the inconsistent map <Node, <K,V>> and the latest values
> > > <K,V>),
> > > -- enlists writes with the latest value for each inconsistent key, so
> > > it will be written on tx.commit().
> > >
> > > - OPTIMISTIC || READ_COMMITTED transactions
> > > -- checks values across the topology (not under locks, so a
> > > false-positive case is possible),
> > > -- starts a PESSIMISTIC && SERIALIZABLE transaction (in a separate
> > > thread) for each possibly broken key and fixes it on commit if
> > > necessary,
> > > -- the original transaction performs get-after-fix and can be
> > > continued if the fix does not conflict with its isolation level.
> > >
> > > Future plans:
> > > - Consistency guard (a special process that periodically checks that
> > > there is no inconsistency).
> > > - MVCC support.
> > > - Atomic caches support.
> > > - Thin client support.
> > > - SQL support.
> > > - Read-with-consistency before write operations.
> > > - Single-key read-with-consistency optimization; currently the
> > > collection approach is used each time.
> > > - Skipping read-with-consistency for a key in case it was
> > > consistently read some time ago.
> > >
> > > [1]
> > > https://cwiki.apache.org/confluence/display/IGNITE/IEP-31+Consistency+check+and+fix
> > > [2] https://issues.apache.org/jira/browse/IGNITE-10663
> > > [3] https://github.com/apache/ignite/pull/5656
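For context on my question above: my guess is that LWW stands for "last write wins", i.e. for each inconsistent key the copy with the newest write version (or timestamp) is treated as the correct one. A minimal, Ignite-independent sketch of that rule follows; `VersionedValue` and its fields are my own hypothetical illustration, not types from the PR:

```java
import java.util.Comparator;
import java.util.List;

public class LwwSketch {
    // Hypothetical holder for one replica's copy of a value: the payload
    // plus the version (e.g. an update counter or timestamp) it was written at.
    static final class VersionedValue {
        final String value;
        final long version;

        VersionedValue(String value, long version) {
            this.value = value;
            this.version = version;
        }
    }

    // LWW ("last write wins"): among the copies seen on different nodes,
    // the copy with the greatest version is considered the correct value.
    static VersionedValue resolve(List<VersionedValue> copies) {
        return copies.stream()
            .max(Comparator.comparingLong(c -> c.version))
            .orElseThrow(IllegalArgumentException::new);
    }

    public static void main(String[] args) {
        VersionedValue winner = resolve(List.of(
            new VersionedValue("stale", 5L),
            new VersionedValue("latest", 9L),
            new VersionedValue("older", 7L)));
        System.out.println(winner.value); // prints "latest"
    }
}
```

Is that roughly the strategy the PR applies, or does it use Ignite's internal entry versions in some other way?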
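Also, to check that I'm reading the PESSIMISTIC && !READ_COMMITTED flow correctly (collect each node's copy, detect divergence, pick the latest value, produce the writes to enlist and the event payload), here is how I picture that step outside of Ignite. All names (`ReadRepairSketch`, `Copy`, `Repair`) are my own hypothetical illustration, not code from PR [3]:

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;

public class ReadRepairSketch {
    // One node's copy of a key: payload plus its write version.
    record Copy(String value, long version) {}

    // Result of a check: the per-node copies that disagreed (to record in the
    // violation event) and the winning copy to enlist as a write on commit.
    record Repair(Map<String, Copy> inconsistent, Copy winner) {}

    // Returns null when all nodes agree; otherwise picks the winner by LWW.
    static Repair check(Map<String, Copy> copiesByNode) {
        if (copiesByNode.values().stream().distinct().count() <= 1)
            return null; // all copies identical: nothing to fix

        Copy winner = copiesByNode.values().stream()
            .max(Comparator.comparingLong(Copy::version))
            .orElseThrow();

        return new Repair(new HashMap<>(copiesByNode), winner);
    }

    public static void main(String[] args) {
        Repair r = check(Map.of(
            "node1", new Copy("v1", 1L),
            "node2", new Copy("v2", 2L)));
        System.out.println(r.winner().value()); // prints "v2"
    }
}
```

If that matches the intent, the OPTIMISTIC || READ_COMMITTED path would differ only in that the copies are read without locks, so a non-null result may be a false positive that the separate PESSIMISTIC && SERIALIZABLE transaction re-verifies.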