Ivan,

Probably you are right. The main use of an in-memory cache with evictions enabled is as a caching layer for third-party stores. And the fact that page eviction is by nature not a transactional process forces users to use workarounds (i.e. explicit locking) to prevent eviction of hot data. This workaround is quite a heavyweight solution as well: using explicit locking (select for update, etc.) for each read request may lead to an increased number of aborted transactions, because an exclusive lock has to be obtained for each key we read. So preserving repeatable read semantics for MVCC caches with evictions will result in a performance drop, which makes this application of MVCC caches useless.
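Just to illustrate how heavyweight that workaround is, here is a rough sketch of what every read would have to turn into (Java API; the cache, table and class names are made up for the example):

import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.transactions.Transaction;
import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;

class LockingReadWorkaround {
    // Every plain SELECT becomes a locking SELECT ... FOR UPDATE, so each
    // read takes an exclusive row lock just to pin the row against eviction.
    static List<List<?>> readLocked(Ignite ignite, IgniteCache<Integer, String> cache, int id) {
        try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
            List<List<?>> rows = cache.query(
                new SqlFieldsQuery("SELECT val FROM MyValue WHERE _key = ? FOR UPDATE").setArgs(id)
            ).getAll();

            tx.commit();
            return rows;
        }
    }
}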

Perhaps we should prohibit creation of MVCC caches in regions with a configured eviction policy, as you proposed?
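For reference, the combination I mean is something like the following (a minimal sketch, region and cache names are arbitrary): an MVCC cache placed into a data region with page eviction configured. Under the proposal, starting such a cache would fail with an exception instead:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.*;

public class MvccCacheInEvictableRegion {
    public static void main(String[] args) {
        // Data region with page eviction enabled (name and size are arbitrary).
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("evictable")
            .setMaxSize(256L * 1024 * 1024)
            .setPageEvictionMode(DataPageEvictionMode.RANDOM_LRU);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(
                new DataStorageConfiguration().setDataRegionConfigurations(region));

        try (Ignite ignite = Ignition.start(cfg)) {
            // MVCC (TRANSACTIONAL_SNAPSHOT) cache assigned to the evictable
            // region - the combination that would be rejected at cache start.
            CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<Integer, String>("mvccCache")
                .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT)
                .setDataRegionName("evictable");

            ignite.getOrCreateCache(ccfg);
        }
    }
}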

Igor, Vladimir, what do you think?


--
Kind Regards
Roman Kondakov

On 17.12.2018 8:53, Pavlukhin Ivan wrote:
Roman,

Thank you for pointing out the usage as an in-memory cache. I will try to
describe how I see the use case.

First of all, our MVCC caches provide transactions, and a user will
choose MVCC if his workflow is transactional. If the use case is a
caching layer, then some backing storage is assumed. But we do not yet
have well-integrated support for 3rd party persistence [1], and I think
it is better to cover the whole flow comprehensively.

Of course, there might be other valid use cases which I am not aware
of. Please point them out if you have one in mind.

[1] https://apacheignite.readme.io/docs/3rd-party-store

2018-12-14 18:40 GMT+03:00, Seliverstov Igor <gvvinbl...@gmail.com>:
Roman,

I would prefer first option.

The fact that a user chooses MVCC says he needs stricter guarantees which
cannot be met in other modes.
I would roll back both txs in case we cannot provide such guarantees.

Regards,
Igor

Fri, Dec 14, 2018 at 15:36, Roman Kondakov <kondako...@mail.ru.invalid>:

Vladimir,

I was thinking about your proposal not to evict locked and recent (the
transaction that created the record is still active) entries from the
cache. Let's imagine the following situation: memory is almost full and we
have two transactions:

1. txA: "SELECT * FOR UPDATE"

2. txB: "INSERT ...many keys here..."

In this case txA locks all entries in the cache, and therefore we cannot
evict any of them. If txB then tries to add a lot of data, it leads us to
the OOM situation, which is exactly what the user is trying to avoid by
using cache evictions.
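A rough sketch of the two transactions (Java API; all names are made up for the example, and txA's transaction is assumed to stay open while txB runs):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.transactions.Transaction;
import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;

class EvictionOomScenario {
    // txA: a locking scan write-locks every row it reads, so none of them
    // can be evicted while the transaction stays open.
    static Transaction txA(Ignite ignite, IgniteCache<Integer, String> cache) {
        Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ);
        cache.query(new SqlFieldsQuery("SELECT * FROM MyValue FOR UPDATE")).getAll();
        return tx; // intentionally not committed yet - the locks are held
    }

    // txB: a concurrent bulk insert; with nothing left to evict, the data
    // region eventually runs out of pages - the OOM the user configured
    // eviction to avoid.
    static void txB(IgniteCache<Integer, String> cache, int n) {
        for (int i = 0; i < n; i++)
            cache.query(new SqlFieldsQuery(
                "INSERT INTO MyValue (_key, val) VALUES (?, ?)").setArgs(i, "value-" + i)).getAll();
    }
}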

I see two ways to deal with this issue:

1. Allow OOM in MVCC caches with configured evictions and warn the user
about it in the docs.

2. Give up on the repeatable read guarantees in case of evictions for
MVCC caches and warn users about it in the documentation.

The second variant looks better to me because a user may not expect OOM
when he has configured an eviction policy for the cache.

What do you think?


--
Kind Regards
Roman Kondakov

On 13.12.2018 22:33, Vladimir Ozerov wrote:
It's hard to believe that entries are not locked on backups, because we
write data right away. Even if it is so, it should be very easy to fix -
just do not evict an entry if it was created or deleted by a currently
active transaction.

On Thu, Dec 13, 2018 at 10:28 PM Roman Kondakov
<kondako...@mail.ru.invalid>
wrote:

Vladimir,

We do not lock entries on backups when MVCC is enabled, and therefore we
do not prevent entry eviction on a backup by locking. So, your first
scenario with the primary stop is still relevant.


--
Kind Regards
Roman Kondakov

On 13.12.2018 22:14, Vladimir Ozerov wrote:
No, I mean that we should think about what kind of guarantees are
possible. My proposal was to prevent eviction of locked entries. This way
we can say to users: "if you want true REPEATABLE_READ when evictions are
enabled, then make sure to lock entries on every access". This effectively
means that all SELECTs should be replaced with "SELECT FOR UPDATE".

On Thu, Dec 13, 2018 at 10:09 PM Roman Kondakov
<kondako...@mail.ru.invalid>
wrote:

Vladimir,

Correct me please if I misunderstood your thought. So, if eviction is not
about consistency at all, we may evict keys in any way, because broken
repeatable read semantics is not the biggest problem here. But we should
add some notes about it to the user documentation. Right?


--
Kind Regards
Roman Kondakov

On 13.12.2018 17:45, Vladimir Ozerov wrote:
Roman,

I would start with the fact that eviction can never be consistent unless
it utilizes an atomic broadcast protocol, which is not the case for
Ignite. In Ignite, entries on each node are evicted independently.

So you may easily get into a situation like this:
1) Start a cache with 1 backup and FULL_SYNC mode
2) Put a key to the primary node
3) Stop the primary
4) Try reading from the new primary and get null because the key was
evicted concurrently

Or:
1) Start a transaction in PESSIMISTIC/READ_COMMITTED mode
2) Read a key, get a value
3) Read the same key again, get null

So in reality the choice is not between consistent and inconsistent
behavior, but rather about the degree of inconsistency. Any solution is
possible as long as we can explain it to the user. E.g. "do not evict a
key if it is write-locked".
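For the second scenario, a rough sketch of what the anomaly looks like from user code (a plain transactional cache in an eviction-enabled region; names are illustrative, and the second get() returns null only if the entry happens to be evicted between the two reads):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED;

class EvictionReadAnomaly {
    static void readTwice(Ignite ignite, IgniteCache<Integer, String> cache, int key) {
        try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, READ_COMMITTED)) {
            String first = cache.get(key);   // returns the value
            String second = cache.get(key);  // may return null if the entry
                                             // was evicted between the reads
            tx.commit();
        }
    }
}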


On Thu, Dec 13, 2018 at 5:19 PM Vladimir Ozerov <
voze...@gridgain.com>
wrote:

Andrey,

We will not be able to cache the whole data set locally, as it
potentially
lead to OOME. We will have this only as an option and only for
non-SQL
updates. Thus, similar semantics is not possible.

On Thu, Dec 13, 2018 at 4:56 PM Andrey Mashenkov <
andrey.mashen...@gmail.com> wrote:

Roman,

We have a ticket to improve repeatable_read mode [1] via caching entries
locally. This should make the MVCC transaction repeatable_read semantics
similar to non-MVCC txs and allow us to implement eviction in a correct
way.

Another way is to introduce MVCC shared (read) entry locks and evict an
entry only if no one holds any lock on it, but this looks as tricky and
error-prone as your first one, as it may lead to unexpected eviction
policy behavior, e.g. some versions can be visible while others are not
(evicted).

[1] https://issues.apache.org/jira/browse/IGNITE-7371

On Thu, Dec 13, 2018 at 4:34 PM Ilya Kasnacheev <
ilya.kasnach...@gmail.com>
wrote:

Hello!

Is it possible to 'touch' entries read by MVCC transactions to ensure
that they are considered recent and therefore are almost never targeted
by eviction?

This is 1) with benefits.

Regards,
--
Ilya Kasnacheev


Thu, Dec 13, 2018 at 16:22, Roman Kondakov <kondako...@mail.ru.invalid>:

Hi igniters,

I need your advice on the following issue. When an in-memory cache
reaches its memory limit, some data may be purged to avoid an OOM error.
This process is described in [1]. For MVCC caches this eviction may break
repeatable read semantics. For example, if a transaction reads a key
before eviction, this key is visible to it. But if the key is evicted
some time later, it becomes invisible to everyone, including our
transaction, which means broken repeatable read semantics.

Now we see the following solutions to this problem:

1. Ignore the broken repeatable read semantics. If a cache is set to
allow data eviction, it may lose its data. This means that there is no
valuable information stored in the cache, and occasional repeatable read
violations can be tolerated.

2. Prohibit eviction for MVCC caches altogether. For example, stop
writing to caches and throw an appropriate exception when there is no
free space in page memory. Before this exception Ignite should do its
best to avoid the situation, for example, evict all non-MVCC caches and
run a full vacuum to free as much space as possible.

The first approach is bad because it leads to a cache consistency
violation. The second approach is bad because its behavior may be
unexpected to the user: he has set an eviction policy for the cache, but
instead of evicting, Ignite tries to avoid eviction as much as possible.

IMO the first approach looks better - it is much simpler to implement and
meets user expectations in all points except possible repeatable read
violations.

What do you think?


[1] https://apacheignite.readme.io/docs/evictions

--
Kind Regards
Roman Kondakov


--
Best regards,
Andrey V. Mashenkov

