Yes, we'll add this validation.
On Wed, Sep 20, 2017 at 10:09 AM, Dmitriy Setrakyan
wrote:
On Tue, Sep 19, 2017 at 11:31 PM, Semyon Boikov
wrote:
> > Can caches within the same group have different MVCC configuration?
>
> Do you think we really need to have caches with different MVCC
> configuration in the same group? For simplicity I would not allow this.
>
I agree.
> Can caches within the same group have different MVCC configuration?
I think it is possible to implement, but there are some issues:
- for MVCC we need to store the MVCC version in the hash index item (for now it
is 16 bytes); since index items have a fixed size, if we store in this index
data for cac
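To make the sizing concern above concrete, here is a rough sketch (not Ignite's actual page layout, and the field split of the 16-byte version into two longs is an assumption) of a fixed-size hash index item extended with an MVCC version, and what that does to the number of items per page:

```java
import java.nio.ByteBuffer;

// Hypothetical fixed-size hash index item: an 8-byte link to the data row
// plus a 16-byte MVCC version (modeled here as coordinator version + counter).
// Purely illustrative; this is not Ignite's real page format.
public class IndexItem {
    public static final int PLAIN_SIZE = 8;      // link only
    public static final int MVCC_SIZE = 8 + 16;  // link + 16-byte MVCC version

    public static void write(ByteBuffer buf, long link, long mvccCrdVer, long mvccCntr) {
        buf.putLong(link);
        buf.putLong(mvccCrdVer);
        buf.putLong(mvccCntr);
    }

    public static long readLink(ByteBuffer buf, int idx) {
        // Fixed item size makes lookup by index a simple multiplication.
        return buf.getLong(idx * MVCC_SIZE);
    }

    public static void main(String[] args) {
        int pageSize = 4096;
        System.out.println("plain items per page: " + pageSize / PLAIN_SIZE); // 512
        System.out.println("mvcc items per page:  " + pageSize / MVCC_SIZE);  // 170

        ByteBuffer page = ByteBuffer.allocate(pageSize);
        write(page, 42L, 1L, 100L);
        System.out.println("link of item 0: " + readLink(page, 0)); // 42
    }
}
```

The point of the fixed size is that item offsets stay computable, which is also why mixing MVCC and non-MVCC caches in one group's index is awkward: the item size would differ per cache.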
Can caches within the same group have different MVCC configuration?
On Tue, Sep 19, 2017 at 2:34 AM, Vladimir Ozerov
wrote:
Guys, I think we should additionally provide the ability to manually switch the
MVCC coordinator via an MBean, passing the order or ID of the new one. We
already have all the machinery for this.
--Yakov
What I mean is that it might not be applicable for DML by design. E.g. maybe
we will have to fall back to a per-memory-policy approach, or to
per-cache-group. As we do not know at the moment and there is no clear
demand from users, I would simply put it aside to avoid a mess in the public
API in the future.
If it is not valid for DML then we can easily detect this situation and
throw an exception, but if I do not use DML, why not make it configurable
per-cache?
On Tue, Sep 19, 2017 at 12:22 PM, Vladimir Ozerov
wrote:
I would say that per-cache configuration should be out of scope as well for
the first iteration. Because we do not know whether it will be valid for
DML.
On Tue, Sep 19, 2017 at 12:15 PM, Semyon Boikov
wrote:
Folks, thank you for feedback, I want to summarize some decisions:
1. MVCC is disabled by default. We'll add two flags to enable MVCC: a
per-cache flag, CacheConfiguration.isMvccEnabled, and a default value for all
caches, IgniteConfiguration.isMvccEnabled.
2. For the initial implementation, MVCC for ATOMIC
This could be something like "preferredMvccCoordinator".
On Tue, Sep 19, 2017 at 10:40 AM, Alexey Goncharuk <
alexey.goncha...@gmail.com> wrote:
> I agree that we need coordinator nodes, but I do not understand why can't
> we reuse some cache nodes for it? Why do we need to ask user to start up
> yet another type of node?
>
Dmitriy,
My understanding is that Semyon does not forbid using a cache node as a
coordinator. This property wil
or several caches in the same cache group to have different
MVCC configuration?
> 2. In the current MVCC architecture there should be some node in the cluster
> assigning versions for tx updates and queries (the mvcc coordinator). The mvcc
> coordinator is a crucial component and it should perform as fast as
On Mon, Sep 18, 2017 at 4:57 AM, Vladimir Ozerov
wrote:
> Alex,
>
> With putAll() on ATOMIC cache all bets are off, for sure.
>
Are we all in agreement that MVCC should only be enabled for transactional
caches then?
Semyon,
How about having a node attribute "COORDINATOR_RANK" or "COORDINATOR_ORDER"?
This attribute can be 1, 2, 3, and so on.
The node with the minimal number will become the coordinator.
If it fails, the node with the next rank/order will be elected as the new
coordinator. Makes sense?
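The rank-based election described above can be sketched as follows. The `Node` type and the election helper are stand-ins for illustration, not Ignite's discovery API: pick the alive node with the smallest order attribute, so when the current coordinator fails, the node with the next order takes over automatically.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Illustrative sketch of rank-based coordinator election.
// 'Node' is a stand-in, not Ignite's ClusterNode API.
public class CoordinatorElection {
    record Node(String id, int coordinatorOrder, boolean alive) {}

    // Elect the alive node with the minimal COORDINATOR_ORDER attribute.
    static Optional<Node> electCoordinator(List<Node> topology) {
        return topology.stream()
            .filter(Node::alive)
            .min(Comparator.comparingInt(Node::coordinatorOrder));
    }

    public static void main(String[] args) {
        List<Node> nodes = List.of(
            new Node("A", 1, false), // rank 1, but failed
            new Node("B", 2, true),
            new Node("C", 3, true));

        // Node A failed, so the node with the next order (B) is elected.
        System.out.println(electCoordinator(nodes).orElseThrow().id()); // B
    }
}
```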
On Mon, Sep 18, 2017 at 4:39 PM, Semyon Boikov wrote:
Vladimir, thanks for comments
> 2) I would also avoid this flag until we clearly understand it is needed.
> All numbers will be assigned from a single thread. For this reason even
> peak load on the coordinator should not consume too many resources. I think
> we can assign coordinators automatically i
Nikolay, thanks for comments
> How will Ignite handle an "mvcc coordinator" failure?
> What will happen if the coordinator fails in the middle of a transaction?
> Could the tx be committed or rolled back?
I think a coordinator failure will be handled in the same way as a failure of
one of the transaction's 'prima
Alex,
With putAll() on ATOMIC cache all bets are off, for sure.
On Mon, Sep 18, 2017 at 2:53 PM, Alexey Goncharuk <
alexey.goncha...@gmail.com> wrote:
Vladimir,
I doubt it will be possible to add any meaningful guarantees to ATOMIC
caches with MVCC. Consider a case when a user does a putAll, not a single
put. In this case, updates received by multiple primary nodes are not
connected in any way. Moreover, whenever a primary node fails, the put fo
Yakov,
I would say that my example is not about adding transactions to an ATOMIC
cache, but rather about adding consistent snapshots to it.
On Mon, Sep 18, 2017 at 1:59 PM, Yakov Zhdanov wrote:
Vladimir, I think we can ask the user to switch to a transactional cache to
support your example. Otherwise, it seems we are turning atomic caches into
transactional ones implicitly.
--Yakov
2017-09-18 13:49 GMT+03:00 Vladimir Ozerov :
Semen,
Consider a use case of an audit table where I log user actions over time.
Every action is a put to an ATOMIC cache. A user interacts with my application
and performs the following set of actions:
1. 08:00 MSK -> LOGIN
2. 08:10 MSK -> Update something
3. 08:20 MSK -> LOGOUT
If MVCC is there, wh
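The "consistent snapshot" idea behind this example can be illustrated with a toy versioned store (not Ignite code, and the class and method names here are invented for illustration): each put gets a monotonically increasing version, and a reader fixes a snapshot version at start, so it never observes a later action without the ones that preceded it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Toy MVCC-style audit log: readers see a consistent prefix of writes.
// Purely illustrative; this is not how Ignite implements MVCC.
public class AuditLog {
    record Entry(long version, String action) {}

    private final List<Entry> entries = new ArrayList<>();
    private final AtomicLong versions = new AtomicLong();

    public synchronized void put(String action) {
        entries.add(new Entry(versions.incrementAndGet(), action));
    }

    // A reader fixes its snapshot version once, before reading.
    public synchronized long snapshot() {
        return versions.get();
    }

    // Only entries committed before the snapshot are visible.
    public synchronized List<String> read(long snapshotVer) {
        return entries.stream()
            .filter(e -> e.version() <= snapshotVer)
            .map(Entry::action)
            .toList();
    }

    public static void main(String[] args) {
        AuditLog log = new AuditLog();
        log.put("LOGIN");
        log.put("Update something");

        long snap = log.snapshot(); // reader starts here
        log.put("LOGOUT");          // concurrent write after the snapshot

        // The reader never sees LOGOUT without the actions that preceded it.
        System.out.println(log.read(snap)); // [LOGIN, Update something]
    }
}
```

Without versioning, a concurrent reader scanning the cache could see the LOGOUT but miss the earlier update, which is exactly the anomaly the example describes.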
Guys,
I do not really understand MVCC for an atomic cache; could you please provide
some real use case?
Thank you
On Mon, Sep 18, 2017 at 1:37 PM, Yakov Zhdanov wrote:
Ouch... of course it makes sense for atomic caches. Seems I am not fully
switched on after weekend =)
Agree on other points.
--Yakov
Yakov,
MVCC for atomic caches makes sense as well - we will be able to read a
consistent data set, which is not possible now. As I explained above,
per-cache configuration might not work when we start working on the
transactional SQL design.
Moreover, it looks like overkill to me at the moment. We
Vladimir, should it be on IgniteConfiguration or on CacheConfiguration? I
think MVCC should be enabled on a per-cache basis and, moreover, it makes
sense only for tx caches.
--Yakov
Semen,
My comments:
1) I would propose to have only a global flag for now -
IgniteConfiguration.isMvccEnabled.
One key design point we should keep in mind is that MVCC data *MUST* be
persistent. We can skip it in the first iteration, as we are focused on
key-based cache updates, when typical transac
1. Agree. Let's disable MVCC by default.
2. Sam, if a user wants to have a dedicated mvcc coordinator, then we can use
the configuration you suggested. However, I expect more properties will be
needed. How about having an MvccConfiguration bean? If the topology has no
dedicated coordinators, it should pic
Hello, Semyon!
> It seems we need to introduce a special 'dedicated mvcc coordinator' node role
How will Ignite handle an "mvcc coordinator" failure?
What will happen if the coordinator fails in the middle of a transaction?
Could the tx be committed or rolled back?
Will we have some user notification if coord
Hi all,
Currently I'm working on the MVCC feature (IGNITE-3478) and need your opinion
on related configuration options.
1. MVCC will definitely bring some performance overhead, so I think it
should not be enabled by default. I'm going to add a special flag on the
cache configuration: CacheConfiguration.isMvccEnabled.
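The resolution between the proposed per-cache flag and the global default could work as sketched below. The classes here are stand-ins and the flag names follow this thread's proposal, not a shipped Ignite API: an unset per-cache flag inherits the IgniteConfiguration default, while an explicit per-cache value wins.

```java
// Stand-ins for the proposed configuration beans; the flag names come from
// this thread's proposal (CacheConfiguration.isMvccEnabled with a global
// default in IgniteConfiguration), not Ignite's final public API.
public class MvccFlagResolution {
    static class IgniteConfiguration {
        boolean mvccEnabled = false; // MVCC disabled by default
    }

    static class CacheConfiguration {
        Boolean mvccEnabled; // null means "inherit the global default"
    }

    // Per-cache flag wins when set; otherwise fall back to the global flag.
    static boolean isMvccEnabled(IgniteConfiguration ignite, CacheConfiguration cache) {
        return cache.mvccEnabled != null ? cache.mvccEnabled : ignite.mvccEnabled;
    }

    public static void main(String[] args) {
        IgniteConfiguration ignite = new IgniteConfiguration();

        CacheConfiguration inherits = new CacheConfiguration(); // no override
        CacheConfiguration explicit = new CacheConfiguration();
        explicit.mvccEnabled = true;                            // opt in per-cache

        System.out.println(isMvccEnabled(ignite, inherits)); // false
        System.out.println(isMvccEnabled(ignite, explicit)); // true
    }
}
```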