Vyacheslav,

You're right to refer to the MongoDB docs. In general the idea is very similar; many vendors use such an approach [1].
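To make the comparison concrete, here is a minimal sketch of how a cache is
configured today and how the proposed option might look. setSyncPartitions()
is hypothetical, it does not exist in the current API and is only here to
illustrate the idea:

    import org.apache.ignite.cache.CacheWriteSynchronizationMode;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class SyncPartitionsSketch {
        public static void main(String[] args) {
            // Today: 2 backups, every replica is updated synchronously.
            CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("accounts");
            cfg.setBackups(2);
            cfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);

            // Proposed (hypothetical): acknowledge the write once the primary and
            // one backup are updated, and let the remaining backup catch up
            // asynchronously. Roughly what MongoDB's write concern w=2 or MySQL's
            // rpl_semi_sync_master_wait_for_slave_count=1 do on the replication side.
            //cfg.setSyncPartitions(2);
        }
    }

In this sketch N counts the primary itself, so N = backups + 1 would be
equivalent to FULL_SYNC, exactly as in the proposal quoted below.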
[1] https://dev.mysql.com/doc/refman/8.0/en/replication-options-master.html#sysvar_rpl_semi_sync_master_wait_for_slave_count

On Thu, Apr 25, 2019 at 6:40 PM Vyacheslav Daradur <daradu...@gmail.com> wrote:

> Hi, Sergey,
>
> Makes sense to me in case of performance issues, but it may lead to losing
> data.
>
> >> by the new option *syncPartitions=N* (not the best name, just for reference)
>
> Seems similar to "Write Concern" [1] in MongoDB. It is used in the same
> way as you described.
>
> On the other hand, if you have such issues they should be investigated
> first: why do they cause performance drops (network issues, etc.)?
>
> [1] https://docs.mongodb.com/manual/reference/write-concern/
>
> On Thu, Apr 25, 2019 at 6:24 PM Sergey Kozlov <skoz...@gridgain.com> wrote:
> >
> > Ilya,
> >
> > See comments inline.
> >
> > On Thu, Apr 25, 2019 at 5:11 PM Ilya Kasnacheev <ilya.kasnach...@gmail.com> wrote:
> >
> > > Hello!
> > >
> > > When you have 2 backups and N = 1, how will conflicts be resolved?
> > >
> > > Imagine that you had N = 1 and the primary node failed immediately after
> > > an operation. Now you have one backup that was updated synchronously and
> > > one that was not. Will they stay unsynced, or is there any mechanism for
> > > re-syncing?
> >
> > The same way Ignite processes failures for PRIMARY_SYNC.
> >
> > > Why would one want to "update for 1 primary and 1 backup synchronously,
> > > update the rest of backup partitions asynchronously"? What's the use case?
> >
> > The case is to have more backups but not pay the performance penalty for
> > that :)
> > For distributed systems a single backup looks risky, but more backups
> > directly impact performance.
> > Another point is to separate strictly consistent applications (e.g. banking
> > apps) from other applications (fraud detection, analytics, reports and so on).
> > In that case you can configure partition distribution with a custom affinity
> > and have the following:
> > - a first set of nodes for operations that are critical from the consistency
> >   standpoint
> > - a second set of nodes holding only async backup partitions for the other
> >   operations (reports, analytics)
> >
> > > Regards,
> > > --
> > > Ilya Kasnacheev
> > >
> > > On Thu, Apr 25, 2019 at 16:55, Sergey Kozlov <skoz...@gridgain.com> wrote:
> > >
> > > > Igniters,
> > > >
> > > > I'm working with a wide range of cache configurations and found (from my
> > > > standpoint) an interesting point for discussion.
> > > >
> > > > Now we have the following *writeSynchronizationMode* options:
> > > >
> > > >    1. *FULL_ASYNC*
> > > >       - primary partition updated asynchronously
> > > >       - backup partitions updated asynchronously
> > > >    2. *PRIMARY_SYNC*
> > > >       - primary partition updated synchronously
> > > >       - backup partitions updated asynchronously
> > > >    3. *FULL_SYNC*
> > > >       - primary partition updated synchronously
> > > >       - backup partitions updated synchronously
> > > >
> > > > The approach above covers everything if you have 0 or 1 backups.
> > > > But for 2 or more backups we can't achieve the following case (something
> > > > between *PRIMARY_SYNC* and *FULL_SYNC*):
> > > > - update 1 primary and 1 backup synchronously
> > > > - update the rest of the backup partitions asynchronously
> > > >
> > > > The idea is to merge all current modes into a single one and replace
> > > > *writeSynchronizationMode* with a new option *syncPartitions=N* (not the
> > > > best name, just for reference) that covers the approach:
> > > >
> > > >    - N = 0 means *FULL_ASYNC*
> > > >    - N = (backups+1) means *FULL_SYNC*
> > > >    - 0 < N < (backups+1) means either *PRIMARY_SYNC* (N=1) or the new
> > > >      mode described above
> > > >
> > > > IMO it will allow more flexible and consistent configurations.
> > > >
> > > > --
> > > > Sergey Kozlov
> > > > GridGain Systems
> > > > www.gridgain.com
> >
> > --
> > Sergey Kozlov
> > GridGain Systems
> > www.gridgain.com
>
> --
> Best Regards, Vyacheslav D.

--
Sergey Kozlov
GridGain Systems
www.gridgain.com
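For readers who have not used MongoDB, the "Write Concern" analogy mentioned in
the thread above can be sketched with the MongoDB Java driver; the connection
string, database and collection names below are invented for illustration:

    import com.mongodb.WriteConcern;
    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;

    public class WriteConcernSketch {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                MongoCollection<Document> coll = client
                    .getDatabase("bank")
                    .getCollection("accounts")
                    // w=2: the write is acknowledged once the primary and one
                    // secondary have applied it; the remaining secondaries catch
                    // up asynchronously. This is the shape syncPartitions=2 would
                    // have for an Ignite cache with two or more backups.
                    .withWriteConcern(new WriteConcern(2));

                coll.insertOne(new Document("_id", 1).append("balance", 100));
            }
        }
    }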