Re: [DISCUSSION] Add reviewer field to Apache Ignite JIRA project

2019-05-15 Thread Dmitriy Pavlov
Hi Igniters,

The Reviewer field has been added; feel free to set your JIRA username on
issues you're going to review.

If you had a private conversation with a contributor/committer and he/she
is going to review, please set his/her name.

Please do not put the username of someone who is not going to review the
ticket into the Reviewer field. This field is not intended for requesting a
review; use mentions in that case.

Sincerely,
Dmitriy Pavlov

On Wed, May 15, 2019 at 17:15, Dmitry Pavlov  wrote:

> Infra request was created:
> https://issues.apache.org/jira/browse/INFRA-18378
>
> On 2019/02/13 12:38:06, Dmitriy Pavlov  wrote:
> > Igniters, is it still reasonable to add a reviewer field now?
> >
> > AFAIK, the number of PA tickets (our review debt) is lower than it was
> > when the topic was started, so this proposal may no longer be relevant.
> >
> > If you agree, please consider picking up this ticket and contacting
> > INFRA to add the field.
> > If not, let's close this discussion as not needed.
> >
> > Sincerely,
> > Dmitriy Pavlov
> >
> > On Thu, Sep 27, 2018 at 18:39, Dmitriy Pavlov  wrote:
> >
> > > Hi Anton,
> > >
> > > Thank you for bringing this significant concern here.
> > >
> > > I'm going to use this field exactly like the assignee field is used:
> > > we don't set the assignee unless someone agrees to be the developer
> > > for that feature.
> > >
> > > Otherwise, it is better to keep an issue unassigned. The same applies
> > > to the reviewer field.
> > >
> > > So a reviewer is someone who is ready and going to do the review.
> > > While we are not sure who will do a review, the mention process
> > > continues to work.
> > >
> > > Sincerely,
> > > Dmitriy Pavlov
> > >
> > >
> > >
> https://lists.apache.org/thread.html/c6013b99940de32aae831a0b76e8fd53febe5040e9e0d67abb4f62a5@%3Cdev.community.apache.org%3E
> > >
> > >
> > >
> > > On Thu, Sep 27, 2018 at 18:23, Anton Vinogradov  wrote:
> > >
> > >> Currently, you may ask for a review by mentioning someone and asking
> > >> them to review.
> > >> And this approach looks good to me.
> > >>
> > >> If we introduce a reviewer field, who will set the reviewer?
> > >> It's NOT ok to just set somebody as a reviewer!
> > >> You should ask somebody to be a reviewer first.
> > >> And if they agree, they will just do the review. There is no reason
> > >> to set a useless field in that case.
> > >>
> > >> On Tue, Sep 25, 2018 at 19:39, Dmitriy Setrakyan <
> dsetrak...@apache.org>:
> > >>
> > >> > I like the idea.
> > >> >
> > >> > On Tue, Sep 25, 2018 at 8:25 AM Dmitriy Pavlov <
> dpavlov@gmail.com>
> > >> > wrote:
> > >> >
> > >> > > Hi Ignite Enthusiasts,
> > >> > >
> > >> > > During the planning of release 2.7, I've faced the situation
> > >> > > where it is completely unclear who is going to review a ticket.
> > >> > >
> > >> > > Usually, we do not reassign tickets to a reviewer, but info about
> > >> > > the planned reviewer can be very useful for reviewers who are
> > >> > > selecting a contribution to pick up for review.
> > >> > >
> > >> > > Please share your opinion on the idea of adding a reviewer field
> > >> > > (type: user) in addition to the assignee field.
> > >> > >
> > >> > > If we agree, I will ask the Infra team on Friday, 28.09.
> > >> > >
> > >> > > Sincerely,
> > >> > > Dmitriy Pavlov
> > >> > >
> > >> >
> > >>
> > >
> >
>



Re: Consistency check and fix (review request)

2019-05-15 Thread Anton Vinogradov
Ivan,

1) Currently, we have the idle_verify [1] feature, which allows us to check
that the consistency guarantee is still respected.
But idle_verify has a big constraint: the cluster should be idle (no load).
This feature will do almost the same, but allows you to have any load.

2) Why do we need this? Because of bugs.
For example, we currently have issue [2], a consistency violation on node
failure under load.
The issue will definitely be fixed, but is there any guarantee we won't
face another one?
This feature is a simple way to check (and fix) consistency in case you
want such an additional check.
Just an additional failover. Trust, but check and recheck :)

3) Use cases?
There are two approaches:
- you can continuously rescan (in a cyclic way) all the entries you have,
using this feature, to make sure they are not broken (background case)
- you can use this feature on every request, or on every N-th request
(foreground case)

[1]
https://apacheignite-tools.readme.io/docs/control-script#section-verification-of-partition-checksums
[2] https://issues.apache.org/jira/browse/IGNITE-10078
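The foreground case can be sketched in plain Java. This is only an
illustration of the idea: ConsistencyCheckingCache and the map-backed
replicas are invented stand-ins, not the actual Ignite implementation from
the PR.

```java
import java.util.*;

// Illustrative sketch only: ConsistencyCheckingCache and the replica maps
// are hypothetical, not Ignite API. It shows the "foreground" idea from the
// mail: verify replicas on every N-th read and repair them on mismatch.
public class ConsistencyCheckingCache {
    private final Map<String, String> primary;
    private final List<Map<String, String>> backups;
    private final int everyNth;          // check replicas on each N-th get
    private long reads;                  // read counter
    public int repairs;                  // how many mismatches were fixed

    public ConsistencyCheckingCache(Map<String, String> primary,
                                    List<Map<String, String>> backups,
                                    int everyNth) {
        this.primary = primary;
        this.backups = backups;
        this.everyNth = everyNth;
    }

    public String get(String key) {
        String val = primary.get(key);
        if (++reads % everyNth == 0) {
            // Foreground check: compare every backup copy with the primary
            // and overwrite diverged backups (primary wins in this toy model).
            for (Map<String, String> backup : backups) {
                if (!Objects.equals(backup.get(key), val)) {
                    backup.put(key, val);
                    repairs++;
                }
            }
        }
        return val;
    }
}
```

With everyNth = 1 this is the "check each request" mode; larger values give
the cheaper "every N-th request" mode mentioned above.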

On Thu, May 9, 2019 at 9:09 AM Ivan Pavlukhina  wrote:

> Hi Anton,
>
> Meanwhile, can we extend the feature description from a user point of
> view? It would be good to provide some use cases where it can be used.
>
> The thing that is not yet understood by me is conflict resolution. E.g.
> in systems inspired by Dynamo (sorry for no references, writing from a
> phone), inconsistency is expected system behavior, and users are aware of
> it and choose a reconciliation strategy accordingly. But in our case,
> inconsistency is an exceptional case. And it is hard for me to imagine a
> case where we can resolve conflicts automatically while expecting no
> conflicts. Can you help me with it?
>
> Sent from my iPhone
>
> > On 25 Apr 2019, at 16:25, Anton Vinogradov  wrote:
> >
> > Folks,
> >
> > Just an update.
> > Following all your tips, I decided to refactor the API, logic, and
> > approach (mostly everything :)),
> > so refactoring is currently in progress and you may see an inconsistent
> > PR state.
> >
> > Thanks to everyone involved for your tips, reviews, etc.
> > I'll provide a proper presentation once the refactoring is finished.
> >
> >> On Tue, Apr 16, 2019 at 2:20 PM Anton Vinogradov  wrote:
> >>
> >> Nikolay, that was the first approach
> >>
> >>> I think we should allow the administrator to enable/disable the
> >>> consistency check.
> >> In that case, we have to introduce a cluster-wide change-strategy
> >> operation, since every client node should be aware of the change.
> >> Also, we have to specify the list of caches, and for each one decide
> >> whether we should check every request or only every 5th, and so on.
> >> The procedure and configuration become overcomplicated in this case.
> >>
> >> My idea is that a specific service will be able to use a special proxy
> >> according to its own strategy
> >> (e.g. when the administrator is inside the building and the boss is
> >> sleeping, all operations on "cache[a,b,c]ed*" should check consistency).
> >> All service clients will have the same guarantees in that case.
> >>
> >> So, in other words, consistency should be guaranteed by the service,
> >> not by Ignite.
> >> The service should guarantee consistency not only by using the new
> >> proxy but also, for example, by using the correct isolation for txs.
> >> It's not a good idea to specify the isolation mode at the Ignite level;
> >> the same applies to get-with-consistency-check.
> >>
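As a rough sketch of that service-level idea (all names here are invented
for illustration, not Ignite API): the service picks which cache view an
operation goes through based on its own strategy predicate.

```java
import java.util.function.BooleanSupplier;

// Illustrative only: CacheView, choose and the strategy predicate are
// invented names, not Ignite API. The point from the mail: the *service*
// decides, per its own strategy, whether an operation uses a
// consistency-checking proxy or the plain cache.
public class ServiceSideStrategy {
    public interface CacheView { String get(String key); }

    public static CacheView choose(CacheView plain,
                                   CacheView checked,
                                   BooleanSupplier strategy) {
        // e.g. strategy = quiet hours AND cache name matches "cache[a,b,c]ed*"
        return strategy.getAsBoolean() ? checked : plain;
    }
}
```

All clients of the service get the same guarantees, because the choice is
made in one place, inside the service.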
> >> On Tue, Apr 16, 2019 at 12:56 PM Nikolay Izhikov 
> >> wrote:
> >>
> >>> Hello, Anton.
> >>>
> >>>> Customer should be able to change strategy on the fly according to
> >>>> time periods or load.
> >>>
> >>> I think we should allow the administrator to enable/disable the
> >>> consistency check.
> >>> This option shouldn't be related to application code, because the
> >>> "consistency check" is a kind of maintenance procedure.
> >>>
> >>> What do you think?
> >>>
> >>> On Tue, 16/04/2019 at 12:47 +0300, Anton Vinogradov wrote:
>  Andrey, thanks for the tips
> 
> >> You can perform consistency check using idle verify utility.
> 
>  Could you please point me to the utility's page?
>  According to its name, does it require stopping the cluster to perform
>  the check?
>  That's impossible in real production, where downtime should be less
>  than a few minutes per year.
>  So, the only case I see is to use an online check during periods of
>  moderate activity.
> 
> >> Recovery tool is good idea
> 
>  This tool is a part of my IEP.
>  But a recovery tool (process)
>  - will allow you to check entries in memory only (otherwise, you will
>  warm up the cluster incorrectly), and that's a problem when your
>  persisted/in-memory ratio is > 10:1
>  - will cause a latency drop for some (e.g. 90+ percentile) requests,
>  which is not acceptable in real production, where we have a strict SLA
>  - will not guarantee that each operation will use consistent data,
>  

Re: AI 3.0: writeSynchronizationMode re-thinking

2019-05-15 Thread Anton Vinogradov
Sergey,

Sorry for the late response.

I thought twice or more :) and discussed the idea with Alexey G.
It seems we have no reason to check only some of the backups, since AI
should guarantee that everything it has is consistent.
But it is useful to have a feature to check that this guarantee is not a
bluff.

BTW, I'm still ok with your proposal.

On Tue, Apr 30, 2019 at 9:24 PM Sergey Kozlov  wrote:

> Anton
>
> I'm OK with your proposal, but IMO it should be provided as an IEP?
>
> On Mon, Apr 29, 2019 at 4:05 PM Anton Vinogradov  wrote:
>
> > Sergey,
> >
> > I'd like to continue the discussion since it is closely linked to the
> > problem I'm currently working on.
> >
> > 1) writeSynchronizationMode should not be a part of the cache
> > configuration, agreed.
> > It should be up to the user to decide how strong the "update guarantee"
> > should be.
> > So, I propose to have a special cache proxy, .withBlaBla() (in 3.x).
> >
> > 2) A primary failure on !FULL_SYNC is not the only problem that leads
> > to an inconsistent state.
> > Bugs and incorrect recovery also cause the same problem.
> >
> > Currently, we have a solution [1] to check that the cluster is
> > consistent, but it has poor resolution (it will only tell you which
> > partitions are broken).
> > So, to find the broken entries you need a special API, which will check
> > all copies and let you know what went wrong.
> >
> > 3) Since we mostly agree that a write should update some backups
> > synchronously, how about having similar logic for reads?
> >
> > So, I propose a special proxy, .withQuorumRead(backupsCnt), which will
> > check an explicit number of backups on each read and return you the
> > latest values.
> > This proxy is already implemented [2] for all copies, but I'm going to
> > extend it with an explicit number of backups.
> >
> > Thoughts?
> >
> > 3.1) Backups can be checked in two ways:
> > - request data from all backups, but wait for an explicit number
> > (solves the slow-backup issue, but produces more traffic)
> > - request data from an explicit number of backups (less traffic, but
> > can be as slow as checking all copies)
> > Which strategy is better? Should it be configurable?
> >
> > [1]
> >
> >
> https://apacheignite-tools.readme.io/docs/control-script#section-verification-of-partition-checksums
> > [2] https://issues.apache.org/jira/browse/IGNITE-10663
> >
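The first strategy from 3.1 (ask all copies, wait only for an explicit
number of replies, then take the freshest version) might be sketched as
below. Versioned, readQuorum, and the Callable replicas are illustrative
stand-ins, not the IGNITE-10663 API.

```java
import java.util.*;
import java.util.concurrent.*;

// Hypothetical sketch of the "request data from all backups, but wait for
// an explicit number" strategy. Not Ignite API; plain Java for illustration.
public class QuorumRead {
    public record Versioned(String value, long version) {}

    // Ask every replica asynchronously, wait for the first `quorum` replies,
    // and return the reply with the highest version (the freshest copy).
    public static Versioned readQuorum(List<Callable<Versioned>> replicas,
                                       int quorum, ExecutorService pool) {
        CompletionService<Versioned> cs = new ExecutorCompletionService<>(pool);
        for (Callable<Versioned> r : replicas)
            cs.submit(r);
        Versioned latest = null;
        for (int i = 0; i < quorum; i++) {   // extra (slow) replies are ignored
            try {
                Versioned v = cs.take().get();
                if (latest == null || v.version() > latest.version())
                    latest = v;
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException(e); // sketch: fail fast on errors
            }
        }
        return latest;
    }
}
```

With quorum smaller than the replica count, a slow backup no longer blocks
the read, at the cost of sending a request to every copy, which is exactly
the traffic/latency trade-off raised in 3.1.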
> > On Thu, Apr 25, 2019 at 7:04 PM Sergey Kozlov 
> > wrote:
> >
> > > There's another point to improve:
> > > if *syncPartitions=N* is configurable at runtime, it will allow
> > > managing the consistency-performance balance at runtime, e.g.
> > > switching to full async for preloading and then going back to full
> > > sync for regular operations.
> > >
> > >
> > > On Thu, Apr 25, 2019 at 6:48 PM Sergey Kozlov 
> > > wrote:
> > >
> > > > Vyacheslav,
> > > >
> > > > You're right to refer to the MongoDB doc. In general, the idea is
> > > > very similar. Many vendors use such an approach [1].
> > > >
> > > > [1]
> > > >
> > >
> >
> https://dev.mysql.com/doc/refman/8.0/en/replication-options-master.html#sysvar_rpl_semi_sync_master_wait_for_slave_count
> > > >
> > > > On Thu, Apr 25, 2019 at 6:40 PM Vyacheslav Daradur <
> > daradu...@gmail.com>
> > > > wrote:
> > > >
> > > >> Hi, Sergey,
> > > >>
> > > >> Makes sense to me in case of performance issues, but it may lead
> > > >> to losing data.
> > > >>
> > > >> >> *by the new option *syncPartitions=N* (not the best name, just
> > > >> >> for referring)
> > > >>
> > > >> Seems similar to "Write Concern" [1] in MongoDB. It is used in the
> > > >> same way as you described.
> > > >>
> > > >> On the other hand, if you have such issues, they should be
> > > >> investigated first: find out why they cause performance drops
> > > >> (network issues, etc.).
> > > >>
> > > >> [1] https://docs.mongodb.com/manual/reference/write-concern/
> > > >>
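A toy model of the proposed semantics, with maps standing in for partition
copies (syncPartitions is only the name from this thread, not a real Ignite
option): the write returns once the primary plus N backups have applied it,
while the remaining backups keep applying asynchronously.

```java
import java.util.*;
import java.util.concurrent.*;

// Toy model only, not Ignite code: semi-synchronous replication in the
// spirit of the proposed syncPartitions=N / MongoDB write concern / MySQL
// semi-sync replication discussed above.
public class SemiSyncWrite {
    public static void put(Map<String, String> primary,
                           List<Map<String, String>> backups,
                           int syncBackups,
                           String key, String value,
                           ExecutorService pool) {
        primary.put(key, value);                       // primary always sync
        CountDownLatch acks = new CountDownLatch(syncBackups);
        for (Map<String, String> backup : backups) {
            pool.submit(() -> {
                synchronized (backup) { backup.put(key, value); }
                acks.countDown();                      // extra acks are no-ops
            });
        }
        try {
            acks.await();   // block until N backups confirmed; rest stay async
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Setting syncBackups to the backup count gives FULL_SYNC-like behavior, 0
gives PRIMARY_SYNC-like behavior, and values in between are the proposed
middle ground.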
> > > >> On Thu, Apr 25, 2019 at 6:24 PM Sergey Kozlov  >
> > > >> wrote:
> > > >> >
> > > >> > Ilya
> > > >> >
> > > >> > See comments inline.
> > > >> > On Thu, Apr 25, 2019 at 5:11 PM Ilya Kasnacheev <
> > > >> ilya.kasnach...@gmail.com>
> > > >> > wrote:
> > > >> >
> > > >> > > Hello!
> > > >> > >
> > > >> > > When you have 2 backups and N = 1, how will conflicts be
> > > >> > > resolved?
> > > >> > >
> > > >> >
> > > >> > > Imagine that you had N = 1, and the primary node failed
> > > >> > > immediately after an operation. Now you have one backup that
> > > >> > > was updated synchronously and one that was not. Will they stay
> > > >> > > unsynced, or is there any mechanism for re-syncing?
> > > >> > >
> > > >> >
> > > >> > The same way Ignite processes failures for PRIMARY_SYNC.
> > > >> >
> > > >> >
> > > >> > >
> > > >> > > Why would one want to "update 1 primary and 1 backup
> > > >> > > synchronously, and update the rest of the backup partitions
> > > >> > > asynchronously"? What's the use case?
> > > >> > >
> > > >> >
> > > >> > The case to have more backups but do not 

Wrong author name

2019-05-15 Thread Petr Ivanov


Hi, 张俊锋


Could you rename yourself using Latin characters, please?
TeamCity does not recognise Chinese in names and commit comments.
Your commit [1] spoiled the build queue.


Thanks!


[1] 
https://github.com/apache/ignite/pull/4791/commits/26891b4bab1e439580e20a6771a2106c6929d363