Valya,

And this is the problem! ) We already have another implementation that
minimizes partition movement - rendezvous. Fair affinity should not have to
worry about partition migration at all. This is not a feature, but a bug in
the implementation of FairAffinityFunction. Let's fix that and forget about
topology versions.
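
For reference, switching a cache to the rendezvous function is a one-line
configuration change. A minimal sketch, assuming the public Ignite 2.x API;
the cache name and partition count below are made up for the example:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class RendezvousAffinityExample {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();

            CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");

            // Rendezvous recomputes the assignment from the current topology
            // only; it keeps no previous-assignment state.
            ccfg.setAffinity(new RendezvousAffinityFunction(false, 1024));
            ccfg.setBackups(1);

            ignite.getOrCreateCache(ccfg);
        }
    }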

Wed, 16 Aug 2017 at 3:23, Valentin Kulichenko <valentin.kuliche...@gmail.com>:

> Vladimir,
>
> I would let other guys confirm, but I believe the reason is that if it
> recalculated the distribution from scratch every time, it would trigger
> too much redundant data movement during rebalancing. The fair function not
> only tries to provide the best possible distribution, but also minimizes
> this data movement, and for that it uses the previous distribution.
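
To illustrate the point, here is a toy sketch of a "sticky" affinity function
built on the public AffinityFunction SPI: it keeps a partition on its previous
primary whenever that node is still alive and only re-picks an owner otherwise.
The logic is simplified (backups are omitted) and is not Ignite's actual
FairAffinityFunction code:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.UUID;

    import org.apache.ignite.cache.affinity.AffinityFunction;
    import org.apache.ignite.cache.affinity.AffinityFunctionContext;
    import org.apache.ignite.cluster.ClusterNode;

    public class StickyAffinityFunction implements AffinityFunction {
        private static final int PARTS = 1024;

        @Override public void reset() {
            // Nothing to reset: this sketch keeps no internal state.
        }

        @Override public int partitions() {
            return PARTS;
        }

        @Override public int partition(Object key) {
            return Math.floorMod(key.hashCode(), PARTS);
        }

        @Override public List<List<ClusterNode>> assignPartitions(AffinityFunctionContext ctx) {
            List<ClusterNode> nodes = ctx.currentTopologySnapshot();
            List<List<ClusterNode>> assignment = new ArrayList<>(PARTS);

            for (int p = 0; p < PARTS; p++) {
                List<ClusterNode> prev = ctx.previousAssignment(p);

                // Keep the previous primary if it is still in the topology
                // (no data movement); otherwise pick a new owner.
                ClusterNode primary = prev != null && !prev.isEmpty() && nodes.contains(prev.get(0))
                    ? prev.get(0)
                    : nodes.get(p % nodes.size());

                assignment.add(Collections.singletonList(primary));
            }

            return assignment;
        }

        @Override public void removeNode(UUID nodeId) {
            // No per-node state to clean up in this sketch.
        }
    }

Relying on previousAssignment() like this is exactly what ties the function to
its own history, which is the behavior being questioned in this thread.
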
>
> -Val
>
> On Tue, Aug 15, 2017 at 1:12 PM, Vladimir Ozerov <voze...@gridgain.com>
> wrote:
>
> > I do not like the idea, as it would make it very hard to reason about
> > whether your SQL will fail or not. Let's look at the problem from a
> > different angle. I have had this question for years - why in the world
> > does the *fair* affinity function, whose only ultimate goal is to provide
> > an equal partition distribution, depend on its own previous state? Can we
> > re-design it in a way that it depends only on the partition count and the
> > current topology state?
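
A stateless variant along those lines would compute the assignment from
nothing but the partition count and the current topology snapshot. A minimal
sketch, written as a drop-in replacement for assignPartitions() in the sticky
sketch above (add java.util.Comparator to its imports; backups are again
omitted):

    @Override public List<List<ClusterNode>> assignPartitions(AffinityFunctionContext ctx) {
        List<ClusterNode> nodes = new ArrayList<>(ctx.currentTopologySnapshot());

        // Sort by node ID so every node derives the same assignment for the
        // same topology, with no dependence on any previous assignment.
        nodes.sort(Comparator.comparing(ClusterNode::id));

        List<List<ClusterNode>> assignment = new ArrayList<>(PARTS);

        for (int p = 0; p < PARTS; p++)
            assignment.add(Collections.singletonList(nodes.get(p % nodes.size())));

        return assignment;
    }

The trade-off raised earlier is visible here: a purely topology-driven
assignment can move many partitions when the node order shifts, which is what
the previous-state trick tries to avoid.
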
> >
> > On Thu, Aug 10, 2017 at 12:16 AM, Valentin Kulichenko <
> > valentin.kuliche...@gmail.com> wrote:
> >
> > > As far as I know, all logical caches with the same affinity function
> > > and node filter will end up in the same group. If that's the case, I
> > > like the idea. This is exactly what I was looking for.
> > >
> > > -Val
> > >
> > > On Wed, Aug 9, 2017 at 8:18 AM, Evgenii Zhuravlev <
> > > e.zhuravlev...@gmail.com>
> > > wrote:
> > >
> > > > Dmitriy,
> > > >
> > > > Yes, you're right. Moreover, it looks like a good practice to combine
> > > > caches that will be used for collocated JOINs in one group since it
> > > > reduces overall overhead.
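
For example, two caches meant for collocated JOINs can be placed into one
group via CacheConfiguration.setGroupName(), so they share a single set of
partition assignments. A minimal sketch; the cache and group names are made up:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class CacheGroupExample {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();

            // Both caches join the same group and therefore share assignments.
            CacheConfiguration<Integer, Object> personCfg = new CacheConfiguration<>("Person");
            personCfg.setGroupName("joinGroup");

            CacheConfiguration<Integer, Object> orderCfg = new CacheConfiguration<>("Order");
            orderCfg.setGroupName("joinGroup");

            ignite.getOrCreateCache(personCfg);
            ignite.getOrCreateCache(orderCfg);
        }
    }
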
> > > >
> > > > I think it's not a problem to add this restriction at the SQL JOIN
> > > > level if we decide to use this solution.
> > > >
> > > > Evgenii
> > > >
> > > >
> > > >
> > > >
> > > > 2017-08-09 17:07 GMT+03:00 Dmitriy Setrakyan <dsetrak...@apache.org>:
> > > >
> > > > > On Wed, Aug 9, 2017 at 6:28 AM, ezhuravl <e.zhuravlev...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Folks,
> > > > > >
> > > > > > I've started working on the
> > > > > > https://issues.apache.org/jira/browse/IGNITE-5836 ticket and found
> > > > > > that the recently added cache groups feature does pretty much the
> > > > > > same thing as was described in this issue. A cache group guarantees
> > > > > > that all caches within the group have the same assignments, since
> > > > > > they share a single underlying 'physical' cache.
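
That guarantee is easy to observe through the Affinity API; a small sketch,
reusing the hypothetical Person/Order caches from the group example above:

    import org.apache.ignite.cluster.ClusterNode;

    // For any given partition, both caches in the group resolve to the same node.
    ClusterNode personOwner = ignite.affinity("Person").mapPartitionToNode(0);
    ClusterNode orderOwner = ignite.affinity("Order").mapPartitionToNode(0);

    assert personOwner.equals(orderOwner);
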
> > > > > >
> > > > >
> > > > > > I think we can bring FairAffinityFunction back and add a note to
> > > > > > its Javadoc stating that all caches with the same AffinityFunction
> > > > > > and NodeFilter should be combined in a cache group to avoid
> > > > > > problems with inconsistent previous assignments.
> > > > > >
> > > > > > What do you guys think?
> > > > > >
> > > > >
> > > > > Are you suggesting that we can only reuse the same
> > > > > FairAffinityFunction across the logical caches within the same
> > > > > group? This would mean that caches from different groups cannot
> > > > > participate in JOINs or collocated compute.
> > > > >
> > > > > I think I like the idea; however, we need to make sure that we
> > > > > enforce this restriction, at least at the SQL JOIN level.
> > > > >
> > > > > Alexey G, Val, would be nice to hear your thoughts on this.
> > > > >
> > > > >
> > > > > >
> > > > > > Evgenii
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
