Re: Best Effort Affinity for thin clients

2019-10-16 Thread Pavel Tupitsyn

Re: Best Effort Affinity for thin clients

2019-10-16 Thread Alex Plehanov
Hello guys,

I've implemented affinity awareness support for the java thin client [1]. Only
the client side is affected by the patch. Can anyone review the change?

[1]: https://issues.apache.org/jira/browse/IGNITE-11898
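For readers joining the thread here, the core idea of affinity awareness can be sketched roughly as follows: the client computes a key's partition locally and sends the request straight to the node that owns that partition, instead of hopping through an arbitrary server. The hash and the partition-to-node map below are simplified placeholders, not Ignite's actual affinity function or binary-object hashing:

```java
import java.util.Map;

// Illustrative sketch only: real Ignite derives the partition from the key's
// binary representation and the rendezvous affinity function.
public class AffinityRoutingSketch {
    // Map a key to a partition; placeholder for Ignite's real hashing.
    static int partition(Object key, int partitions) {
        return Math.floorMod(key.hashCode(), partitions);
    }

    // Look up the owning node in a partition map, as a client would after
    // receiving a Cache Partitions Response.
    static String nodeFor(Object key, Map<Integer, String> partToNode, int partitions) {
        return partToNode.get(partition(key, partitions));
    }

    public static void main(String[] args) {
        Map<Integer, String> owners = Map.of(0, "nodeA", 1, "nodeB", 2, "nodeC", 3, "nodeA");
        System.out.println(nodeFor(42, owners, 4)); // 42 -> partition 2 -> nodeC
    }
}
```

With such a map in hand, a single-key operation needs no extra network hop when the owning node is already connected.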




Re: Best Effort Affinity for thin clients

2019-03-13 Thread Pavel Tupitsyn
The default value for a boolean is false, and I thought we'd have the feature
enabled by default.
But I agree with you. Let's disable it by default and name the config property
EnableAffinityAwareness.
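The agreed shape — feature off by default (the boolean default), enabled via a positively-named flag — might look like this. Class and method names are illustrative, not the final Ignite API:

```java
// Hypothetical configuration sketch; not the actual Ignite thin-client API.
public class ClientConfigurationSketch {
    // boolean fields default to false, so the feature is off unless enabled.
    private boolean enableAffinityAwareness;

    public ClientConfigurationSketch setEnableAffinityAwareness(boolean enable) {
        this.enableAffinityAwareness = enable;
        return this; // chained-setter style, as in Ignite's configuration classes
    }

    public boolean isEnableAffinityAwareness() {
        return enableAffinityAwareness;
    }

    public static void main(String[] args) {
        ClientConfigurationSketch cfg = new ClientConfigurationSketch()
            .setEnableAffinityAwareness(true); // user opts in explicitly
        System.out.println(cfg.isEnableAffinityAwareness()); // true
    }
}
```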


Re: Best Effort Affinity for thin clients

2019-03-13 Thread Igor Sapego
By "false" I mean "disable" here.

BTW, I believe we should name this parameter in a positive way:
EnableAffinityAwareness, not disable.

Best Regards,
Igor



Re: Best Effort Affinity for thin clients

2019-03-13 Thread Igor Sapego
Well, yes, this looks like the simplest solution. Let's implement it to begin
with and set this feature to "false" by default, as the feature looks complex,
probably error-prone, and should be considered in a "beta" state for the first
release.

Best Regards,
Igor



Re: Best Effort Affinity for thin clients

2019-03-11 Thread Pavel Tupitsyn
My suggestion is a boolean flag in the client configuration:
DisableAffinityAwareness,
and use the old random/round-robin behavior with only one active connection.
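The round-robin fallback described here — cycling over the configured addresses when affinity awareness is off — might look roughly like this (an illustrative sketch, not the actual client code):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of round-robin address selection; the real client
// would hold a socket per chosen address rather than just a string.
public class RoundRobinSketch {
    private final List<String> addrs;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinSketch(List<String> addrs) {
        this.addrs = addrs;
    }

    // Cycle through the address list: addr1, addr2, ..., back to addr1.
    public String pick() {
        return addrs.get(Math.floorMod(next.getAndIncrement(), addrs.size()));
    }

    public static void main(String[] args) {
        RoundRobinSketch rr = new RoundRobinSketch(List.of("n1:10800", "n2:10800", "n3:10800"));
        for (int i = 0; i < 4; i++)
            System.out.println(rr.pick()); // n1, n2, n3, then back to n1
    }
}
```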


Re: Best Effort Affinity for thin clients

2019-03-11 Thread Igor Sapego
Pavel,

That's right. Do you have other suggestions or objections?

Best Regards,
Igor



Re: Best Effort Affinity for thin clients

2019-03-08 Thread Pavel Tupitsyn
> maxConnectionNumber parameter
What's the idea? Follow the Best Effort Affinity logic, but establish up to
N connections?


Re: Best Effort Affinity for thin clients

2019-03-07 Thread Igor Sapego
I can propose two improvements here:

1. A simple one. Let's introduce a maxConnectionNumber parameter
in ClientConfiguration. As it is easy to implement, it may be introduced
together with the new feature to give the user additional control.

2. Asynchronous connection establishment. In this case the client's startup
method returns control to the user once it has established at least one
connection. The other connections are established in the background by a
separate thread. This one is harder to implement, and maybe it makes sense
to add it as a separate feature.
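A rough sketch of how the two proposals could combine: connect to the first address synchronously, then open the remaining connections, capped by maxConnectionNumber, from a background thread. All names here are hypothetical, not the actual thin-client implementation, and the "connection" is a stub standing in for a real socket handshake:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical sketch combining a maxConnectionNumber cap with asynchronous
// establishment of the remaining connections.
public class AsyncConnectSketch {
    static class Connection {
        final String addr;
        Connection(String addr) { this.addr = addr; } // stub for a real handshake
    }

    final List<Connection> connections = new CopyOnWriteArrayList<>();

    // Returns once ONE connection is up; the rest are opened in background.
    // maxConnectionNumber is treated as at least 1.
    void start(List<String> addrs, int maxConnectionNumber) {
        int cap = Math.max(1, Math.min(maxConnectionNumber, addrs.size()));
        connections.add(new Connection(addrs.get(0))); // synchronous part
        Thread bg = new Thread(() -> {
            for (int i = 1; i < cap; i++)
                connections.add(new Connection(addrs.get(i))); // background part
        });
        bg.start();
        try {
            bg.join(); // joined here only so this demo is deterministic
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        AsyncConnectSketch client = new AsyncConnectSketch();
        client.start(List.of("node1:10800", "node2:10800", "node3:10800"), 2);
        System.out.println(client.connections.size()); // maxConnectionNumber caps this at 2
    }
}
```

In a real client the background thread would retry failures and the join would be dropped, which is exactly why this variant is harder to get right.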

Best Regards,
Igor


On Wed, Mar 6, 2019 at 9:43 PM Pavel Tupitsyn  wrote:

> Hi,
>
> I'm in progress of implementing this IEP for Ignite.NET, and I have
> concerns about the following:
>
> > On thin client startup it connects to all nodes provided by client
> configuration
>
> Should we, at least, make this behavior optional?
>
> One of the benefits of thin client is quick startup/connect time and low
> resource usage.
> Adding "connect all" behavior can negate those benefits, especially on
> large clusters.
>
> Thoughts?
>
> On Thu, Feb 14, 2019 at 5:39 PM Igor Sapego  wrote:
>
> > Guys, I've updated the IEP page [1] once again.
> >
> > Please, pay attention to sections Cache affinity mapping acquiring
> > (4.a, format of Cache Partitions Request) and Changes to cache
> > operations with single key (3 and 4, algorithm).
> >
> > Long story short, I've decided to add some additional data to Cache
> > Partitions Response, so that client can determine how to calculate
> > partition for a given key properly.
> >
> > [1] -
> >
> >
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients
> >
> > Best Regards,
> > Igor
> >
> >
> > On Mon, Feb 4, 2019 at 8:24 PM Pavel Tupitsyn 
> > wrote:
> >
> > > Looks good to me.
> > >
> > > On Mon, Feb 4, 2019 at 6:30 PM Igor Sapego  wrote:
> > >
> > > > I've updated IEP page: [1]
> > > >
> > > > What do you think now? To me it looks cleaner.
> > > >
> > > > [1] -
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients
> > > >
> > > > Best Regards,
> > > > Igor
> > > >
> > > >
> > > > On Mon, Feb 4, 2019 at 4:44 PM Igor Sapego 
> wrote:
> > > >
> > > > > Ok, I understand now. I'll try updating IEP according to this
> > proposal
> > > > and
> > > > > notify you guys.
> > > > >
> > > > > Best Regards,
> > > > > Igor
> > > > >
> > > > >
> > > > > On Mon, Feb 4, 2019 at 4:27 PM Vladimir Ozerov <
> voze...@gridgain.com
> > >
> > > > > wrote:
> > > > >
> > > > >> Igor,
> > > > >>
> > > > >> My idea is simply to add the list of caches with the same
> > distribution
> > > > to
> > > > >> the end of partition response. Client can use this information to
> > > > populate
> > > > >> partition info for more caches in a single request.
> > > > >>
> > > > >> On Mon, Feb 4, 2019 at 3:06 PM Igor Sapego 
> > > wrote:
> > > > >>
> > > > >> > Vladimir,
> > > > >> >
> > > > >> > So correct me if I'm wrong, what you propose is to avoid
> > mentioning
> > > > >> > of cache groups, and use instead of "cache group" term something
> > > like
> > > > >> > "distribution"? Or do you propose some changes in protocol? If
> so,
> > > can
> > > > >> > you briefly explain, what kind of changes they are?
> > > > >> >
> > > > >> > Best Regards,
> > > > >> > Igor
> > > > >> >
> > > > >> >
> > > > >> > On Mon, Feb 4, 2019 at 1:13 PM Vladimir Ozerov <
> > > voze...@gridgain.com>
> > > > >> > wrote:
> > > > >> >
> > > > >> > > Igor,
> > > > >> > >
> > > > >> > > Yes, cache groups are public API. However, we try to avoid new
> > > APIs
> > > > >> > > depending on them.
> > > > >> > > The main point from 

Re: Best Effort Affinity for thin clients

2019-03-06 Thread Pavel Tupitsyn
Hi,

I'm in the process of implementing this IEP for Ignite.NET, and I have
concerns about the following:

> On thin client startup it connects to all nodes provided by client
configuration

Should we, at least, make this behavior optional?

One of the benefits of thin client is quick startup/connect time and low
resource usage.
Adding "connect all" behavior can negate those benefits, especially on
large clusters.

Thoughts?
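To illustrate, the opt-in behavior could be expressed as a configuration flag along these lines. This is only a sketch - the class shape and flag name are assumptions, not the actual thin client API:

```java
// Hypothetical sketch: an opt-in flag for the "connect to all nodes"
// behavior. All names here are illustrative assumptions, not the final API.
class ClientConfiguration {
    private String[] addresses;

    // Disabled by default, preserving quick startup and low resource usage.
    private boolean connectToAllNodes;

    ClientConfiguration setAddresses(String... addrs) {
        this.addresses = addrs;
        return this;
    }

    ClientConfiguration setConnectToAllNodes(boolean enabled) {
        this.connectToAllNodes = enabled;
        return this;
    }

    boolean isConnectToAllNodes() {
        return connectToAllNodes;
    }

    String[] getAddresses() {
        return addresses;
    }
}
```

A client that wants best-effort affinity on a large cluster would then opt in explicitly, e.g. `new ClientConfiguration().setAddresses("127.0.0.1:10800").setConnectToAllNodes(true)`, while existing clients keep the single-connection behavior.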


Re: Best Effort Affinity for thin clients

2019-02-14 Thread Igor Sapego
Guys, I've updated the IEP page [1] once again.

Please pay attention to the sections "Cache affinity mapping acquiring"
(4.a, the format of the Cache Partitions Request) and "Changes to cache
operations with a single key" (3 and 4, the algorithm).

Long story short, I've decided to add some additional data to the Cache
Partitions Response, so that the client can determine how to calculate
the partition for a given key properly.

[1] -
https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients

Best Regards,
Igor
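For illustration, the client-side routing that the IEP enables can be sketched as follows, assuming the simplified default mapping `partition = abs(keyHash) % partitionCount`. A real client must also honor the additional key-configuration data from the Cache Partitions Response mentioned above; all names in this sketch are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of best-effort affinity routing on the client side.
// Assumes partition = abs(keyHash) % partitionCount, mirroring the default
// rendezvous-style mapping; real clients must apply the key-configuration
// data from the Cache Partitions Response before hashing.
class AffinityRouter {
    private final int partitionCount;
    private final Map<Integer, String> partitionToNode = new HashMap<>();

    AffinityRouter(int partitionCount) {
        this.partitionCount = partitionCount;
    }

    /** Records which node owns a partition (from a partitions response). */
    void setOwner(int partition, String nodeAddress) {
        partitionToNode.put(partition, nodeAddress);
    }

    /** Maps a key hash to a partition; the remainder is negated if negative. */
    int partition(int keyHash) {
        int p = keyHash % partitionCount;
        return p < 0 ? -p : p;
    }

    /** Returns the preferred node for a key, or null if the mapping is unknown. */
    String route(int keyHash) {
        return partitionToNode.get(partition(keyHash));
    }
}
```

With a populated map, a single-key operation is sent straight to the owning node when known, and falls back to the default connection otherwise - hence "best effort".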



Re: Best Effort Affinity for thin clients

2019-02-04 Thread Pavel Tupitsyn
Looks good to me.


Re: Best Effort Affinity for thin clients

2019-02-04 Thread Igor Sapego
I've updated the IEP page: [1]

What do you think now? To me it looks cleaner.

[1] -
https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients

Best Regards,
Igor



Re: Best Effort Affinity for thin clients

2019-02-04 Thread Igor Sapego
Ok, I understand now. I'll try updating the IEP according to this proposal
and notify you guys.

Best Regards,
Igor



Re: Best Effort Affinity for thin clients

2019-02-04 Thread Vladimir Ozerov
Igor,

My idea is simply to add the list of caches with the same distribution to
the end of the partition response. The client can use this information to
populate partition info for multiple caches in a single request.
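Sketched in code, the proposal amounts to fanning one response out to all listed caches. The type and method names below are illustrative, not the actual protocol types:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the proposal: a single partitions response carries
// one distribution plus the IDs of every cache that shares it, and the
// client populates its per-cache affinity map from that one response.
class PartitionMappings {
    /** Partition-to-node mapping, keyed by cache ID. */
    private final Map<Integer, Map<Integer, String>> byCacheId = new HashMap<>();

    /** Applies one response to all caches sharing the same distribution. */
    void apply(Map<Integer, String> distribution, List<Integer> cacheIds) {
        for (int cacheId : cacheIds)
            byCacheId.put(cacheId, distribution); // one shared map, many caches
    }

    /** Returns the known mapping for a cache, or null if never requested. */
    Map<Integer, String> mappingFor(int cacheId) {
        return byCacheId.get(cacheId);
    }
}
```

The design point is that the client learns the mapping for N caches at the cost of one request, without the protocol ever naming cache groups.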


Re: Best Effort Affinity for thin clients

2019-02-04 Thread Igor Sapego
Vladimir,

So correct me if I'm wrong: what you propose is to avoid mentioning
cache groups, and to use a term like "distribution" instead of "cache
group"? Or do you propose some changes to the protocol? If so, can you
briefly explain what kind of changes they are?

Best Regards,
Igor



Re: Best Effort Affinity for thin clients

2019-02-04 Thread Vladimir Ozerov
Igor,

Yes, cache groups are a public API. However, we try to avoid making new
APIs depend on them.
The main point from my side is that “similar cache group” can easily be
generalized to “similar distribution”. This way we avoid cache groups at
the protocol level at virtually no cost.

Vladimir.


Re: Best Effort Affinity for thin clients

2019-02-04 Thread Igor Sapego
Guys,

Can you explain why we want to avoid cache groups in the protocol?

If it's about the simplicity of the protocol, then removing cache groups
will not help much with it - we will still need to include a "knownCacheIds"
field in the request and a "cachesWithTheSamePartitioning" field in the
response. And also, since when does Ignite prefer simplicity over
performance?

If it's about not wanting to expose Ignite's internals, then that sounds
like a very weak argument to me, since cache groups are a public feature [1].

[1] - https://apacheignite.readme.io/docs/cache-groups

Best Regards,
Igor


On Mon, Feb 4, 2019 at 11:47 AM Vladimir Ozerov 
wrote:

> Pavel, Igor,
>
> It is not entirely accurate to say that this will not save memory. In
> practice we observed a number of OOME issues on the server side due to many
> caches, and it was one of the motivations for cache groups (another one was
> disk access optimizations). On the other hand, I agree that we'd better
> avoid cache groups in the protocol, because this is an internal
> implementation detail which is likely (I hope so) to change in the future.
>
> So I have another proposal - let's track caches with the same affinity
> distribution instead. That is, normally most PARTITIONED caches will
> have very few configuration variants: it will be the Rendezvous affinity
> function, most likely with the default partition count and 1-2 backups at
> most. So when the affinity distribution for a specific cache is requested,
> we can append the *list of caches with the same distribution* to the
> response. I.e.:
>
> class AffinityResponse {
>     Object distribution;    // Actual distribution
>     List<Integer> cacheIds; // Caches with the same distribution
> }
>
> Makes sense?

Re: Best Effort Affinity for thin clients

2019-02-04 Thread Vladimir Ozerov
Pavel, Igor,

It is not quite accurate to say that this will not save memory. In
practice we observed a number of OOME issues on the server side caused by many
caches, and that was one of the motivations for cache groups (another was disk
access optimization). On the other hand, I agree that we'd better avoid
cache groups in the protocol, because they are an internal implementation detail
which is likely (I hope) to change in the future.

So I have another proposal - let's track caches with the same affinity
distribution instead. That is, normally most of PARTITIONED caches will
have very few variants of configuration: it will be Rendezvous affinity
function, most likely with default partition number and with 1-2 backups at
most. So when affinity distribution for specific cache is requested, we can
append to the response *list of caches with the same distribution*. I.e.:

class AffinityResponse {
    Object distribution;    // Actual distribution
    List<Integer> cacheIds; // Caches with the same distribution
}

Makes sense?
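To make the idea concrete, here is a hypothetical client-side sketch (illustrative names, not actual Ignite classes) of how a single distribution could be stored once and shared by every cache id listed in such a response:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Hypothetical client-side bookkeeping for the proposal above: one
 * partition-to-node distribution is kept once, and every cacheId reported
 * in the same response maps to the same array (shared reference, no copy).
 */
class SharedDistributionRegistry {
    /** cacheId -> distribution (partition index -> node index). */
    private final Map<Integer, int[]> distributionsByCache = new HashMap<>();

    /** Apply a response: all listed caches share one distribution array. */
    void apply(int[] distribution, List<Integer> cacheIds) {
        for (Integer cacheId : cacheIds)
            distributionsByCache.put(cacheId, distribution);
    }

    /** Node index owning the given partition, or -1 if the cache is unknown. */
    int nodeForPartition(int cacheId, int partition) {
        int[] dist = distributionsByCache.get(cacheId);
        return dist == null ? -1 : dist[partition];
    }
}
```

With this layout, N caches with identical partitioning cost one array plus N map entries, rather than N full partition maps.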

On Sun, Feb 3, 2019 at 8:31 PM Pavel Tupitsyn  wrote:

> Igor, I have a feeling that we should omit Cache Group stuff from the
> protocol.
> It is a rare use case and even then dealing with them on client barely
> saves some memory.
>
> We can keep it simple and have partition map per cacheId. Thoughts?

Re: Best Effort Affinity for thin clients

2019-02-03 Thread Pavel Tupitsyn
Igor, I have a feeling that we should omit Cache Group stuff from the
protocol.
It is a rare use case, and even then dealing with them on the client barely
saves any memory.

We can keep it simple and have partition map per cacheId. Thoughts?

On Fri, Feb 1, 2019 at 6:49 PM Igor Sapego  wrote:

> Guys, I've updated the proposal once again [1], so please,
> take a look and let me know what you think.
>
> [1] -
>
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients
>
> Best Regards,
> Igor


Re: Best Effort Affinity for thin clients

2019-02-01 Thread Igor Sapego
Guys, I've updated the proposal once again [1], so please,
take a look and let me know what you think.

[1] -
https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients

Best Regards,
Igor


On Thu, Jan 17, 2019 at 1:05 PM Igor Sapego  wrote:

> Yeah, I'll add it.
>
> Best Regards,
> Igor


Re: Best Effort Affinity for thin clients

2019-01-17 Thread Igor Sapego
Yeah, I'll add it.

Best Regards,
Igor


On Wed, Jan 16, 2019 at 11:08 PM Pavel Tupitsyn 
wrote:

> >  to every server
> I did not think of this issue. Now I agree with your approach.
> Can you please add an explanation of this to the IEP?
>
> Thanks!


Re: Best Effort Affinity for thin clients

2019-01-16 Thread Igor Sapego
Pavel,

Yeah, it makes sense, but it seems to me that this approach can lead
to more complicated client logic, as it will require an additional call
to every server that reports an affinity topology change.

Guys, WDYT?

Best Regards,
Igor


On Tue, Jan 15, 2019 at 10:59 PM Pavel Tupitsyn 
wrote:

> Igor,
>
> >  It is proposed to add flag to every response, that shows whether the
> Affinity Topology Version of the cluster has changed since the last request
> from the client.
> I propose to keep this flag. So no need for periodic checks. Makes sense?


Re: Best Effort Affinity for thin clients

2019-01-15 Thread Pavel Tupitsyn
Igor,

>  It is proposed to add flag to every response, that shows whether the
Affinity Topology Version of the cluster has changed since the last request
from the client.
I propose to keep this flag. So no need for periodic checks. Makes sense?
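A minimal sketch of the flag-based approach, assuming the flag arrives in every response header (illustrative names only, not the real client classes): the client never polls; it just invalidates its mappings when a response says the topology changed, and refreshes lazily before the next routed request.

```java
/**
 * Hypothetical sketch: every server response carries a flag saying whether
 * the affinity topology version changed since the client's last request.
 * The client refreshes its partition mappings lazily, so no periodic
 * polling is needed.
 */
class AffinityAwareClient {
    private boolean mappingsValid = true;
    private int refreshCount = 0;

    /** Called for every response; the flag comes from the response header. */
    void onResponse(boolean topologyChangedFlag) {
        if (topologyChangedFlag)
            mappingsValid = false; // refresh deferred to the next routed request
    }

    /** Before routing a key-based request, refresh mappings if stale. */
    void beforeRequest() {
        if (!mappingsValid) {
            refreshCount++;        // stands in for a Cache Partitions Request
            mappingsValid = true;
        }
    }

    int refreshCount() { return refreshCount; }
}
```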

On Tue, Jan 15, 2019 at 4:45 PM Igor Sapego  wrote:

> Pavel,
>
> This will require from client to send this new request periodically, I'm
> not
> sure this will make clients simpler. Anyway, let's discuss it.
>
> Vladimir,
>
> With current proposal, we will have affinity info in message header.
>
> Best Regards,
> Igor


Re: Best Effort Affinity for thin clients

2019-01-15 Thread Igor Sapego
Pavel,

This will require the client to send this new request periodically; I'm not
sure that will make clients simpler. Anyway, let's discuss it.

Vladimir,

With current proposal, we will have affinity info in message header.

Best Regards,
Igor


On Tue, Jan 15, 2019 at 11:01 AM Vladimir Ozerov 
wrote:

> Igor,
>
> I think that "Cache Partitions Request" should contain affinity topology
> version. Otherwise we do not know what distribution is returned - the one
> we expected, or some newer one. The latter may happen in case topology
> changed or late affinity assignment happened between server response and
> subsequent client partitions request.
>
> Vladimir.


Re: Best Effort Affinity for thin clients

2019-01-15 Thread Vladimir Ozerov
Igor,

I think that "Cache Partitions Request" should contain affinity topology
version. Otherwise we do not know what distribution is returned - the one
we expected or some newer one. The latter may happen if the topology
changed, or a late affinity assignment happened between the server response
and the subsequent client partitions request.

Vladimir.
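For illustration, the version being discussed could be modeled as a (major, minor) pair, where the minor part covers late affinity assignment; comparing the version in a partitions response against the last observed one tells the client whether the returned distribution is stale or newer. This is a hypothetical sketch, not Ignite's actual class:

```java
/**
 * Hypothetical sketch of an affinity topology version: the major part
 * changes on node join/leave, the minor part on late affinity assignment.
 */
class AffinityTopologyVersion implements Comparable<AffinityTopologyVersion> {
    final long major; // incremented on topology change (node join/leave)
    final int minor;  // incremented on late affinity assignment

    AffinityTopologyVersion(long major, int minor) {
        this.major = major;
        this.minor = minor;
    }

    @Override public int compareTo(AffinityTopologyVersion o) {
        int cmp = Long.compare(major, o.major);
        return cmp != 0 ? cmp : Integer.compare(minor, o.minor);
    }
}
```

A response whose version compares greater than the client's last observed version indicates the client received a newer distribution than the one it asked about.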

On Mon, Jan 14, 2019 at 6:08 PM Igor Sapego  wrote:

> Hello guys,
>
> I've updated IEP page [1] describing proposed solution in more details and
> proposing some changes for a protocol.
>
> Please, take a look and let me know what you think.
>
> [1] -
>
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients
>
> Best Regards,
> Igor


Re: Best Effort Affinity for thin clients

2019-01-14 Thread Pavel Tupitsyn
Hi Igor,

Looks good to me in general, except that it changes the response message
format so much.

Can we use a separate message to retrieve affinity topology version?
Set a flag as you describe, but don't put the version data into standard
response?

Just to keep the protocol cleaner, follow SRP to some extent, and keep
client implementations simpler
(especially for clients that ignore this flag).

Thoughts?

On Mon, Jan 14, 2019 at 6:08 PM Igor Sapego  wrote:

> Hello guys,
>
> I've updated IEP page [1] describing proposed solution in more details and
> proposing some changes for a protocol.
>
> Please, take a look and let me know what you think.
>
> [1] -
>
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients
>
> Best Regards,
> Igor


Re: Best Effort Affinity for thin clients

2019-01-14 Thread Igor Sapego
Hello guys,

I've updated IEP page [1] describing proposed solution in more details and
proposing some changes for a protocol.

Please, take a look and let me know what you think.

[1] -
https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients

Best Regards,
Igor


On Tue, Jun 19, 2018 at 11:54 AM Vladimir Ozerov 
wrote:

> Denis,
>
> Yes, in principle we can extend it. We are going to implement it in
> subsequent phases of this IEP.


Re: Best Effort Affinity for thin clients

2018-06-19 Thread Vladimir Ozerov
Denis,

Yes, in principle we can extend it. We are going to implement it in
subsequent phases of this IEP.



Re: Best Effort Affinity for thin clients

2018-06-18 Thread Igor Sapego
I've created an IEP: [1]

[1] -
https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients
Best Regards,
Igor



Re: Best Effort Affinity for thin clients

2018-06-14 Thread Pavel Tupitsyn
Ok, I see, this is what I was trying to understand, and this is an
important note I think:

* We should request AffinityFunction for each particular cache and only
enable this functionality for known functions
* Make sure that known server-side functions never change their behavior

Thanks



Re: Best Effort Affinity for thin clients

2018-06-14 Thread Igor Sapego
Vladimir is right,

As far as I know, most users use affinity functions provided by Ignite.
So we could optimize for the default case and, in the future, optionally
let users implement their own AffinityFunction for thin clients.

Best Regards,
Igor




Re: Best Effort Affinity for thin clients

2018-06-14 Thread Vladimir Ozerov
Pavel,

The idea here is that optimization will be applicable only for well-known
affinity functions. E.g., we know that for rendezvous affinity, partition
is "hash(key) % partitions". This is all we need to make default affinity
work.
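
For the default case Vladimir describes, the client-side computation could look like the following sketch. This is illustrative only (the class name and constructor are hypothetical, not the actual Ignite thin-client API), and it assumes the simplified "hash(key) % partitions" mapping from the message above; the hash is masked to keep it non-negative:

```java
// Hypothetical sketch of client-side partition calculation for a
// well-known affinity function, assuming partition = hash(key) % partitions.
public final class ClientPartitionCalculator {
    private final int partitions; // total partition count, received from the server

    public ClientPartitionCalculator(int partitions) {
        this.partitions = partitions;
    }

    /** Maps a key to a partition; the mask keeps the hash non-negative. */
    public int partition(Object key) {
        int hash = key.hashCode();
        return (hash & Integer.MAX_VALUE) % partitions;
    }
}
```

The key point is that the client only needs the partition count and the hashing rule, not the server's AffinityFunction implementation.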



Re: Best Effort Affinity for thin clients

2018-06-14 Thread Pavel Tupitsyn
AffinityFunction interface has the following method:
int partition(Object key)

User calls cache.put(x,y) from the client.

In order to calculate the target node, we have to call that partition method
and then use the partition map to get the node for the partition.

But client does not have AffinityFunction.
Where am I wrong here?
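
The two-step lookup Pavel describes (key → partition, then partition → node) can be sketched as below. All names here are hypothetical, and the key-to-partition step assumes the simplified hash-modulo mapping discussed in this thread rather than the server's actual AffinityFunction:

```java
import java.util.Map;

// Illustrative sketch (not real Ignite API) of best-effort routing:
// compute the partition for a key, then look up the owning node in the
// partition map previously received from the server.
public final class BestEffortRouter {
    private final Map<Integer, String> partitionMap; // partition -> node id
    private final int partitions;
    private final String defaultNode; // fallback when the owner is unknown

    public BestEffortRouter(Map<Integer, String> partitionMap,
                            int partitions, String defaultNode) {
        this.partitionMap = partitionMap;
        this.partitions = partitions;
        this.defaultNode = defaultNode;
    }

    public String nodeForKey(Object key) {
        int part = (key.hashCode() & Integer.MAX_VALUE) % partitions;
        // Best effort: if the partition owner is unknown, fall back to any node.
        return partitionMap.getOrDefault(part, defaultNode);
    }
}
```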



Re: Best Effort Affinity for thin clients

2018-06-14 Thread Igor Sapego
Denis, that's right.

Best Regards,
Igor




Re: Best Effort Affinity for thin clients

2018-06-13 Thread Denis Magda
Pavel,

Most likely the client will be pulling the partitioning map periodically.
If the local map is outdated, it won't be a big deal because a server node
that receives a request:

   - can redirect it to a node that owns the partition
   - will add an updated partition map to the client's response, or will
   set a special flag in the response suggesting the client do that manually.

Igor, is this what you're suggesting?

--
Denis
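
The two server-side options above can be sketched as follows. This is a hypothetical illustration (response fields and class names are invented for the sketch, not part of the actual protocol): on a non-local hit, the server either piggybacks a fresh partition map on the response or only sets a flag so the client re-requests it:

```java
// Hypothetical sketch of a server handling a request that landed on a
// node that does not own the key's partition.
public final class ServerResponseBuilder {
    /** Simulated response: payload plus affinity-change information. */
    public record CacheResponse(byte[] payload, boolean affinityChanged, int[] partitionMap) {}

    public CacheResponse build(byte[] payload, boolean keyWasLocal,
                               boolean sendFullMap, int[] currentMap) {
        if (keyWasLocal)
            return new CacheResponse(payload, false, null);
        // Option 1: piggyback the updated map; option 2: set only the flag
        // and let the client fetch the map itself.
        return new CacheResponse(payload, true, sendFullMap ? currentMap : null);
    }
}
```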



Re: Best Effort Affinity for thin clients

2018-06-13 Thread Pavel Tupitsyn
Hi Igor,

How can we invoke the affinity function on the client, if we don't have the
implementation at hand?
Am I missing something?

Thanks,
Pavel




Best Effort Affinity for thin clients

2018-06-13 Thread Igor Sapego
Hi, Igniters,

Currently, I'm working on the thin C++ client implementation.
As you may already know, there is an issue with latency in our
thin clients, which also can result in performance issues (you
can see the "About Ignite Thin client performance" thread on
user list).

So, how about we implement some kind of "Best Effort Affinity"
for our thin clients? In my opinion, this is feasible and
may dramatically improve mean latency when using thin clients.

The scenario is following:
1. The thin client connects to one of the nodes from the provided
address list, just as now.
2. When the user creates an instance of CacheClient, the thin client
requests the partition mapping for the cache.
3. The client establishes connections to the nodes that appear both in
the user-provided list and in the server node's response.
4. When the user makes a put/get/other cache operation, the thin
client makes a best effort to send the request to the node that
stores the data.
5. To update the partition mapping, the thin client can provide a
public API, or do it on some timeout. Also, we can add a "miss" flag
to the cache operation response, which will indicate that the
operation was not local for the server node, and which the thin
client can use to detect that the partition mapping has changed and
request an update from a server node.
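
Step 5's "miss" flag handling on the client side could look like this minimal sketch. All names are hypothetical; the idea is only that a miss response marks the cached partition mapping as stale so it gets refreshed before a later operation:

```java
// Hedged sketch of the proposed "miss" flag: if the server reports the
// operation was not local, the client marks its partition mapping stale
// and refreshes it lazily.
public final class AffinityAwareClient {
    /** Simulated server response carrying the proposed "miss" flag. */
    public record Response(byte[] payload, boolean miss) {}

    private boolean mappingStale;

    public byte[] handle(Response serverResponse) {
        if (serverResponse.miss())
            mappingStale = true; // fetch a fresh partition map before the next routed request
        return serverResponse.payload();
    }

    public boolean needsMappingRefresh() {
        return mappingStale;
    }
}
```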

What do you think?

Best Regards,
Igor