None of the described approaches provides a 100% guarantee of hitting the
primary node in all conditions.
And it is fine to miss a few requests. I don't see a reason to increase
complexity trying to optimize a rare use case.

On Tue, Oct 17, 2023 at 2:49 PM <vlads...@gmail.com> wrote:

> What if the topology change event precedes the service redeployment and the
> service mapping change? There is a possibility for the client to save the new
> topology version before the services are actually redeployed. If we rely on
> the actual change of the service mapping (redeployment), there is no such
> problem.
>
> On 17.10.2023 13:44, Pavel Tupitsyn <ptupit...@apache.org> wrote:
> > I think if it's good enough for cache partition awareness, then it's good
> > enough for services. Topology changes are not that frequent.
> >
> > On Tue, Oct 17, 2023 at 12:22 PM <vlads...@gmail.com> wrote:
> >
> > > Hi, Pavel.
> > >
> > > 1. Correct.
> > > 2. Yes, the client watches the ClientFlag.AFFINITY_TOPOLOGY_CHANGED flag
> > > and sends an additional ClientOperation.CLUSTER_GROUP_GET_NODE_ENDPOINTS
> > > request to get the new cluster topology. Thus, the topology updates with
> > > some delay. We could watch this event somehow in the service proxy. But a
> > > direct service topology version in the call responses should work faster
> > > while the service is being requested. Or do you think this is not
> > > significant?
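> > >
> > > Roughly, the lazy refresh could look like this (just a sketch; the flag
> > > bit value and the method names are illustrative, not the real client
> > > internals):
> > >
> > > // Sketch only: the flag bit and helper names are assumptions.
> > > class ServiceTopologyTracker {
> > >     private static final int AFFINITY_TOPOLOGY_CHANGED_FLAG = 1; // assumed bit
> > >
> > >     private volatile boolean stale = true;
> > >
> > >     /** Called for every response header received from a server. */
> > >     void onResponseFlags(int flags) {
> > >         if ((flags & AFFINITY_TOPOLOGY_CHANGED_FLAG) != 0)
> > >             stale = true; // refresh lazily, on the next service call
> > >     }
> > >
> > >     /** Checked by the service proxy before routing an invocation. */
> > >     boolean needsRefresh() {
> > >         return stale;
> > >     }
> > >
> > >     /** Called after the topology has been re-requested. */
> > >     void onTopologyRefreshed() {
> > >         stale = false;
> > >     }
> > > }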
> > >
> > >
> > > On 17.10.2023 11:13, Pavel Tupitsyn <ptupit...@apache.org> wrote:
> > >> Hi Vladimir,
> > >>
> > >> 1. The topology of a deployed service can change only when the cluster
> > >> topology changes.
> > >> 2. We already have a topology change flag in every server response.
> > >>
> > >> Therefore, the client can request the topology once per service, and
> > >> refresh it when the cluster topology changes, right?
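> > >>
> > >> As a minimal sketch of that idea (ServiceTopologyCache and
> > >> fetchServiceTopology are hypothetical names, not the existing client
> > >> API):
> > >>
> > >> import java.util.List;
> > >> import java.util.Map;
> > >> import java.util.UUID;
> > >> import java.util.concurrent.ConcurrentHashMap;
> > >>
> > >> class ServiceTopologyCache {
> > >>     private final Map<String, List<UUID>> topologies = new ConcurrentHashMap<>();
> > >>
> > >>     private volatile long clusterTopVer = -1;
> > >>
> > >>     /** Cached node set for a service; dropped on cluster topology change. */
> > >>     List<UUID> nodesFor(String serviceName, long currentClusterTopVer) {
> > >>         if (currentClusterTopVer != clusterTopVer) {
> > >>             topologies.clear(); // cluster changed: forget all service topologies
> > >>             clusterTopVer = currentClusterTopVer;
> > >>         }
> > >>         return topologies.computeIfAbsent(serviceName, this::fetchServiceTopology);
> > >>     }
> > >>
> > >>     private List<UUID> fetchServiceTopology(String serviceName) {
> > >>         // Hypothetical request to the cluster for the service's node set.
> > >>         throw new UnsupportedOperationException("sketch only");
> > >>     }
> > >> }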
> > >>
> > >>
> > >> On Mon, Oct 16, 2023 at 8:17 PM Vladimir Steshin <vlads...@gmail.com>
> > >> wrote:
> > >>
> > >>> Hi Igniters! I propose to add the /service awareness feature to the
> > >>> thin client/. I remember a couple of users asked for it. Looks nice to
> > >>> have and simple to implement, similar to the partition awareness.
> > >>> Reason:
> > >>> A service can be deployed on only one or a few nodes. Currently, the
> > >>> thin client chooses one (or a random) node to invoke a service, so the
> > >>> service call is often, or even always, redirected to another server
> > >>> node. I think we would need:
> > >>> - Bring a new feature to the thin client protocol (no protocol version
> > >>> change).
> > >>> - Require the partition awareness flag to be enabled (it creates the
> > >>> required connections to the cluster).
> > >>> - Transfer the service topology in the service call response (the
> > >>> server node /already holds/ the needed service topology).
> > >>> - Keep the service topology in the client service proxy.
> > >>> If that is ok, my question is /how to update the service topology on
> > >>> the client/?
> > >>> I see the options:
> > >>> 1) Add a version to the service topology on the server node and on the
> > >>> client service proxy. Add the actual service topology to the service
> > >>> call response if the server's version is greater than the client's
> > >>> (actual > client).
> > >>> /Pros/: Always the most actual service topology version.
> > >>> /Cons/: Requires holding and syncing the topology version on the
> > >>> server nodes only for the sake of thin clients.
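> > >>>
> > >>> A rough sketch of the version check (the names and wire details here
> > >>> are assumptions, not a final design):
> > >>>
> > >>> import java.util.List;
> > >>> import java.util.UUID;
> > >>>
> > >>> final class VersionedServiceTopology {
> > >>>     final long ver;         // bumped on the server on each redeployment
> > >>>     final List<UUID> nodes; // nodes the service is deployed on
> > >>>
> > >>>     VersionedServiceTopology(long ver, List<UUID> nodes) {
> > >>>         this.ver = ver;
> > >>>         this.nodes = nodes;
> > >>>     }
> > >>>
> > >>>     /** Server side: attach the topology only if the client's copy is older. */
> > >>>     static boolean shouldAttach(VersionedServiceTopology server, long clientVer) {
> > >>>         return server.ver > clientVer;
> > >>>     }
> > >>> }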
> > >>> 2) Add the actual service topology to the service call response only
> > >>> if the service is not deployed on the current node. The client
> > >>> invalidates the received service topology every N invocations and/or
> > >>> every N seconds (/code constants/).
> > >>> /Pros/: Simple.
> > >>> /Cons/: Topology updates are delayed. Not the best load balancing.
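> > >>>
> > >>> For illustration, the client-side invalidation might look like this
> > >>> (the constants are example values only):
> > >>>
> > >>> import java.util.List;
> > >>> import java.util.UUID;
> > >>> import java.util.concurrent.atomic.AtomicLong;
> > >>>
> > >>> class ExpiringServiceTopology {
> > >>>     private static final int MAX_CALLS = 1_000;    // example constant
> > >>>     private static final long MAX_AGE_MS = 30_000; // example constant
> > >>>
> > >>>     private volatile List<UUID> nodes;
> > >>>     private volatile long fetchedAt;
> > >>>     private final AtomicLong calls = new AtomicLong();
> > >>>
> > >>>     /** True when the cached topology should be re-requested. */
> > >>>     boolean isStale() {
> > >>>         return nodes == null
> > >>>             || calls.incrementAndGet() % MAX_CALLS == 0
> > >>>             || System.currentTimeMillis() - fetchedAt > MAX_AGE_MS;
> > >>>     }
> > >>>
> > >>>     void update(List<UUID> fresh) {
> > >>>         nodes = fresh;
> > >>>         fetchedAt = System.currentTimeMillis();
> > >>>     }
> > >>> }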
> > >>> 3) Send from the client a hash of the known service node UUIDs in
> > >>> every service call request. Add the actual service topology to the
> > >>> service call response if the server's hash is not equal.
> > >>> /Pros/: Simple. Always the most actual service topology.
> > >>> /Cons/: Costs some CPU on every call.
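> > >>>
> > >>> Both sides could compute the hash like this (a sketch; any stable hash
> > >>> over the sorted node IDs would do):
> > >>>
> > >>> import java.util.ArrayList;
> > >>> import java.util.Collections;
> > >>> import java.util.List;
> > >>> import java.util.UUID;
> > >>>
> > >>> final class ServiceTopologyHash {
> > >>>     /** Stable hash over the node IDs, independent of iteration order. */
> > >>>     static long hash(List<UUID> nodeIds) {
> > >>>         List<UUID> sorted = new ArrayList<>(nodeIds);
> > >>>         Collections.sort(sorted);
> > >>>
> > >>>         long h = 0;
> > >>>         for (UUID id : sorted) {
> > >>>             h = 31 * h + id.getMostSignificantBits();
> > >>>             h = 31 * h + id.getLeastSignificantBits();
> > >>>         }
> > >>>         return h;
> > >>>     }
> > >>> }
> > >>>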
> > >>> WDYT?
> > >>>
> > >>
> > >>
> > >
> >
> >
>
