The default isn't the real problem, since it's trivial to change (see the one-liner below).
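
For reference, overriding the pick_first default is a single call on the channel builder. defaultLoadBalancingPolicy is still an experimental API as of 1.24.x, and the target below is just a placeholder:

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

ManagedChannel channel = ManagedChannelBuilder
    .forTarget("dns:///my-service.example.com:3112")  // placeholder target
    .defaultLoadBalancingPolicy("round_robin")        // override the pick_first default
    .usePlaintext()
    .build();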

A simple "round_robin" will work correctly if you have only one thread 
executing client calls. For multiple threads there is no such guarantee 
that next hedging call will get a next slot in the list of active 
subchannels in RoundRobinLoadBalancer.ReadyPicker


I solved this problem with a pluggable LoadBalancer implementation based 
on RoundRobinLoadBalancer:

   - The client interceptor creates a UUID for the call and puts it into 
   the headers in ClientCall.start().
   - RoundRobinLoadBalancer.ReadyPicker.pickSubchannel iterates over the 
   list of subchannels starting from the next index and checks whether the 
   subchannel's IP address is already in the list of previously used IP 
   addresses. If it is not, it adds the address to the list and returns 
   that subchannel. The list of previously used IP addresses is stored in 
   a map keyed by the UUID generated in the previous step.
   - The client interceptor removes the UUID key from the map in 
   ClientCall.Listener.onClose() (a sketch of both pieces follows this 
   list).
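
Roughly, the two pieces look like this. The class names, the "x-hedge-id" header, and the static map are my own; the custom LoadBalancer/LoadBalancerProvider that actually installs the picker is omitted, and the subchannel/address handling is simplified:

import io.grpc.*;
import java.net.SocketAddress;
import java.util.List;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Tags each logical call with a UUID and cleans up the bookkeeping in onClose().
final class HedgeIdInterceptor implements ClientInterceptor {
  static final Metadata.Key<String> HEDGE_ID =
      Metadata.Key.of("x-hedge-id", Metadata.ASCII_STRING_MARSHALLER);
  // call UUID -> addresses already used by that call's hedged attempts
  static final ConcurrentHashMap<String, Set<SocketAddress>> USED = new ConcurrentHashMap<>();

  @Override
  public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
      MethodDescriptor<ReqT, RespT> method, CallOptions callOptions, Channel next) {
    final String id = UUID.randomUUID().toString();
    return new ForwardingClientCall.SimpleForwardingClientCall<ReqT, RespT>(
        next.newCall(method, callOptions)) {
      @Override
      public void start(Listener<RespT> listener, Metadata headers) {
        headers.put(HEDGE_ID, id);              // same value for every hedged attempt of this call
        USED.put(id, ConcurrentHashMap.newKeySet());
        super.start(
            new ForwardingClientCallListener.SimpleForwardingClientCallListener<RespT>(listener) {
              @Override
              public void onClose(Status status, Metadata trailers) {
                USED.remove(id);                // drop the entry when the call completes
                super.onClose(status, trailers);
              }
            },
            headers);
      }
    };
  }
}

// Picker used by the custom balancer in place of RoundRobinLoadBalancer.ReadyPicker.
final class DistinctBackendPicker extends LoadBalancer.SubchannelPicker {
  private final List<LoadBalancer.Subchannel> subchannels;  // READY subchannels from the balancer
  private final AtomicInteger index = new AtomicInteger();

  DistinctBackendPicker(List<LoadBalancer.Subchannel> subchannels) {
    this.subchannels = subchannels;
  }

  @Override
  public LoadBalancer.PickResult pickSubchannel(LoadBalancer.PickSubchannelArgs args) {
    String id = args.getHeaders().get(HedgeIdInterceptor.HEDGE_ID);
    Set<SocketAddress> used = (id == null) ? null : HedgeIdInterceptor.USED.get(id);
    int start = index.getAndIncrement();
    for (int i = 0; i < subchannels.size(); i++) {
      LoadBalancer.Subchannel candidate =
          subchannels.get(Math.floorMod(start + i, subchannels.size()));
      SocketAddress addr = candidate.getAddresses().getAddresses().get(0);
      // add() returns false if this call already used the address; try the next subchannel.
      if (used == null || used.add(addr)) {
        return LoadBalancer.PickResult.withSubchannel(candidate);
      }
    }
    // Every address was already used by this call: fall back to plain round-robin.
    return LoadBalancer.PickResult.withSubchannel(
        subchannels.get(Math.floorMod(start, subchannels.size())));
  }
}

The picker falls back to ordinary round-robin once every address has been used, which matches the implementation-dependent behavior the proposal allows.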
   

Do you see any problems with this approach?

From https://github.com/grpc/proposal/blob/master/A6-client-retries.md:

*Hedged requests should be sent to distinct backends, if possible. To 
facilitate this, the gRPC client will maintain a list of previously used 
backend addresses for each hedged RPC. This list will be passed to the gRPC 
client's local load-balancing policy. The load balancing policy may use 
this information to send the hedged request to an address that was not 
previously used. If all available backend addresses have already been used, 
the load-balancing policy's response is implementation-dependent.*

Are there any plans to add this functionality to Java grpc-core?
  

On Wednesday, November 27, 2019 at 10:26:19 AM UTC-8, Sanjay Pujare wrote:
>
> David,
>
> Are you saying the default should be "round_robin"? With round_robin and 
> more than one endpoint available, it does behave as you expect.
>
> On Wednesday, November 27, 2019 at 10:20:53 AM UTC-8, David M wrote:
>>
>> Thanks for the answer! I do not think this is a good design. The main 
>> goal of hedging is to execute the remote call on different servers. Running 
>> hedging calls on the same server is worse than not having hedging at all. 
>> Is there a plan to redesign it?
>>  
>>
>> On Wednesday, November 20, 2019 at 4:36:33 PM UTC-8, Penn (Dapeng) Zhang 
>> wrote:
>>>
>>> The retry/hedging attempts will always pick a currently available 
>>> endpoint provided by the load balancer. If you are using pick_first load 
>>> balancing (which is the default), it will always provide the first 
>>> endpoint if it's available. If you are using round_robin load balancing, 
>>> then each retry/hedging attempt will pick the next available endpoint from 
>>> the load balancer, but if 2 out of 3 endpoints are unreachable, it will 
>>> always use the only available one. This is by design.
>>>
>>>
>>> On Thursday, November 14, 2019 at 3:20:24 PM UTC-8, David M wrote:
>>>>
>>>> I am using Java gRPC version 1.24.1. While debugging some timed-out 
>>>> requests I found that the client sometimes sends all hedging requests (3 
>>>> in my case) to the same server endpoint.
>>>> NameResolver.Listener.onAddresses() was updated with 3 distinct 
>>>> endpoints before the first call was made.
>>>> Is this a bug in the gRPC code? Even if 2 out of 3 servers were 
>>>> unreachable, having all hedging attempts go to a single endpoint is not 
>>>> the best option. 
>>>>
>>>>
>>>> status-code=DEADLINE_EXCEEDED status-description=deadline exceeded 
>>>> after 9999958644ns. [closed=[], 
>>>> open=[[remote_addr=xxx.com/xxx.xxx.xxx.xxx:3112], 
>>>> [remote_addr=xxx.com/xxx.xxx.xxx.xxx:3112], 
>>>> [remote_addr=xxx.com/xxx.xxx.xxx.xxx:3112]]]
>>>>
>>>> Thanks,
>>>>     David
>>>>
>>>>
>>>>
