Github user vanzin commented on the issue:

    https://github.com/apache/spark/pull/16503
  
    You can make `ask` blocking by waiting for its future (e.g. with 
`ThreadUtils.awaitResult`).
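    A minimal sketch of that pattern in plain Scala (Spark's `ThreadUtils.awaitResult` is an internal wrapper over the same idea, with nicer exception handling; the `ask` helper below is a hypothetical stand-in for `RpcEndpointRef.ask`):

    ```scala
    import scala.concurrent.{Await, Future}
    import scala.concurrent.duration._
    import scala.concurrent.ExecutionContext.Implicits.global

    // Hypothetical stand-in for an RpcEndpointRef.ask call, which returns a Future.
    def ask[T](reply: T): Future[T] = Future { reply }

    // Blocking on the future turns the async `ask` into a synchronous call,
    // with an explicit timeout controlled by the caller.
    val answer: Boolean = Await.result(ask(true), 10.seconds)
    ```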
    
    My point about not using `askWithRetry` is that it's basically an unneeded 
API, a leftover from the Akka days that no longer makes sense. It's prone to 
causing deadlocks (precisely because it's blocking), and it imposes 
restrictions on the caller (e.g. idempotency) that people generally don't pay 
much attention to when using it.
    
    If we can remove uses of `askWithRetry` as we find these issues, we can, at 
some point, finally get rid of the API altogether.
    
    > RPC layer doesn't drop message but message can be timeout. 
    
    Yes, it can time out. You can retry it (which is basically what 
`askWithRetry` does), but it should be such an edge case that failing the task 
should be fine.
    
    If you think about how the RPC layer works when you use `askWithRetry`, 
this is what happens:
    
    - first RPC is sent
    - remote end is blocked on something, RPC is waiting in the queue
    - sender re-sends the RPC
    - lather, rinse, repeat
    - at some point, the receiver works through the RPC queue and starts 
responding to the RPCs
    - it responds to the *first* RPC above first; the sender ignores the answer 
since that RPC has already timed out
    - lather, rinse, repeat
    - finally the last RPC is responded to and the sender sees the reply
    
    So it's a really expensive way of just doing `ask` with a longer timeout.
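    The sequence above can be modeled with a toy sketch (plain Scala, not 
Spark's RPC code): three timed-out attempts enqueue three copies of the same 
logical request, the receiver answers all of them in FIFO order, and only the 
reply to the last attempt is accepted.

    ```scala
    import scala.collection.mutable

    // Toy model of the retry storm: one logical request, several attempts.
    case class Rpc(id: Int, attempt: Int)

    val queue = mutable.Queue[Rpc]()
    val retries = 3

    // Sender: each timeout triggers a re-send while the remote end is stuck,
    // so several copies of the same request pile up in the receiver's queue.
    for (attempt <- 1 to retries) queue.enqueue(Rpc(id = 42, attempt = attempt))

    // Receiver finally unblocks and answers in FIFO order; the sender only
    // accepts the reply to its latest (not-yet-timed-out) attempt.
    var processed = 0
    var accepted = 0
    while (queue.nonEmpty) {
      val rpc = queue.dequeue()
      processed += 1
      if (rpc.attempt == retries) accepted += 1 // earlier replies are ignored
    }

    println(s"processed=$processed accepted=$accepted") // processed=3 accepted=1
    ```

    Every wasted round trip still costs serialization and a queue slot, which 
is why a single `ask` with a longer timeout is cheaper.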

