Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
> If we can remove uses of askWithRetry as we find these issues, we can, at some point, finally get rid of the API altogether.

What do you think about providing a *"blocking"* `ask` in `RpcE…
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
@vanzin
Thanks a lot for your comment. It's very helpful.
I'll change it to `ask`.
I think it makes sense to keep the receiver idempotent when handling
`AskPermissionToCommitOutput`, even t…
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16503
You can make `ask` blocking by waiting for its future (e.g. with
`ThreadUtils.awaitResult`).
My point about not using `askWithRetry` is that it's basically an unneeded
API, and a leftover from…
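The blocking pattern described here can be sketched with plain Scala futures. `Await.result` is essentially what Spark's `ThreadUtils.awaitResult` wraps (Spark adds exception wrapping); the `ask` stub below is a stand-in for illustration, not Spark's actual RPC API:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Simplified stand-in for an RPC ask: returns a Future reply.
def ask(msg: String): Future[Boolean] = Future { msg.nonEmpty }

// Block on the future until the reply arrives (or the timeout fires),
// the way ThreadUtils.awaitResult does inside Spark.
val canCommit: Boolean = Await.result(ask("AskPermissionToCommitOutput"), 10.seconds)
```

The caller sees a synchronous answer while the RPC layer stays asynchronous underneath, which is why a separate blocking `askWithRetry`-style API is unnecessary.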
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
@zsxwing, @vanzin
Maybe using `ask` in the method `canCommit` is not suitable (I think), because
`ask` returns a `Future`, but it should be a blocking process to get the result of
`AskPermissionToCommitOu…
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/16503
> In which case the executor will die (see `CoarseGrainedExecutorBackend::onDisconnected`).

Yeah. Didn't recall that. Then I agree that using `ask` is better.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16503
> It doesn't drop but the connection may be broken
In which case the executor will die (see
`CoarseGrainedExecutorBackend::onDisconnected`).
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/16503
> That was the case with akka (I think, not really sure), but the netty RPC layer doesn't drop messages. The new one is "exactly once".

It doesn't drop, but the connection may be broken. `ask…
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16503
> The RPC layer only guarantees at-most-once
That was the case with akka (I think, not really sure), but the netty RPC
layer doesn't drop messages. The new one is "exactly once".
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/16503
Good catch. Looks good to me.
@vanzin The RPC layer only guarantees at-most-once. Retry may still be
helpful in some cases, but the receiver should be idempotent. Either the current
change o…
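An idempotent receiver of the kind discussed here can be sketched as follows. This is a hypothetical, simplified coordinator for illustration only, not Spark's actual `OutputCommitCoordinator`: a repeated `AskPermissionToCommitOutput` from the same attempt gets the same answer, so a retried RPC cannot change the outcome.

```scala
import scala.collection.mutable

// Hypothetical sketch: the first attempt to ask for a (stage, partition)
// wins the commit lock; asking again returns the same decision.
class CommitCoordinator {
  // (stage, partition) -> attempt that holds the commit lock
  private val committers = mutable.Map[(Int, Int), Int]()

  def canCommit(stage: Int, partition: Int, attempt: Int): Boolean =
    synchronized {
      committers.get((stage, partition)) match {
        case Some(winner) => winner == attempt // duplicate ask: same answer
        case None =>
          committers((stage, partition)) = attempt // first ask wins
          true
      }
    }
}
```

With this shape, delivering the same message twice (e.g. after a retry) is harmless, while a different attempt asking for the same partition is still refused.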
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16503
I think this is another case where using `askWithRetry` makes no sense
given the guarantees of the RPC layer.
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
@zsxwing @kayousterhout @andrewor14 Could you please help take a look at
this?
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
@mccheah @JoshRosen @ash211 Could you please take a look at this?