It sounds like what Go is providing today would be equivalent to a KJ API
that returns two promises: one which waits for backpressure, and one which
waits for the RPC return value. In principle, we could provide a new
version of `send()` in C++ which provides this. But if it only recognizes
backpressure
On Wed, 24 Nov 2021, at 02:28, Ian Denhardt wrote:
> Quoting Kenton Varda (2021-11-23 19:20:50)
> > Cap'n Proto doesn't provide any backpressure from the underlying TCP
> > connection to the app, except through streaming. If you just make a ton
> > of calls all at once without waiting for returns,
Quoting Kenton Varda (2021-11-23 19:20:50)
>On Tue, Nov 23, 2021 at 5:01 PM Ian Denhardt <i...@zenhack.net> wrote:
>
> Ok, I think I get it, let me know if I have this right:
> [...]
>
>Right.
Ok, great. Thanks for your patience.
> Cap'n Proto doesn't provide any backpressure from the underlying TCP
> connection to the app, except through streaming.
On Tue, Nov 23, 2021 at 5:01 PM Ian Denhardt wrote:
> Ok, I think I get it, let me know if I have this right:
>
> The correct thing to do is to handle congestion/flow control for
> multiple calls on each object individually, using something like
> the mechanisms provided by the C++ implementation's streaming
> construct.
Ok, I think I get it, let me know if I have this right:
The correct thing to do is to handle congestion/flow control for
multiple calls on each object individually, using something like
the mechanisms provided by the C++ implementation's streaming
construct. This is important so that calls on dif
On Tue, Nov 23, 2021 at 3:59 PM Ian Denhardt wrote:
> What are apps *supposed* to do here? It isn't clear to me where else the
> backpressure is supposed to come from?
>
Apps should cap the number of write()s they have in-flight at once. (`->
stream` helps a lot with this, as it'll automatically
(Adding Louis to cc per his request)
Quoting Kenton Varda (2021-11-23 14:50:20)
>On Tue, Nov 23, 2021 at 12:41 PM Ian Denhardt <i...@zenhack.net> wrote:
>
> Wouldn't releasing it on return allow the caller to cause runaway
> memory usage by just never sending the finish? the return entry needs
> to be kept around in case calls are pipelined on it, and itself might
> take up some space
On Tue, Nov 23, 2021 at 12:41 PM Ian Denhardt wrote:
> Wouldn't releasing it on return allow the caller to cause runaway memory
> usage by just never sending the finish? the return entry needs to be kept
> around in case calls are pipelined on it, and itself might take up some
> space (arguably i
Quoting Kenton Varda (2021-11-23 13:02:32)
>Hmm, I think the intention was that the flow limit should be released
>on Return, independent of Finish. But I can totally believe I
>implemented it wrong. Could we just change it to be based on Return?
>FWIW by default there is no flow limit
Hmm, I think the intention was that the flow limit should be released on
Return, independent of Finish. But I can totally believe I implemented it
wrong. Could we just change it to be based on Return?
FWIW by default there is no flow limit, it's only enabled in the Sandstorm
supervisor to defend a
Hey all,
A few days ago one of my co-maintainers (Louis) alerted me to a deadlock
in the Go implementation. We've pinned down the cause, and while trying
to figure out how to fix it, I looked into how the C++ implementation
handles backpressure.
From what I can tell, the only way backpressure is