It sounds like what Go is providing today would be equivalent to a KJ API that returns two promises: one which waits for backpressure, and one which waits for the RPC return value. In principle, we could provide a new version of `send()` in C++ which provides this. But if it only recognizes backpressure from the first-hop socket, it could cause more harm than good, encouraging people to write apps that tend to cause proxy buffer bloat. Backpressure in Cap'n Proto really needs to be based on return messages in order to handle proxies correctly. This is what `-> stream` provides.
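The two-promise idea could be sketched in Go roughly as follows. This is a hypothetical shape, not the actual capnproto-go API: `callHandle`, `Sent`, and `Returned` are illustrative names, and the "RPC" here is simulated locally.

```go
package main

import "fmt"

// Result of a hypothetical two-stage send: one signal fires when the
// message has cleared local backpressure (i.e. has been written to the
// socket), and one delivers the RPC's return value. These names are
// illustrative only, not part of any real Cap'n Proto API.
type callHandle struct {
	Sent     <-chan struct{} // closed once the request is flushed
	Returned <-chan string   // delivers the RPC result
}

// send simulates issuing a call: it "flushes" immediately and answers later.
func send(req string) callHandle {
	sent := make(chan struct{})
	ret := make(chan string, 1)
	go func() {
		close(sent) // local write completed: backpressure released
		ret <- "echo: " + req
	}()
	return callHandle{Sent: sent, Returned: ret}
}

func main() {
	h := send("hello")
	<-h.Sent // wait only for backpressure, then pipeline more calls
	fmt.Println(<-h.Returned)
}
```

A caller wanting throughput would wait only on `Sent` before issuing the next call, touching `Returned` later; but as noted above, if `Sent` reflects only the first-hop socket, this gives a false signal when a proxy sits in the middle.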
(Note we definitely wouldn't want to stall the whole KJ event loop when one connection has backpressure. Applications may be doing many concurrent tasks on the same event loop.)

-Kenton

On Wed, Nov 24, 2021 at 6:05 AM Erin Shepherd <erin.sheph...@e43.eu> wrote:

> On Wed, 24 Nov 2021, at 02:28, Ian Denhardt wrote:
>
> > Quoting Kenton Varda (2021-11-23 19:20:50)
> >
> > > Cap'n Proto doesn't provide any backpressure from the underlying TCP
> > > connection to the app, except through streaming. If you just make a ton
> > > of calls all at once without waiting for returns, you'll bloat your
> > > memory with unsent messages. And possibly worse: if the capability
> > > bounces through a proxy, and you have a fast connection (say, a unix
> > > socket) to the proxy, but a slow connection from the proxy to the
> > > eventual destination, you'll end up bloating the proxy's memory.
> >
> > This is not true of the Go implementation, which currently blocks
> > until the message has been written to the socket. (We don't currently
> > treat streaming specially, but presumably we could keep the blocking
> > API when we add that; I don't think we even need to treat the annotation
> > specially, we should be able to do it for all calls.) So I don't think
> > this applies to a scenario where both sides of the connection work like
> > the Go implementation. But I hadn't thought about the proxy issue
> > (where the proxy might be using a different implementation); thank
> > you for pointing that out.
>
> I guess this is a matter of futures implementation: callback-based futures
> (like KJ's) often do not provide such a backpressure mechanism. Other
> solutions (like Go fibers or Rust poll-based futures) can easily provide
> such a mechanism, though.
>
> If a concurrency limiter were provided at the capnp-server level (i.e.
> allow no more than X RPCs to be in flight at once), I wonder if that would
> help things?
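The concurrency limiter Erin suggests could be sketched generically in Go as a counting semaphore built on a buffered channel, capping how many calls may be outstanding at once. This is a standalone sketch, not capnproto-go's actual server interface; `limiter` and its methods are invented for illustration.

```go
package main

import (
	"fmt"
	"sync"
)

// limiter caps the number of in-flight RPCs using a buffered channel as
// a counting semaphore. Generic sketch; not a real capnp-go type.
type limiter struct{ slots chan struct{} }

func newLimiter(max int) *limiter {
	return &limiter{slots: make(chan struct{}, max)}
}

func (l *limiter) acquire() { l.slots <- struct{}{} } // blocks when full
func (l *limiter) release() { <-l.slots }

func main() {
	lim := newLimiter(4) // allow at most 4 calls in flight
	var wg sync.WaitGroup
	var mu sync.Mutex
	inFlight, peak := 0, 0

	for i := 0; i < 20; i++ {
		wg.Add(1)
		lim.acquire() // blocks the caller once 4 calls are outstanding
		go func() {
			defer wg.Done()
			defer lim.release()
			mu.Lock()
			inFlight++
			if inFlight > peak {
				peak = inFlight
			}
			mu.Unlock()
			// ... the actual RPC would happen here ...
			mu.Lock()
			inFlight--
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println("peak in-flight within limit:", peak <= 4)
}
```

Because `acquire()` blocks the caller rather than the event loop, this maps naturally onto the Go implementation's blocking style; a KJ-based version would instead need to return a promise for an available slot.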
> --
>
> As to the original topic: I wonder if a message could be added which
> advises the peer as to limits on the number of questions and/or the maximum
> size of them which may be in flight? The client might then be able to
> throttle itself, and the sender could then reject (immediately, with an
> error) any calls once that limit was exceeded. This would keep the socket
> unblocked for finish messages.

--
You received this message because you are subscribed to the Google Groups "Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email to capnproto+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/capnproto/CAJouXQmxKav171EdQkf5iP90BRxe2iBmcwgOrG_v4NBUX3y%2BPA%40mail.gmail.com.