Hmm, I think the intention was that the flow limit should be released on
Return, independent of Finish. But I can totally believe I implemented it
wrong. Could we just change it to be based on Return?
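To sketch what I mean (hypothetical names, not the actual C++ or Go API): the limiter would charge quota when an incoming call is accepted and credit it back as soon as the Return goes out, so a Finish stuck behind queued calls can never hold the quota hostage.

```go
package main

import (
	"fmt"
	"sync"
)

// FlowLimiter is an illustrative sketch of a flow limit that releases
// quota on Return rather than Finish. Names are made up for this example.
type FlowLimiter struct {
	mu    sync.Mutex
	cond  *sync.Cond
	limit uint64 // maximum total size of in-flight call arguments
	used  uint64
}

func NewFlowLimiter(limit uint64) *FlowLimiter {
	fl := &FlowLimiter{limit: limit}
	fl.cond = sync.NewCond(&fl.mu)
	return fl
}

// StartCall blocks until size bytes of quota are available, then
// reserves them. Called when an incoming Call is accepted.
func (fl *FlowLimiter) StartCall(size uint64) {
	fl.mu.Lock()
	defer fl.mu.Unlock()
	for fl.used+size > fl.limit {
		fl.cond.Wait()
	}
	fl.used += size
}

// ReturnSent releases the quota as soon as the Return message is sent,
// NOT when the Finish arrives, so a delayed Finish cannot deadlock us.
func (fl *FlowLimiter) ReturnSent(size uint64) {
	fl.mu.Lock()
	defer fl.mu.Unlock()
	fl.used -= size
	fl.cond.Broadcast()
}

func main() {
	fl := NewFlowLimiter(100)
	fl.StartCall(60)
	fl.ReturnSent(60) // quota back immediately; Finish may still be in flight
	fl.StartCall(80)  // does not block, even though no Finish was seen
	fmt.Println("used:", fl.used)
}
```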

FWIW by default there is no flow limit, it's only enabled in the Sandstorm
supervisor to defend against an app sending excessive requests that end up
queued in memory elsewhere in the system.

-Kenton

On Tue, Nov 23, 2021 at 11:51 AM Ian Denhardt <i...@zenhack.net> wrote:

> Hey all,
>
> A few days ago one of my co-maintainers (Louis) alerted me to a deadlock
> in the Go implementation. We've pinned down the cause, and while trying
> to figure out how to fix it, I looked into how the C++ implementation
> handles backpressure.
>
> From what I can tell, the only way backpressure is applied is via the
> flow limit, which limits the total size of arguments to in-flight
> incoming calls. The portion of the quota reserved by a call is returned
> to the pool when the call is removed from the questions table, which
> makes sense, since this is when the memory is actually freed.
>
> However, I see two possible deadlocks that could occur because of this.
>
> The one I am less concerned about is one where calls that depend on one
> another bounce back and forth between vats until both vats exceed their
> quotas and block on one another, causing a deadlock. I am less concerned
> about this case because it is essentially the RPC equivalent of a stack
> overflow, and it could be turned from a hang into a thrown exception by
> adding a timeout or the like.
>
> The one I'm more worried about comes up in the context of streaming;
> the problematic scenario is as follows:
>
> Alice in vat A is continuously streaming calls to Bob in vat B. It is
> possible, and expected, that at some point Alice will cause vat B's
> flow limit to be reached, at which point further calls will block
> until some outstanding calls return. Good so far: this backpressure
> is exactly what we want.
>
> The problem arises after the return message arrives at vat A. Vat A
> then sends a finish message, but that message is queued *behind other
> calls*, so it will not reach vat B until vat B has read in all of the
> outstanding calls. That will never happen, since vat B is blocked on
> the flow limit.
>
> I don't know how to avoid this problem with the protocol as currently
> specified. One approach that almost works is for vat A to send the
> finish message for each streaming call before the next call is sent,
> relying on the -> stream annotations to know which calls to do this
> for. But this doesn't quite work, since vat B is allowed to cancel an
> ongoing call in response to a finish message. An extension to the
> protocol allowing non-cancelling finish messages would solve this.
>
> Is there a solution that I haven't seen? Are there other ways
> of dealing with this in the protocol?
>
> -Ian
>
> --
> You received this message because you are subscribed to the Google Groups
> "Cap'n Proto" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to capnproto+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/capnproto/163768976630.4734.18127071831897488161%40localhost.localdomain
> .
>
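To make the head-of-line blocking in Ian's scenario concrete, here is a toy model (illustrative types, not the real capnp-go API): vat B reads its inbound queue strictly in order and stops reading once the flow limit is exceeded, so it never reaches the Finish that would release the quota.

```go
package main

import "fmt"

// Message is an illustrative stand-in for an RPC message on the wire.
type Message struct {
	Kind string // "call" or "finish"
	Size uint64
}

// vatB processes its inbound queue strictly in order, refusing to read
// the next message while the flow limit is exceeded. It reports whether
// it ever reached a Finish message.
func vatB(queue []Message, limit uint64) bool {
	var inFlight uint64
	for _, m := range queue {
		switch m.Kind {
		case "call":
			if inFlight+m.Size > limit {
				// Flow limit hit: B stops reading. The Finish that
				// would free quota is still behind us in the queue.
				return false
			}
			inFlight += m.Size
		case "finish":
			// Quota released only on Finish (the current behavior
			// Ian describes).
			inFlight = 0
		}
	}
	return true
}

func main() {
	// Vat A streamed three calls, then the Finish for the first one.
	queue := []Message{
		{"call", 40}, {"call", 40}, {"call", 40}, {"finish", 0},
	}
	fmt.Println("reached finish:", vatB(queue, 100)) // false: deadlock
}
```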
