Daniel P. Berrangé <berra...@redhat.com> writes:

> On Wed, Jun 27, 2018 at 03:13:57PM +0200, Markus Armbruster wrote:
>> Monitor behavior changes even when the client rejects capability "oob".
>>
>> Traditionally, the monitor reads, executes and responds to one command
>> after the other.  If the client sends in-band commands faster than the
>> server can execute them, the kernel will eventually refuse to buffer
>> more, and sending blocks or fails with EAGAIN.
>>
>> To make OOB possible, we need to read and queue commands as we receive
>> them.  If the client sends in-band commands faster than the server can
>> execute them, the server will eventually drop commands to limit the
>> queue length.  The server then sends event COMMAND_DROPPED.
>>
>> However, we get the new behavior even when the client rejects capability
>> "oob".  We get the traditional behavior only when the server doesn't
>> offer "oob".
>>
>> Is this what we want?
>
> IMHO the key benefit of allowing the client to reject the capability
> is to enable backwards compat support.  So this behaviour feels wrong.
> Rejecting OOB should have the same semantics as previous QEMUs without
> OOB available, otherwise we now have 3 distinct ways the monitor
> operates (no OOB, OOB rejected, OOB accepted).  This can only ever
> lead to more bugs due to lack of testing of no-OOB vs OOB-rejected
> scenarios.
Agreed.

We have three configuration cases:

* OOB not offered (because MUX)
* OOB offered, but rejected by client
* OOB offered and accepted

We want to map them to two run-time cases:

* OOB off
* OOB on

We may use "server offered OOB" only for configuration purposes.  Keep
that in mind when reading my reply to Peter's reply.

Aside: it would be nice to get rid of the configuration case "OOB not
offered", but that's for later.
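For illustration, the mapping above can be sketched against the QMP
capability negotiation.  This is a rough sketch, not QEMU code: the
greeting and `qmp_capabilities` messages follow the QMP wire protocol,
but the `runtime_oob()` helper and the literal message values here are
assumptions made up for this example.

```python
def runtime_oob(server_offers: bool, client_enables: list) -> bool:
    """Map the three configuration cases onto the two run-time cases.

    Desired mapping (per this thread): OOB is on only when the server
    offered it AND the client accepted it; "OOB offered but rejected"
    must behave exactly like "OOB not offered".
    """
    return server_offers and "oob" in client_enables

# Server greeting advertises capabilities; "oob" may be absent,
# e.g. when the monitor is MUXed (hypothetical example values):
greeting = {"QMP": {"version": {}, "capabilities": ["oob"]}}

# The client accepts or rejects by what it lists in "enable":
accept = {"execute": "qmp_capabilities", "arguments": {"enable": ["oob"]}}
reject = {"execute": "qmp_capabilities", "arguments": {"enable": []}}

offered = "oob" in greeting["QMP"]["capabilities"]
print(runtime_oob(offered, accept["arguments"]["enable"]))  # OOB on
print(runtime_oob(offered, reject["arguments"]["enable"]))  # OOB off
print(runtime_oob(False, ["oob"]))  # not offered: always off
```

The point of the helper is that "OOB rejected" and "OOB not offered"
collapse to the same run-time behavior, leaving only two monitor modes
to test.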