On Tue, 18 Oct 2022 at 20:21, Daniel P. Berrangé <berra...@redhat.com> wrote:
>
> On Tue, Oct 18, 2022 at 06:55:08PM +0100, Peter Maydell wrote:
> > How is this intended to work? I guess the socket ought to go
> > into some kind of "disconnecting" state, but not actually do
> > a tcp_chr_disconnect() until all the data has been read via
> > tcp_chr_read() and it's finally got an EOF indication back from
> > tcp_chr_recv() ?
>
> Right, this is basically broken by (lack of) design right now.
>
> The main problem here is that we're watching the socket twice.
> One set of callbacks added with io_add_watch_poll, and then
> a second callback added with qio_channel_create_watch just for
> G_IO_HUP.
>
> We need there to be only 1 callback, and when that callback
> gets G_IO_IN, it should *ignore* G_IO_HUP until tcp_chr_recv
> returns 0 to indicate EOF. This would cause tcp_chr_read to
> be invoked repeatedly with G_IO_IN | G_IO_HUP, as we read
> "halt\r" one byte at a time.

Makes sense.

I've filed https://gitlab.com/qemu-project/qemu/-/issues/1264 to
track this socket chardev bug.

It did occur to me that there's a potential complication with
the 'server' mode of this chardev: does it need to cope with
a new connection coming into the server socket while the old
fd is still hanging around in this "waiting for the guest to
read it" state? Currently tcp_chr_disconnect_locked() is where
we restart listening for new connections, so QEMU wouldn't
accept any new connection until the guest had got round to
completely draining the data from the old one.

-- PMM
