On Tue, Oct 05, 2021 at 10:31:06AM -0400, Michael S. Tsirkin wrote:
> On Thu, Sep 30, 2021 at 10:48:09AM +0100, Stefan Hajnoczi wrote:
> > On Thu, Sep 30, 2021 at 05:29:06AM +0000, Raphael Norwitz wrote:
> > > On Tue, Sep 28, 2021 at 10:55:00AM +0200, Stefan Hajnoczi wrote:
> > > > On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> > > > > In the vhost-user-blk-test, as of now there is nothing stopping
> > > > > vhost-user-blk in QEMU from writing to the socket right after
> > > > > forking off the storage daemon, before it has a chance to come up
> > > > > properly, leaving the test hanging forever. This intermittently
> > > > > hanging test has caused QEMU automation failures reported multiple
> > > > > times on the mailing list [1].
> > > > > 
> > > > > This change makes the storage-daemon notify the vhost-user-blk-test
> > > > > that it is fully initialized and ready to handle client connections
> > > > > by creating a pidfile on initialization. This ensures that the
> > > > > storage-daemon backend won't miss vhost-user messages and thereby
> > > > > resolves the hang.
> > > > > 
> > > > > [1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/
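
For concreteness, a minimal sketch of the kind of wait this buys us (the
helper name and the 10ms polling interval are illustrative, not taken
from the actual patch):

    /* Hypothetical sketch: block until the storage daemon has written
     * its pidfile, i.e. until it is initialized and listening. */
    #include <stdbool.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static bool wait_for_pidfile(const char *path, int timeout_ms)
    {
        struct stat st;
        int waited_ms = 0;

        while (stat(path, &st) != 0) {
            if (waited_ms >= timeout_ms) {
                return false;      /* daemon never came up */
            }
            usleep(10 * 1000);     /* poll every 10ms */
            waited_ms += 10;
        }
        return true;               /* pidfile exists: daemon is ready */
    }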
> > > > 
> > > 
> > > Hey Stefan,
> > > 
> > > > Hi Raphael,
> > > > I would like to understand the issue that is being worked around in the
> > > > patch.
> > > > 
> > > > QEMU should be okay with listen fd passing. The qemu-storage-daemon
> > > > documentation even contains example code for this
> > > > (docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
> > > > listen fd passing is fundamentally broken.
> > > > 
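
For anyone following along, the listen-fd-passing pattern under
discussion looks roughly like the sketch below. Error handling is
trimmed, and the --export option string follows the form used by
vhost-user-blk-test.c, so treat the exact syntax as illustrative and
check the docs for the authoritative version:

    /* Sketch: the test creates and listens on the socket itself, then
     * hands the fd to qemu-storage-daemon across exec. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int make_listen_fd(const char *path)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        /* no SOCK_CLOEXEC: the fd must survive the exec below */
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(fd, 1);   /* connect(2) can succeed from this point on */
        return fd;
    }

    /* In the child after fork(): point the export at the inherited fd. */
    void exec_storage_daemon(int fd)
    {
        char export_opt[128];

        snprintf(export_opt, sizeof(export_opt),
                 "type=vhost-user-blk,id=disk,addr.type=fd,addr.str=%d,"
                 "node-name=disk,writable=on", fd);
        execlp("qemu-storage-daemon", "qemu-storage-daemon",
               "--blockdev", "driver=file,node-name=disk,filename=disk.img",
               "--export", export_opt, (char *)NULL);
    }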
> > > 
> > > The issue is that the "client" (in this case vhost-user-blk in QEMU) can
> > > proceed to use the socket before the storage-daemon has a chance to
> > > properly start up and monitor it. This is nothing unique to the
> > > storage-daemon - I've seen races like this happen with different
> > > vhost-user backends before.
> > > 
> > > Yes - I do think the docs can be improved to explicitly state that the
> > > storage-daemon must be allowed to properly initialize before any data
> > > is sent over the socket. Maybe we should even prescribe the use of the
> > > pidfile option?
> > > 
> > > > Can you share more details about the problem?
> > > > 
> > > 
> > > Did you see my analysis [1]?
> > > 
> > > [1] https://lore.kernel.org/qemu-devel/20210827165253.GA14291@raphael-debian-dev/
> > > 
> > > Basically QEMU sends VHOST_USER_GET_PROTOCOL_FEATURES across the vhost
> > > socket and the storage daemon never receives it. Looking at the QEMU
> > > state we see it is stuck waiting for a vhost-user response. Meanwhile
> > > the storage-daemon never receives any message to begin with. AFAICT
> > > there is nothing stopping QEMU from running first and sending a message
> > > before the storage-daemon's vhost-user-blk export comes up, and from
> > > testing we can see that waiting for the storage-daemon to come up
> > > resolves the problem completely.
> > 
> > The root cause has not been determined yet. QEMU should accept the
> > incoming connection and then read the previously-sent
> > VHOST_USER_GET_PROTOCOL_FEATURES message. There is no reason at the
> > Sockets API level why the message should get lost, so there is probably
> > a QEMU bug here.
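
Stefan's point is easy to demonstrate in isolation: the kernel queues
both the connection (in the listen backlog) and any written data (in
the socket buffer) before the server ever calls accept(2). A standalone
demo sketch, not QEMU code:

    /* Demo: a client can connect() and write() before the server has
     * called accept(); nothing is lost at the Sockets API level. */
    #include <assert.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX,
                                    .sun_path = "/tmp/demo.sock" };
        char buf[32];
        int lfd = socket(AF_UNIX, SOCK_STREAM, 0);

        unlink(addr.sun_path);
        bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
        listen(lfd, 1);

        /* "QEMU" side: connect and send before anyone accepts. */
        int cfd = socket(AF_UNIX, SOCK_STREAM, 0);
        connect(cfd, (struct sockaddr *)&addr, sizeof(addr));
        write(cfd, "GET_PROTOCOL_FEATURES", 21);

        /* "daemon" side: accept afterwards; the message is still there. */
        int sfd = accept(lfd, NULL, NULL);
        assert(read(sfd, buf, sizeof(buf)) == 21);
        return 0;
    }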
> 
> Right. However the test does randomly hang for people and it's
> not really of interest to anyone. I think we should apply the
> work-around but yes we should keep working on the root cause, too.
>

From my end I have spent some more time looking at it but have not made
much progress. I was hopeful that David Hildenbrand’s libvhost-user bug
fixes might have resolved it, but I tested and even with his patches I
still see the hang.

I am determined to get to the bottom of it, but I’m not sure how long it
will take. If this is impacting people then I agree with merging the
patch as a workaround.

From my end, I will send a v6 updating the commit message and adding
comments to make it clear that the patch is a workaround and that the
root cause has not been determined yet. Sound good?

>
> > > > Does "writing to the socket" mean writing vhost-user protocol messages
> > > > or does it mean connect(2)?
> > > > 
> > > 
> > > Yes - it means writing vhost-user messages. We see a message sent from
> > > QEMU to the backend.
> > > 
> > > Note that in qtest_socket_server() (called from create_listen_socket())
> > > we have already called listen() on the socket, so I would expect QEMU's
> > > connect(2) to succeed, and its messages to be sent successfully, whether
> > > or not the daemon is there to accept the connection. I even tried
> > > commenting out the execlp for the storage-daemon, and I saw the same
> > > behavior from QEMU - it sends the message and hangs indefinitely.
> > 
> > QEMU is correct in waiting for a vhost-user reply. The question is why
> > qemu-storage-daemon's vhost-user-block export isn't processing the
> > request and replying to it?
> > 
> > > > Could the problem be that vhost-user-blk-test.c creates the listen fds
> > > > and does not close them? This means the host network stack doesn't
> > > > consider the socket closed after QEMU terminates and therefore the test
> > > > process hangs after QEMU is gone? In that case vhost-user-blk-test needs
> > > > to close the fds after spawning qemu-storage-daemon.
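
(For reference, the cleanup described here would look roughly like the
sketch below; as the reply further down notes, this theory turned out
not to apply.)

    /* Sketch: after spawning the daemon, the test closes its own copy
     * of the listen fd so the daemon holds the only reference.
     * exec_storage_daemon() is the hypothetical helper sketched above. */
    #include <unistd.h>

    void spawn_daemon_and_drop_fd(int listen_fd)
    {
        pid_t pid = fork();

        if (pid == 0) {
            exec_storage_daemon(listen_fd);  /* child inherits the fd */
            _exit(1);                        /* reached only if exec fails */
        }
        close(listen_fd);                    /* parent drops its reference */
    }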
> > > > 
> > > 
> > > When the test hangs, both QEMU and the storage-daemon are still up,
> > > connected to the socket, and waiting for messages from each other. I
> > > don't see how we would close the FD in this state or how it would help.
> > 
> > Yes, I see. In that case the theory about fds doesn't apply.
> > 
> > > We may want to think about implementing some kind of timeout for
> > > initial vhost-user messages so that we fail instead of hanging in
> > > cases like these, as I proposed in [1]. What do you think?
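
The kind of timeout proposed here could be as simple as bounding the
first read; a minimal sketch (the helper and the 5000ms value are
illustrative only):

    /* Sketch: fail the initial vhost-user read if no reply arrives
     * within a bound, instead of blocking forever. */
    #include <poll.h>
    #include <stdbool.h>
    #include <unistd.h>

    static bool read_with_timeout(int fd, void *buf, size_t len,
                                  int timeout_ms)
    {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };

        if (poll(&pfd, 1, timeout_ms) != 1) {
            return false;                /* timed out: fail, don't hang */
        }
        return read(fd, buf, len) == (ssize_t)len;
    }

    /* e.g. read_with_timeout(vhost_fd, &msg, sizeof(msg), 5000) */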
> > 
> > Let's hold off on workarounds until the root cause has been found.
> > 
> > Do you have time to debug why vu_accept() and vu_message_read() don't
> > see the pending VHOST_USER_GET_PROTOCOL_FEATURES message?
> > 
> > Thanks,
> > Stefan
> 
> 
