On Wed, Jul 17, 2024 at 11:07:17PM +0200, Maciej S. Szmigiero wrote:
> On 17.07.2024 21:00, Peter Xu wrote:
> > On Tue, Jul 16, 2024 at 10:10:25PM +0200, Maciej S. Szmigiero wrote:
> > > > > > > The comment I removed is slightly misleading to me too, because 
> > > > > > > right now
> > > > > > > active_slot contains the data hasn't yet been delivered to 
> > > > > > > multifd, so
> > > > > > > we're "putting it back to free list" not because of it's free, 
> > > > > > > but because
> > > > > > > we know it won't get used until the multifd send thread consumes 
> > > > > > > it
> > > > > > > (because before that the thread will be busy, and we won't use 
> > > > > > > the buffer
> > > > > > > if so in upcoming send()s).
> > > > > > > 
> > > > > > > And then when I'm looking at this again, I think maybe it's a 
> > > > > > > slight
> > > > > > > overkill, and maybe we can still keep the "opaque data" managed 
> > > > > > > by multifd.
> > > > > > > One reason might be that I don't expect the "opaque data" payload 
> > > > > > > keep
> > > > > > > growing at all: it should really be either RAM or device state as 
> > > > > > > I
> > > > > > > commented elsewhere in a relevant thread, after all it's a thread 
> > > > > > > model
> > > > > > > only for migration purpose to move vmstates..
> > > > > > 
> > > > > > Some amount of flexibility needs to be baked in. For instance, what
> > > > > > about the handshake procedure? Don't we want to use multifd threads 
> > > > > > to
> > > > > > put some information on the wire for that as well?
> > > > > 
> > > > > Is this an orthogonal question?
> > > > 
> > > > I don't think so. You say the payload data should be either RAM or
> > > > device state. I'm asking what other types of data do we want the multifd
> > > > channel to transmit and suggesting we need to allow room for the
> > > > addition of that, whatever it is. One thing that comes to mind that is
> > > > neither RAM or device state is some form of handshake or capabilities
> > > > negotiation.
> > > 
> > > The RFC version of my multifd device state transfer patch set introduced
> > > a new migration channel header (by Avihai) for clean and extensible
> > > migration channel handshaking, but people didn't like it, so it was
> > > removed in v1.
> > 
> > Hmm, I'm not sure this is relevant to the context of discussion here, but I
> > confess I didn't notice the per-channel header thing in the previous RFC
> > series.  Link is here:
> > 
> > https://lore.kernel.org/r/636cec92eb801f13ba893de79d4872f5d8342097.1713269378.git.maciej.szmigi...@oracle.com
> 
> The channel header patches were dropped because Daniel didn't like them:
> https://lore.kernel.org/qemu-devel/zh-kf72fe9ov6...@redhat.com/
> https://lore.kernel.org/qemu-devel/zh_6w8u3h4fmg...@redhat.com/

Ah I missed that too when I quickly went over the old series, sorry.

I think what Dan meant was that we'd better do that with the handshake
work, which should cover more than this.  I've no problem with that.

It's just that sooner or later, we should provide something more solid than
commit 6720c2b327 ("migration: check magic value for deciding the mapping
of channels").

> 
> > Maciej, if you want, you can split that out of the series. So far it looks
> > like a good thing regardless of how VFIO tackles it.
> 
> Unfortunately, these channel header patches from Avihai obviously impact the
> wire protocol and are a bit intermingled with the rest of the device state
> transfer patch set, so it would be good to know upfront whether there is
> some consensus to (re)introduce this new channel header (CCed Daniel, too).

When I mentioned posting it separately, I meant it still wouldn't be relevant
to the VFIO series. IOW, I think the patch below is definitely not needed (and
I think we're on the same page now about reusing multifd threads as generic
channels, so there's no issue now):

https://lore.kernel.org/qemu-devel/027695db92ace07d2d6ee66da05f8e85959fd46a.1713269378.git.maciej.szmigi...@oracle.com/

So I assume we should leave that for later for whoever refactors the
handshake process.
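
For whoever does pick that up: purely as a hypothetical illustration (not
Avihai's actual header format, which is in the linked RFC and may differ),
an explicit per-channel header could look something like this, making the
channel mapping self-describing and versioned instead of inferred from
whatever payload magic happens to arrive first.

  /* Hypothetical per-channel header, for illustration only. */
  #include <stdint.h>

  struct __attribute__((packed)) MigChannelHeader {
      uint32_t header_magic;    /* identifies a migration channel header */
      uint32_t header_version;  /* lets the format grow compatibly */
      uint32_t channel_type;    /* main / multifd / postcopy / device state */
      uint32_t channel_id;      /* index within the type, e.g. multifd # */
  };

  /*
   * The receiver would read this fixed-size header first and dispatch on
   * channel_type, rather than peeking at payload bytes to guess the
   * channel's role.
   */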

Thanks,

-- 
Peter Xu

