On Thu, Jul 11, 2024 at 11:12:09AM -0300, Fabiano Rosas wrote:
> What about the QEMUFile traffic? There's an iov in there. I have been
> thinking of replacing some of qemu-file.c guts with calls to
> multifd. Instead of several qemu_put_byte() we could construct an iov
> and give it to multifd for transferring, call multifd_sync at the end and
> get rid of the QEMUFile entirely. I don't have that completely laid out
> at the moment, but I think it should be possible. I get concerned about
> making assumptions on the types of data we're ever going to want to
> transmit. I bet someone thought in the past that multifd would never be
> used for anything other than ram.

Hold on a bit.. there are two things I want to clarify with you.

Firstly, qemu_put_byte() has buffering on f->buf[].  Directly changing those
calls to iochannels may regress performance.  I never checked, but I would
assume some buffering will be needed for small chunks of data even with
iochannels.

Secondly, why does multifd have anything to do with this?  What you're
talking about sounds more like the qemufile->iochannel rework to me, and
IIUC that doesn't yet involve multifd.  Many of such conversions will still
operate on the main channel, which is not one of the multifd channels.
What matters is what you have in mind to put over the multifd channels.

> 
> >
> > I wonder why handshake needs to be done per-thread.  I was naturally
> > thinking the handshake should happen sequentially, talking over everything
> > including multifd.
> 
> Well, it would still be thread based. Just that it would be 1 thread and
> it would not be managed by multifd. I don't see the point. We could make
> everything be multifd-based. Any piece of data that needs to reach the
> other side of the migration could be sent through multifd, no?

Hmm.... yes we can.  But what do we gain from it, if we know it'll be a few
MBs in total?  There isn't a lot of huge stuff to move, it seems to me.

> 
> Also, when you say "per-thread", that's the model we're trying to get
> away from. There should be nothing "per-thread", the threads just
> consume the data produced by the clients. Anything "per-thread" that is
> not strictly related to the thread model should go away. For instance,
> p->page_size, p->page_count, p->write_flags, p->flags, etc. None of
> these should be in MultiFDSendParams. That thing should be (say)
> MultifdChannelState and contain only the semaphores and control flags
> for the threads.
> 
> It would be nice if we could once and for all have a model that can
> dispatch data transfers without having to fiddle with threading all the
> time. Any time someone wants to do something different in the migration
> code, a random qemu_thread_create() goes flying around.

That's exactly what I want to avoid.  Not all things will need a thread,
only the performance-relevant ones.

So now we have the multifd threads; they're for IO throughput: if we want
to push a fast NIC, that's the only way to go.  Anything that wants to push
that NIC should use multifd.

Then it turns out we want more concurrency; this time it's about VFIO
save()/load() in the kernel drivers, which can block.  The same applies to
other devices that may take time to save()/load(), if that can happen
concurrently in the future.  I think that's the reason why I suggested the
VFIO solution provide a generic concept of a thread pool, so it serves a
generic purpose and can be reused in the future.

I hope that'll stop anyone else on migration from creating yet another
thread randomly, and I definitely don't like that either.  I would
_suspect_ the next one to come like that is TDX.. I remember that at least
in the very initial proposal years ago, TDX migration involved its own
"channel" to migrate; migration.c may not even know where that channel is.
We'll see.

[...]

> > One thing to mention is that when with an union we may probably need to get
> > rid of multifd_send_state->pages already.
> 
> Hehe, please don't do this like "oh, by the way...". This is a major
> pain point. I've been complaining about that "holding of client data"
> since the first time I read that code. So if you're going to propose
> something, it needs to account for that.

The client puts something into a buffer (SendData), then delivers it to
multifd (which silently switches the buffer).  After enqueueing, the client
assumes the buffer was sent and is reusable again.

It looks pretty common to me; what is the concern within that procedure?
What's the "holding of client data" issue?

> 
> > The object can't be a global
> > cache (in which case so far it's N+1, N being n_multifd_channels, while "1"
> > is the extra buffer as only RAM uses it).  In the union world we'll need to
> > allocate M+N SendData, where N is still the n_multifd_channels, and M is
> > the number of users, in VFIO's case, VFIO allocates the cached SendData and
> > use that to enqueue, right after enqueue it'll get a free one by switching
> > it with another one in the multifd's array[N].  Same to RAM.  Then there'll
> > be N+2 SendData and VFIO/RAM needs to free their own SendData when cleanup
> > (multifd owns the N per-thread only).
> >
> 
> At first sight, that seems to work. It's similar to this series, but
> you're moving the free slots back into the channels. Should I keep
> SendData as an actual separate array instead of multiple p->data?

I don't know.. they look similar to me so far, as long as multifd manages
the N buffers while each client manages one of its own.  There should be a
helper to allocate/free the generic multifd buffers (SendData in this case)
so that everyone uses it.

> 
> Let me know, I'll write some code and see what it looks like.

I think Maciej has been working on this too during your absence, as I saw
he decided to base his work on top of yours and he's preparing the new
version.  I hope you two won't conflict or duplicate the work.  Might be
good to ask / wait and see how far Maciej has gotten.

-- 
Peter Xu

