On Tue, Aug 31, 2021 at 02:16:42PM +0100, Daniel P. Berrangé wrote:
> On Tue, Aug 31, 2021 at 08:02:39AM -0300, Leonardo Bras wrote:
> > Call qio_channel_set_zerocopy(true) in the start of every multifd thread.
> >
> > Change the send_write() interface of multifd, allowing it to pass down
> > flags for qio_channel_write*().
> >
> > Pass down MSG_ZEROCOPY flag for sending memory pages, while keeping the
> > other data being sent at the default copying approach.
> >
> > Signed-off-by: Leonardo Bras <leob...@redhat.com>
> > ---
> >  migration/multifd-zlib.c | 7 ++++---
> >  migration/multifd-zstd.c | 7 ++++---
> >  migration/multifd.c      | 9 ++++++---
> >  migration/multifd.h      | 3 ++-
> >  4 files changed, 16 insertions(+), 10 deletions(-)
> >
> > @@ -675,7 +676,8 @@ static void *multifd_send_thread(void *opaque)
> >          }
> >
> >          if (used) {
> > -            ret = multifd_send_state->ops->send_write(p, used, &local_err);
> > +            ret = multifd_send_state->ops->send_write(p, used, MSG_ZEROCOPY,
> > +                                                      &local_err);
>
> I don't think it is valid to unconditionally enable this feature due to
> the resource usage implications:
>
> https://www.kernel.org/doc/html/v5.4/networking/msg_zerocopy.html
>
>   "A zerocopy failure will return -1 with errno ENOBUFS. This happens
>    if the socket option was not set, the socket exceeds its optmem
>    limit or the user exceeds its ulimit on locked pages."
>
> The limit on locked pages is something that looks very likely to be
> exceeded unless you happen to be running a QEMU config that already
> implies locked memory (eg PCI assignment).
Yes, it would be great to have this as a migration capability, in parallel
to multifd.  At the initial phase, if it's easier to implement on multifd
only, we can add a dependency between the caps.  In the future we can remove
that dependency when the code is ready to go without multifd.

Thanks,

-- 
Peter Xu