On Mon, Sep 23, 2024 at 01:04:17AM +0000, Gonglei (Arei) wrote:
> Hi,
> 
> > -----Original Message-----
> > From: Michael Galaxy [mailto:mgal...@akamai.com]
> > Sent: Monday, September 23, 2024 3:29 AM
> > To: Michael S. Tsirkin <m...@redhat.com>; Peter Xu <pet...@redhat.com>
> > Cc: Gonglei (Arei) <arei.gong...@huawei.com>; qemu-devel@nongnu.org;
> > yu.zh...@ionos.com; elmar.ger...@ionos.com; zhengchuan
> > <zhengch...@huawei.com>; berra...@redhat.com; arm...@redhat.com;
> > lizhij...@fujitsu.com; pbonz...@redhat.com; Xiexiangyou
> > <xiexiang...@huawei.com>; linux-r...@vger.kernel.org; lixiao (H)
> > <lixia...@huawei.com>; jinpu.w...@ionos.com; Wangjialin
> > <wangjiali...@huawei.com>
> > Subject: Re: [PATCH 0/6] refactor RDMA live migration based on rsocket API
> > 
> > Hi All,
> > 
> > I have met with the team from IONOS about their testing on actual IB
> > hardware here at KVM Forum today, and the requirements are starting
> > to make more sense to me. I didn't say much in our previous thread
> > because I misunderstood the requirements, so let me try to explain
> > and see if we're all on the same page. There appears to be a
> > fundamental limitation here with rsocket, one that I don't see how
> > to overcome.
> > 
> > The basic problem is that rsocket is trying to present a stream
> > abstraction, a concept that is fundamentally incompatible with RDMA.
> > The whole point of using RDMA in the first place is to avoid using
> > the CPU, and to do that, all of the memory (potentially hundreds of
> > gigabytes) needs to be registered with the hardware *in advance*
> > (this is how the original implementation works).
> > 
> > The need to fake a socket/bytestream abstraction eventually breaks
> > down: there is a limit (a few GB) in rsocket (which the IONOS team
> > previously reported in testing.... see that email). That appears to
> > mean that rsocket can only map a limited amount of memory with the
> > hardware before its internal "buffer" runs out, at which point it
> > has to unmap and remap the next batch of memory with the hardware
> > to keep the fake bytestream going. This is very much sticking a
> > square peg in a round hole. If you were to "relax" the rsocket
> > implementation to register the entire VM memory space (as my
> > original implementation does), then there wouldn't be any need for
> > rsocket in the first place.

Yes, some test like this can be helpful.

And thanks for the summary.  That's definitely helpful.
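
To make sure I follow that contrast, here is a rough sketch of the two
approaches as I understand them (illustrative names only, not actual
QEMU or rsocket code): register the whole RAM block once up front with
ibv_reg_mr(), versus pinning a bounded window and remapping it as the
emulated bytestream advances.

/* Illustrative sketch only -- not QEMU or rsocket code. */
#include <infiniband/verbs.h>
#include <stddef.h>

/* (a) Up-front registration, as the existing QEMU RDMA code does:
 *     the whole RAM block is pinned once, so the HCA can move any
 *     page later without further CPU involvement. */
static struct ibv_mr *register_all_ram(struct ibv_pd *pd,
                                       void *ram, size_t ram_size)
{
    return ibv_reg_mr(pd, ram, ram_size,
                      IBV_ACCESS_LOCAL_WRITE |
                      IBV_ACCESS_REMOTE_READ |
                      IBV_ACCESS_REMOTE_WRITE);
}

/* (b) What a bytestream emulation is forced into: pin a bounded
 *     window (the "few GB" internal buffer), push it, deregister it,
 *     then register the next chunk, paying the map/unmap cost on
 *     every iteration. */
static int stream_through_window(struct ibv_pd *pd, char *ram,
                                 size_t ram_size, size_t window)
{
    for (size_t off = 0; off < ram_size; off += window) {
        size_t len = ram_size - off < window ? ram_size - off : window;
        struct ibv_mr *mr = ibv_reg_mr(pd, ram + off, len,
                                       IBV_ACCESS_LOCAL_WRITE);

        if (!mr) {
            return -1;
        }
        /* ... post work requests for this window, wait for completion ... */
        ibv_dereg_mr(mr);
    }
    return 0;
}

If (a) is the whole point of the exercise, I can see why a bytestream
layer that is stuck doing (b) feels like the wrong abstraction.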

One question from my side (as someone who knows nothing about RDMA/rsocket):
is that "a few GBs" limitation a software guard?  Would it be possible for
rsocket to provide an option that lets the user opt in to raising that value,
so that it might work for the VM use case?  Would that consume similar
resources to the current QEMU impl, while allowing QEMU to use rsockets with
no perf regressions?
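
For example (purely a sketch from my side, untested; the 1GB value is
made up, and whether enlarging the buffers actually removes the limit
the IONOS team hit is exactly the open question), if the limit is tied
to the rsocket send/receive buffer sizes, maybe the knob already exists:

/* Untested sketch: ask rsocket for much larger internal buffers via
 * rsetsockopt(), which librdmacm documents as accepting SO_SNDBUF /
 * SO_RCVBUF.  Whether this scales to VM-sized transfers is unknown. */
#include <rdma/rsocket.h>
#include <sys/socket.h>

static int open_big_buffer_rsocket(void)
{
    int fd = rsocket(AF_INET, SOCK_STREAM, 0);
    int sz = 1 << 30;   /* 1 GB -- illustrative value only */

    if (fd < 0) {
        return -1;
    }
    rsetsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sz, sizeof(sz));
    rsetsockopt(fd, SOL_SOCKET, SO_RCVBUF, &sz, sizeof(sz));
    return fd;
}

I also vaguely remember rsocket reading system-wide defaults from files
under /etc/rdma/rsocket/ (mem_default / wmem_default), which might be
another place to raise it, though I haven't checked.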

> 
> Thank you for your opinion. You're right: rsocket has run into difficulties
> transferring large amounts of data, and we still haven't figured them out,
> although we did solve several other problems with rsocket along the way.
> 
> In our use case, VM live migration needs to complete quickly and the
> migration downtime must be within 50 ms. That is why we use RDMA; it is
> an essential requirement for us. Next, I think we'll build on QEMU's
> native RDMA live migration solution. During this work we came to
> seriously doubt whether RDMA live migration is feasible through the
> rsocket refactoring, so that refactoring plan has been shelved.

To me, a guaranteed 50ms is hard.  I'm personally not sure how much RDMA
helps if the only thing that changes is the transport.

I meant, at least I feel like someone would need to work out some general
limitations, like:

https://wiki.qemu.org/ToDo/LiveMigration#Optimize_memory_updates_for_non-iterative_vmstates
https://lore.kernel.org/all/20230317081904.24389-1-xuchuangxc...@bytedance.com/

I also remember we always have outliers where save()/load() of device
state can simply be slower (100ms or more on a single device; I think it
normally has the kernel/KVM involved).  That one device can already break
the rule, even if it happens rarely.

We also haven't looked into multiple other issues during downtime:

  - vm start/stop will invoke notifiers, and notifiers can (in some cases)
    take quite some time to finish (see the sketch after this list)

  - some features may enlarge downtime in an unpredictable way, but so far
    we don't yet have full control of it, e.g. pause-before-switchover for
    block layers
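
On the first item, a minimal sketch of how such a notifier is hooked up
inside QEMU (from memory, so the include path and exact signature may
have drifted): the handler runs synchronously during vm_stop()/vm_start(),
so anything slow in there lands directly in the downtime window.

/* Sketch in QEMU-tree style; not copied from any real device. */
#include "qemu/osdep.h"
#include "sysemu/runstate.h"

static void my_vm_state_change(void *opaque, bool running, RunState state)
{
    if (!running) {
        /* quiesce the device -- if this takes 100ms, downtime grows by 100ms */
    } else {
        /* resume the device */
    }
}

static void my_device_realize_hook(void *opaque)
{
    /* called synchronously on every run state change from now on */
    qemu_add_vm_change_state_handler(my_vm_state_change, opaque);
}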

There can be other stuff floating around; the above are just some examples.
None of the cases I mentioned is related to the transport itself.

Thanks,

-- 
Peter Xu

