On Mon, Mar 2, 2020, 5:38 AM Zhoujian (jay) <jianjay.z...@huawei.com> wrote:

>
>
> > -----Original Message-----
> > From: Peter Feiner [mailto:pfei...@google.com]
> > Sent: Saturday, February 22, 2020 8:19 AM
> > To: Junaid Shahid <juna...@google.com>
> > Cc: Ben Gardon <bgar...@google.com>; Zhoujian (jay)
> > <jianjay.z...@huawei.com>; Peter Xu <pet...@redhat.com>;
> > k...@vger.kernel.org; qemu-devel@nongnu.org; pbonz...@redhat.com;
> > dgilb...@redhat.com; quint...@redhat.com; Liujinsong (Paul)
> > > <liu.jins...@huawei.com>; linfeng (M) <linfen...@huawei.com>; wangxin (U)
> > <wangxinxin.w...@huawei.com>; Huangweidong (C)
> > <weidong.hu...@huawei.com>
> > Subject: Re: RFC: Split EPT huge pages in advance of dirty logging
> >
> > > On Fri, Feb 21, 2020 at 2:08 PM Junaid Shahid <juna...@google.com> wrote:
> > >
> > > On 2/20/20 9:34 AM, Ben Gardon wrote:
> > > >
> > > > FWIW, we currently do this eager splitting at Google for live
> > > > migration. When the log-dirty-memory flag is set on a memslot we
> > > > eagerly split all pages in the slot down to 4k granularity.
> > > > As Jay said, this does not cause crippling lock contention because
> > > > the vCPU page faults generated by write protection / splitting can
> > > > be resolved in the fast page fault path without acquiring the MMU lock.
> > > > I believe +Junaid Shahid tried to upstream this approach at some
> > > > point in the past, but the patch set didn't make it in. (This was
> > > > before my time, so I'm hoping he has a link.) I haven't done the
> > > > analysis to know if eager splitting is more or less efficient with
> > > > parallel slow-path page faults, but it's definitely faster under the
> > > > MMU lock.
> > > >
> > >
> > > I am not sure if we ever posted those patches upstream. Peter Feiner
> > > would know for sure. One notable difference in what we do compared to
> > > the approach outlined by Jay is that we don't rely on tdp_page_fault()
> > > to do the splitting. So we don't have to create a dummy VCPU, and the
> > > specialized split function is also much faster.
> >
> > We've been carrying these patches since 2015. I've never posted them.
> > Getting them in shape for upstream consumption will take some work.
> > I can look into this next week.
>
> Hi Peter Feiner,
>
> May I ask whether there are any updates on your plan? Sorry to disturb you.
>


Hi Jay,

I've been sick since I sent my last email, so I haven't gotten to this
patch set yet. I'll send it in the next week or two.

Peter
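
For readers following along: the eager-splitting approach Ben and Junaid
describe above can be sketched as a toy model. This is a hypothetical
Python simulation of the bookkeeping only (all names here are invented;
the real work happens in KVM's C code on EPT page tables): when dirty
logging is enabled on a memslot, every huge mapping is split into 4 KiB
entries up front and write-protected, so subsequent vCPU faults are plain
write-protection faults that the fast page fault path can resolve without
taking the MMU lock.

```python
PAGE_SHIFT = 12   # 4 KiB base page
HUGE_SHIFT = 21   # 2 MiB EPT huge page

class Memslot:
    """Toy model of a memslot's stage-2 mappings (illustration, not KVM code)."""

    def __init__(self, size_bytes):
        # gfn -> (page_size_shift, writable); start fully mapped with huge pages
        self.mappings = {}
        step = 1 << (HUGE_SHIFT - PAGE_SHIFT)
        for gfn in range(0, size_bytes >> PAGE_SHIFT, step):
            self.mappings[gfn] = (HUGE_SHIFT, True)

    def enable_dirty_logging(self):
        # Eager split: replace each huge mapping with 4 KiB entries and
        # write-protect them all, so later writes fault and can be handled
        # by the fast (lockless) write-protection fault path.
        new = {}
        for gfn, (shift, _writable) in self.mappings.items():
            if shift > PAGE_SHIFT:
                for sub in range(1 << (shift - PAGE_SHIFT)):
                    new[gfn + sub] = (PAGE_SHIFT, False)
            else:
                new[gfn] = (PAGE_SHIFT, False)
        self.mappings = new
```

The design point under discussion in the thread is where this split runs:
doing it in a dedicated pass when the dirty-log flag is set (as modeled
here) avoids faking faults through tdp_page_fault() with a dummy vCPU.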


> Regards,
> Jay Zhou
>
