> > > Please don't use this email address for me anymore. Either use
> > > alexander.du...@gmail.com or alexanderdu...@fb.com. I am getting
> > > bounces when I reply to this thread because of the old address.
> >
> > No problem.
> >
> > > > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > > > index
On Fri, Jan 8, 2021 at 6:04 AM Mike Kravetz wrote:
>
> On 1/5/21 7:49 PM, Liang Li wrote:
> > hugetlb manages its pages in the hstate's free page list, not in the
> > buddy system; this patch tries to make it work for hugetlbfs. It can be
> > used for memory overcommit in virtualization
> >> On Tue 05-01-21 22:49:21, Liang Li wrote:
> >>> hugetlb manages its pages in the hstate's free page list, not in the
> >>> buddy system; this patch tries to make it work for hugetlbfs. It can be
> >>> used for memory overcommit in virtualization and hugetlb
> > Page reporting isolates free pages temporarily while reporting
> > free page information. It reduces the number of actually free pages
> > and may cause applications to fail due to insufficient available memory.
> > This patch tries to solve this issue: when there is no free page
> > and page reporting is on
On Thu, Jan 7, 2021 at 12:08 AM Michal Hocko wrote:
>
> On Tue 05-01-21 22:49:21, Liang Li wrote:
> > hugetlb manages its pages in the hstate's free page list, not in the
> > buddy system; this patch tries to make it work for hugetlbfs. It can be
> > used for memory overcommit in virtualization
> > enum {
> > PAGE_REPORTING_IDLE = 0,
> > @@ -44,7 +45,7 @@ __page_reporting_request(struct page_reporting_dev_info
> > *prdev)
> > * now we are limiting this to running no more than once every
> > * couple of seconds.
> > */
> > -
> So you are going to need a lot more explanation for this. Page
> reporting already had the concept of batching as you could only scan
> once every 2 seconds as I recall. Thus the "PAGE_REPORTING_DELAY". The
> change you are making doesn't make any sense without additional
> context.
The reason
On Wed, Jan 6, 2021 at 5:41 PM David Hildenbrand wrote:
>
> On 06.01.21 04:46, Liang Li wrote:
> > A typical usage of hugetlbfs is to reserve an amount of memory
> > during the kernel booting stage, and the reserved pages are
> > unlikely to return to the buddy system
Williamson
Cc: Michael S. Tsirkin
Cc: Jason Wang
Cc: Liang Li
Signed-off-by: Liang Li
---
include/linux/page-flags.h | 12 ++
mm/Kconfig | 10 ++
mm/huge_memory.c | 3 +-
mm/hugetlb.c | 243 +
mm/memory.c
add support for reporting free hugepages to the
host when the guest uses hugetlbfs.
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Andrea Arcangeli
Cc: Dan Williams
Cc: Dave Hansen
Cc: David Hildenbrand
Cc: Michal Hocko
Cc: Andrew Morton
Cc: Alex Williamson
Cc: Michael S. Tsirkin
Cc: Liang Li
Signed-off
: David Hildenbrand
Cc: Michal Hocko
Cc: Andrew Morton
Cc: Alex Williamson
Cc: Michael S. Tsirkin
Cc: Liang Li
Signed-off-by: Liang Li
---
include/linux/hugetlb.h| 3 +
include/linux/page_reporting.h | 4 +
mm/Kconfig | 1 +
mm/hugetlb.c
it is done.
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Andrea Arcangeli
Cc: Dan Williams
Cc: Dave Hansen
Cc: David Hildenbrand
Cc: Michal Hocko
Cc: Andrew Morton
Cc: Alex Williamson
Cc: Michael S. Tsirkin
Cc: Liang Li
Signed-off-by: Liang Li
---
include/linux/hugetlb.h | 2 ++
mm/hugetlb.c
Hildenbrand
Cc: Michal Hocko
Cc: Andrew Morton
Cc: Alex Williamson
Cc: Michael S. Tsirkin
Cc: Liang Li
Signed-off-by: Liang Li
---
drivers/virtio/virtio_balloon.c | 3 +++
include/linux/page_reporting.h | 3 +++
mm/page_reporting.c | 13 +
mm/page_reporting.h
threshold to control the
waking up of reporting worker.
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Andrea Arcangeli
Cc: Dan Williams
Cc: Dave Hansen
Cc: David Hildenbrand
Cc: Michal Hocko
Cc: Andrew Morton
Cc: Alex Williamson
Cc: Michael S. Tsirkin
Cc: Liang Li
Signed-off-by: Liang Li
of the 'buddy
free page pre zero out' feature brings, I removed it from this
series.
Liang Li (6):
mm: Add batch size for free page reporting
mm: let user decide page reporting option
hugetlb: add free page reporting support
hugetlb: avoid allocation failure when page reporting is ongoing
> >> That's mostly already existing scheduling logic, no? (How many VMs can I
> >> put onto a specific machine eventually?)
> >
> > It depends on how the scheduling component is designed. Yes, you can put
> > 10 VMs with 4C8G(4CPU, 8G RAM) on a host and 20 VMs with 2C4G on
> > another one. But if
On Tue, Jan 5, 2021 at 5:30 PM David Hildenbrand wrote:
>
> On 05.01.21 10:20, Michal Hocko wrote:
> > On Mon 04-01-21 15:00:31, Dave Hansen wrote:
> >> On 1/4/21 12:11 PM, David Hildenbrand wrote:
> Yeah, it certainly can't be the default, but it *is* useful for
> things where we know
> >>> In our production environment, there are three main applications with
> >>> such a requirement: one is QEMU [creating a VM with SR-IOV passthrough
> >>> device], the other two are DPDK related applications, DPDK OVS and SPDK
> >>> vhost; for best performance, they populate memory when
On Mon, Jan 4, 2021 at 8:56 PM Michal Hocko wrote:
>
> On Mon 21-12-20 11:25:22, Liang Li wrote:
> [...]
> > Security
> >
> > This is a weak version of "introduce init_on_alloc=1 and init_on_free=1
> > boot options", which zeroes out pages in an asy
> > Win or not depends on its effect. For our case, it solves the issue
> > that we faced, so it can be thought of as a win for us. If others don't
> > have the issue we faced, the result will be different; maybe they will
> > be affected by the side effects of this feature. I think this is your
> >
> >>> +static int
> >>> +hugepage_reporting_cycle(struct page_reporting_dev_info *prdev,
> >>> + struct hstate *h, unsigned int nid,
> >>> + struct scatterlist *sgl, unsigned int *offset)
> >>> +{
> >>> + struct list_head *list =
> > > > + spin_lock_irq(&hugetlb_lock);
> > > > +
> > > > + if (huge_page_order(h) > MAX_ORDER)
> > > > + budget = HUGEPAGE_REPORTING_CAPACITY;
> > > > + else
> > > > + budget = HUGEPAGE_REPORTING_CAPACITY * 32;
> > >
> > > Wouldn't huge_page_order always be
On Wed, Dec 23, 2020 at 4:41 PM David Hildenbrand wrote:
>
> [...]
>
> >> I was rather saying that for security it's of little use IMHO.
> >> Application/VM start up time might be improved by using huge pages (and
> >> pre-zeroing these). Free page reporting might be improved by using
> >>
> On 12/21/20 11:46 PM, Liang Li wrote:
> > Free page reporting only supports buddy pages; it can't report the
> > free pages reserved for the hugetlbfs case. On the other hand, hugetlbfs
> > is a good choice for a system with a huge amount of RAM, because it
> > ca
> On 12/22/20 11:59 AM, Alexander Duyck wrote:
> > On Mon, Dec 21, 2020 at 11:47 PM Liang Li
> > wrote:
> >> +
> >> + if (huge_page_order(h) > MAX_ORDER)
> >> + budget = HUGEPAGE_REPORTING_CAPACITY;
> >> + else
>
> > +hugepage_reporting_cycle(struct page_reporting_dev_info *prdev,
> > +struct hstate *h, unsigned int nid,
> > +struct scatterlist *sgl, unsigned int *offset)
> > +{
> > + struct list_head *list = &h->hugepage_freelists[nid];
> > +
> > QEMU uses 4K pages, THP is off
> >                 round1   round2   round3
> > w/o this patch: 23.5s    24.7s    24.6s
> > w/  this patch: 10.2s    10.3s    11.2s
> >
> > QEMU uses 4K pages, THP is on
> >
https://static.sched.com/hosted_files/kvmforum2020/51/The%20Practice%20Method%20to%20Speed%20Up%2010x%20Boot-up%20Time%20for%20Guest%20in%20Alibaba%20Cloud.pdf
> >
> > and the following link is mine:
> >
> > Free page reporting in virtio-balloon doesn't give you any guarantees
> > regarding zeroing of pages. Take a look at the QEMU implementation -
> > e.g., with vfio all reports are simply ignored.
> >
> > Also, I am not sure if mangling such details ("zeroing of pages") into
> > the page
On Tue, Dec 22, 2020 at 4:28 PM David Hildenbrand wrote:
>
> On 22.12.20 08:48, Liang Li wrote:
> > Free page reporting only supports buddy pages; it can't report the
> > free pages reserved for the hugetlbfs case. On the other hand, hugetlbfs
>
> The virtio-balloon free
On Tue, Dec 22, 2020 at 4:47 PM David Hildenbrand wrote:
>
> On 21.12.20 17:25, Liang Li wrote:
> > The first version can be found at: https://lkml.org/lkml/2020/4/12/42
> >
> > Zero out the page content usually happens when allocating pages with
> > the flag
for the virtio spec are needed.
Before that, I need feedback from the community about this new feature.
This RFC is based on my previous series:
'[RFC v2 PATCH 0/4] speed up page allocation for __GFP_ZERO'
Liang Li (3):
mm: support hugetlb free page reporting
virtio-balloon: add support
Williamson
Cc: Michael S. Tsirkin
Cc: Jason Wang
Cc: Mike Kravetz
Cc: Liang Li
Signed-off-by: Liang Li
---
mm/page_prezero.c | 17 +
1 file changed, 17 insertions(+)
diff --git a/mm/page_prezero.c b/mm/page_prezero.c
index c8ce720bfc54..dff4e0adf402 100644
--- a/mm
: Alex Williamson
Cc: Michael S. Tsirkin
Cc: Jason Wang
Cc: Mike Kravetz
Cc: Liang Li
Signed-off-by: Liang Li
---
drivers/virtio/virtio_balloon.c | 61 +
include/uapi/linux/virtio_balloon.h | 1 +
2 files changed, 62 insertions(+)
diff --git a/drivers/virtio
Hildenbrand
Cc: Michal Hocko
Cc: Andrew Morton
Cc: Alex Williamson
Cc: Michael S. Tsirkin
Cc: Jason Wang
Cc: Mike Kravetz
Cc: Liang Li
Signed-off-by: Liang Li
---
include/linux/hugetlb.h| 3 +
include/linux/page_reporting.h | 5 +
mm/hugetlb.c | 29
mm
, changes for the virtio spec are needed.
Before that, I need feedback from the community about this new feature.
Liang Li (3):
mm: support hugetlb free page reporting
virtio-balloon: add support for providing free huge page reports to
host
mm: support free hugepage pre zero out
to reduce cache
pollution.
To make the whole function work, support for pre-zeroing out free huge
pages should be added to hugetlbfs; I will send another patch for it.
Liang Li (4):
mm: let user decide page reporting option
mm: pre zero out free pages to speed up page allocation for __GFP_ZERO
Hildenbrand
Cc: Michal Hocko
Cc: Andrew Morton
Cc: Alex Williamson
Cc: Michael S. Tsirkin
Signed-off-by: Liang Li
---
drivers/virtio/virtio_balloon.c | 3 +++
include/linux/page_reporting.h | 3 +++
mm/page_reporting.c | 18 ++
mm/page_reporting.h
nbrand
Cc: Michal Hocko
Cc: Andrew Morton
Cc: Alex Williamson
Cc: Michael S. Tsirkin
Signed-off-by: Liang Li
---
include/linux/highmem.h| 31 +++-
include/linux/page-flags.h | 16 +-
include/trace/events/mmflags.h | 7 +
mm/Kconfig | 10 ++
m
to control the
waking up of reporting worker.
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Andrea Arcangeli
Cc: Dan Williams
Cc: Dave Hansen
Cc: David Hildenbrand
Cc: Michal Hocko
Cc: Andrew Morton
Cc: Alex Williamson
Cc: Michael S. Tsirkin
Signed-off-by: Liang Li
---
mm/page_reporting.c | 2 ++
mm
voluntarily if needed.
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Andrea Arcangeli
Cc: Dan Williams
Cc: Dave Hansen
Cc: David Hildenbrand
Cc: Michal Hocko
Cc: Andrew Morton
Cc: Alex Williamson
Cc: Michael S. Tsirkin
Signed-off-by: Liang Li
---
mm/page_reporting.c | 35
This patch exposes the 5-level page table feature to the VM;
at the same time, the canonical virtual address checking is
extended to support both 48-bit and 57-bit address widths,
which is the prerequisite for supporting 5-level paging guests.
Signed-off-by: Liang Li <liang.z...@intel.com>
Cc:
once the hardware supports it, and this is not a good choice because
5-level EPT requires more memory accesses compared to 4-level EPT.
The right thing is to use 5-level EPT only when it is needed; this
will change in a future version.
Signed-off-by: Liang Li <liang.z...@intel.com>
Cc: Thomas Gl
Now that we have both 4-level and 5-level page tables in 64-bit
long mode, let's rename PT64_ROOT_LEVEL to PT64_ROOT_4LEVEL,
so we can use PT64_ROOT_5LEVEL for the 5-level page table; this
helps make the code clearer.
Signed-off-by: Liang Li <liang.z...@intel.com>
Cc: Thomas Gl
level page table, with both the EPT and shadow page support. I just
covered the booting process, the guest can boot successfully.
Some parts of this patchset can be improved. Any comments on the design
or the patches would be appreciated.
Liang Li (4):
x86: Add the new CPUID and CR4 bits for 5
-by: Liang Li <liang.z...@intel.com>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Kirill A. Shutemov <kirill.shute...@linux.intel.com>
Cc: Dave Hansen <dave.han...@linux.intel.com>
Cc: Xiao Guangrong <guangrong.x...@linux.in
-by: Liang Li
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Kirill A. Shutemov
Cc: Dave Hansen
Cc: Xiao Guangrong
Cc: Paolo Bonzini
Cc: "Radim Krčmář"
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/uapi/asm/processor-flags.h | 2 ++
2 files changed, 3 insertion
Add a new feature which supports sending the page information
with range array. The current implementation uses PFNs array,
which is not very efficient. Using ranges can improve the
performance of inflating/deflating significantly.
Signed-off-by: Liang Li <liang.z...@intel.com>
Cc: Mic
.
And the hypervisor can get some of guest's runtime information
through this virtual queue too, e.g. the guest's unused page
information, which can be used for live migration optimization.
Signed-off-by: Liang Li <liang.z...@intel.com>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Mel
pages, this is very helpful to reduce the network traffic and speed
up the live migration process.
Signed-off-by: Liang Li <liang.z...@intel.com>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Mel Gorman <mgor...@techsingularity.net>
Cc: Michael S. Tsirkin <m...@redhat.c
pfn|length} down the
road. balloon_pfn_to_page() can be removed because it's useless.
Signed-off-by: Liang Li <liang.z...@intel.com>
Signed-off-by: Michael S. Tsirkin <m...@redhat.com>
Cc: Paolo Bonzini <pbonz...@redhat.com>
Cc: Cornelia Huck <cornelia.h...@de.ibm.com>
Cc: Amit
new feature, inflating the
balloon to 7GB of an 8GB idle guest takes only 590ms; the
performance improvement is about 85%.
TODO: optimize stage a by allocating/freeing a chunk of pages
instead of a single page at a time.
Signed-off-by: Liang Li <liang.z...@intel.com>
Suggested-by: Michael S.
* Use a unified way to send the free page information with the bitmap
* Address the issues referred in MST's comments
Liang Li (5):
virtio-balloon: rework deflate to add page to a list
virtio-balloon: define new feature bit and head struct
virtio-balloon: speed up inflate/deflate process
about the page bitmap. e.g. the page size, page bitmap length and
start pfn.
Signed-off-by: Liang Li <liang.z...@intel.com>
Cc: Michael S. Tsirkin <m...@redhat.com>
Cc: Paolo Bonzini <pbonz...@redhat.com>
Cc: Cornelia Huck <cornelia.h...@de.ibm.com>
Cc: Amit Shah <ami
e bitmap
* Address the issues referred in MST's comments
Liang Li (5):
virtio-balloon: rework deflate to add page to a list
virtio-balloon: define new feature bit and head struct
virtio-balloon: speed up inflate/deflate process
virtio-balloon: define flags and head for host request vq
virtio
ead of the PFN array,
which will allow faster notifications using a bitmap down the road.
balloon_pfn_to_page() can be removed because it's useless.
Signed-off-by: Liang Li <liang.z...@intel.com>
Signed-off-by: Michael S. Tsirkin <m...@redhat.com>
Cc: Paolo Bonzini <pbonz...@redhat.com&
can be corrected by the dirty
page logging mechanism.
Signed-off-by: Liang Li <liang.z...@intel.com>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Mel Gorman <mgor...@techsingularity.net>
Cc: Michael S. Tsirkin <m...@redhat.com>
Cc: Paolo Bonzini <pbonz...@re
Expose the function to get the max pfn, so it can be used in the
virtio-balloon device driver. Simply including 'linux/bootmem.h'
is not enough; if the device driver is built as a module, directly
referring to max_pfn leads to a build failure.
Signed-off-by: Liang Li <liang.z...@intel.com>
Cc:
process.
Signed-off-by: Liang Li <liang.z...@intel.com>
Cc: Michael S. Tsirkin <m...@redhat.com>
Cc: Paolo Bonzini <pbonz...@redhat.com>
Cc: Cornelia Huck <cornelia.h...@de.ibm.com>
Cc: Amit Shah <amit.s...@redhat.com>
Cc: Dave Hansen <dave.han...@intel.com>
---
d
API header file.
* Use a new way to determine the page bitmap size.
* Use a unified way to send the free page information with the bitmap
* Address the issues referred in MST's comments
Liang Li (7):
virtio-balloon: rework deflate to add page to a list
virtio-balloon: define new fea
. And the VMM hypervisor can get some
of guest's runtime information through this virtual queue, e.g. the
guest's unused page information, which can be used for live migration
optimization.
Signed-off-by: Liang Li <liang.z...@intel.com>
Cc: Michael S. Tsirkin <m...@redhat.com>
Cc: Paolo Bo
Support the request for the VM's unused page information and respond with
a page bitmap. QEMU can make use of this bitmap and the dirty page
logging mechanism to skip the transfer of these unused pages,
which is very helpful to speed up the live migration process.
Signed-off-by: Liang Li <lian