Re: [PATCH net-next 12/12] tools/virtio: fix smp_mb on x86

2018-01-25 Thread Jason Wang



On 2018-01-26 07:36, Michael S. Tsirkin wrote:

Offset 128 overlaps the last word of the redzone.
Use 132 which is always beyond that.

Signed-off-by: Michael S. Tsirkin 
---
  tools/virtio/ringtest/main.h | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/virtio/ringtest/main.h b/tools/virtio/ringtest/main.h
index 593a328..301d59b 100644
--- a/tools/virtio/ringtest/main.h
+++ b/tools/virtio/ringtest/main.h
@@ -111,7 +111,7 @@ static inline void busy_wait(void)
  }
  
  #if defined(__x86_64__) || defined(__i386__)

-#define smp_mb() asm volatile("lock; addl $0,-128(%%rsp)" ::: "memory", "cc")


Just wondering, does "rsp" work for __i386__?

Thanks
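
(For reference: the SysV i386 ABI has no red zone, and its stack pointer is
%esp, not %rsp, so the quoted macro would likely not even assemble for an
__i386__ build. A minimal per-architecture split could look like the sketch
below; this is illustrative only, not part of the patch.)

#if defined(__x86_64__)
#define smp_mb() asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc")
#elif defined(__i386__)
/* i386 has no red zone, so nothing live sits below %esp. */
#define smp_mb() asm volatile("lock; addl $0,-4(%%esp)" ::: "memory", "cc")
#endif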


+#define smp_mb() asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc")
  #else
  /*
   * Not using __ATOMIC_SEQ_CST since gcc docs say they are only synchronized



Re: [virtio-dev] Re: [PATCH v25 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT

2018-01-25 Thread Wei Wang

On 01/26/2018 10:42 AM, Michael S. Tsirkin wrote:

On Fri, Jan 26, 2018 at 09:40:44AM +0800, Wei Wang wrote:

On 01/25/2018 09:49 PM, Michael S. Tsirkin wrote:

On Thu, Jan 25, 2018 at 05:14:06PM +0800, Wei Wang wrote:




The controversy is that the free list is not static
once the lock is dropped, so everything is dynamically changing, including
the state that was recorded. The method we are using is more prudent, IMHO.
How about taking the fundamental solution first, and seeking to improve
incrementally in the future?


Best,
Wei

I'd like to see kicks happen outside the spinlock. A kick with a spinlock
taken looks like a scalability issue that won't be easy to
reproduce but hurts workloads at random, unexpected times.



Is that "kick inside the spinlock" the only concern you have? I think we
can actually remove the kick. If we look at how the host side works, it is
worthwhile to let the host poll the virtqueue after it receives the cmd
id from the guest (the kick for the cmd id isn't within the lock).



Best,
Wei


Re: [PATCH v24 1/2] mm: support reporting free page blocks

2018-01-25 Thread Wei Wang

On 01/25/2018 09:41 PM, Michael S. Tsirkin wrote:

On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:

This patch adds support to walk through the free page blocks in the
system and report them via a callback function. Some page blocks may
leave the free list after zone->lock is released, so it is the caller's
responsibility to either detect or prevent the use of such pages.

One use example of this patch is to accelerate live migration by skipping
the transfer of free pages reported from the guest. A popular method used
by the hypervisor to track which part of memory is written during live
migration is to write-protect all the guest memory. So, those pages that
are reported as free pages but are written after the report function
returns will be captured by the hypervisor, and they will be added to the
next round of memory transfer.

Signed-off-by: Wei Wang 
Signed-off-by: Liang Li 
Cc: Michal Hocko 
Cc: Michael S. Tsirkin 
Acked-by: Michal Hocko 
---
  include/linux/mm.h |  6 
  mm/page_alloc.c| 91 ++
  2 files changed, 97 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ea818ff..b3077dd 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * 
zones_size,
unsigned long zone_start_pfn, unsigned long *zholes_size);
  extern void free_initmem(void);
  
+extern void walk_free_mem_block(void *opaque,
+   int min_order,
+   bool (*report_pfn_range)(void *opaque,
+unsigned long pfn,
+unsigned long num));
+
  /*
   * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
   * into the buddy system. The freed pages will be poisoned with pattern
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 76c9688..705de22 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter, nodemask_t 
*nodemask)
show_swap_cache_info();
  }
  
+/*
+ * Walk through a free page list and report the found pfn range via the
+ * callback.
+ *
+ * Return false if the callback requests to stop reporting. Otherwise,
+ * return true.
+ */
+static bool walk_free_page_list(void *opaque,
+   struct zone *zone,
+   int order,
+   enum migratetype mt,
+   bool (*report_pfn_range)(void *,
+unsigned long,
+unsigned long))
+{
+   struct page *page;
+   struct list_head *list;
+   unsigned long pfn, flags;
+   bool ret;
+
+   spin_lock_irqsave(&zone->lock, flags);
+   list = &zone->free_area[order].free_list[mt];
+   list_for_each_entry(page, list, lru) {
+   pfn = page_to_pfn(page);
+   ret = report_pfn_range(opaque, pfn, 1 << order);
+   if (!ret)
+   break;
+   }
+   spin_unlock_irqrestore(&zone->lock, flags);
+
+   return ret;
+}

There are two issues with this API. One is that it is not
restartable: if you return false, you start from the
beginning. So there is no way to drop the lock, do something slow,
and then proceed.

Another is that you are using it to report free page hints. Presumably
the point is to drop these pages - keeping them near the head of the list
and reusing the reported ones will just make everything slower,
invalidating the hint.

How about rotating these pages towards the end of the list?
Probably not on each call: collect reported pages and then
move them to tail when we exit.
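
A minimal sketch of that rotation, reusing the names from the function
quoted above (illustrative only, not a tested patch):

	struct page *page, *next;
	LIST_HEAD(reported);

	spin_lock_irqsave(&zone->lock, flags);
	list_for_each_entry_safe(page, next, list, lru) {
		if (!report_pfn_range(opaque, page_to_pfn(page), 1 << order))
			break;
		/* pull the reported block off the free list for now */
		list_move(&page->lru, &reported);
	}
	/* put reported blocks at the tail, so un-hinted ones are reused first */
	list_splice_tail(&reported, list);
	spin_unlock_irqrestore(&zone->lock, flags);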



I'm not sure how this would help. For example, say we have a list of 2M free
page blocks:

A-->B-->C-->D-->E-->F-->G-->H

After reporting A and B, putting them at the end, and exiting, when the
caller comes back:

1) if the list remains unchanged, it will be
C-->D-->E-->F-->G-->H-->A-->B

2) or, worse, all the blocks may have been split into smaller blocks and
used by the time the caller comes back.


Where could we continue?


The reason to think about "restart" is the worry that the virtqueue gets
full, right? But we've agreed that losing some hints to report isn't
important, and in practice the virtqueue won't be full, as the host side
is faster.


I'm concerned that operating on the free list may cause more controversy,
even if it is safe from some perspectives, and the debate would be hard to
settle. If possible, we could go with the most prudent approach for
now, and have more discussions in future improvement patches. What would
you think?








+
+/**
+ * walk_free_mem_block - Walk through the free page blocks in the system
+ * @opaque: the 

Re: [virtio-dev] Re: [PATCH v25 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT

2018-01-25 Thread Michael S. Tsirkin
On Fri, Jan 26, 2018 at 09:40:44AM +0800, Wei Wang wrote:
> On 01/25/2018 09:49 PM, Michael S. Tsirkin wrote:
> > On Thu, Jan 25, 2018 at 05:14:06PM +0800, Wei Wang wrote:
> > > +
> > > +static void report_free_page_func(struct work_struct *work)
> > > +{
> > > + struct virtio_balloon *vb;
> > > + int ret;
> > > +
> > > + vb = container_of(work, struct virtio_balloon, report_free_page_work);
> > > +
> > > + /* Start by sending the received cmd id to host with an outbuf */
> > > + ret = send_cmd_id(vb, vb->cmd_id_received);
> > > + if (unlikely(ret))
> > > + goto err;
> > > +
> > > + ret = walk_free_mem_block(vb, 0, &virtio_balloon_send_free_pages);
> > > + if (unlikely(ret < 0))
> > > + goto err;
> > > +
> > > + /* End by sending a stop id to host with an outbuf */
> > > + ret = send_cmd_id(vb, VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
> > > + if (likely(!ret))
> > > + return;
> > > +err:
> > > + dev_err(&vb->vdev->dev, "%s failure: free page vq is broken\n",
> > > + __func__);
> > > +}
> > > +
> > So that's very simple, but it only works well if the whole
> > free list fits in the queue or host processes the queue faster
> > than the guest. What if it doesn't?
> 
> This is the case where the virtqueue gets full, and I think we've agreed that
> this is an optimization feature and losing some hints to report isn't
> important, right?
> 
> Actually, in the tests, there is no chance to see the ring full. If we
> check the host patches that were shared before, the device-side operation is
> quite simple: it just clears the related bits from the bitmap, and then
> continues to take entries from the virtqueue till the virtqueue gets empty.
> 
> 
> > If we had restartability you could just drop the lock
> > and wait for a vq interrupt to make more progress, which
> > would be better I think.
> > 
> 
> Restartability means that the caller needs to record the state it was in when
> it stopped last time.

See my comment on the mm patch: if you rotate the previously reported
pages towards the end, then you mostly get restartability for free,
if only per zone.
The only thing remaining will be stopping at a page you already reported.

There aren't many zones so restartability wrt zones is kind of
trivial.

> The controversy is that the free list is not static
> once the lock is dropped, so everything is dynamically changing, including
> the state that was recorded. The method we are using is more prudent, IMHO.
> How about taking the fundamental solution first, and seeking to improve
> incrementally in the future?
> 
> 
> Best,
> Wei

I'd like to see kicks happen outside the spinlock. A kick with a spinlock
taken looks like a scalability issue that won't be easy to
reproduce but hurts workloads at random, unexpected times.
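
For illustration, the usual way to get the notification out of the critical
section is the split kick API, roughly as below (a sketch; the lock here is
hypothetical, and the buffers are assumed to be added under it):

	unsigned long flags;
	bool kick;

	spin_lock_irqsave(&lock, flags);
	/* ... virtqueue_add_outbuf() calls under the lock ... */
	kick = virtqueue_kick_prepare(vq);	/* cheap check, under the lock */
	spin_unlock_irqrestore(&lock, flags);
	if (kick)
		virtqueue_notify(vq);		/* costly exit, outside the lock */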

-- 
MST


Re: [PATCH v25 1/2 RESEND] mm: support reporting free page blocks

2018-01-25 Thread Wei Wang

On 01/26/2018 06:41 AM, Andrew Morton wrote:

On Thu, 25 Jan 2018 17:38:27 +0800 Wei Wang  wrote:


This patch adds support to walk through the free page blocks in the
system and report them via a callback function. Some page blocks may
leave the free list after zone->lock is released, so it is the caller's
responsibility to either detect or prevent the use of such pages.

One use example of this patch is to accelerate live migration by skipping
the transfer of free pages reported from the guest. A popular method used
by the hypervisor to track which part of memory is written during live
migration is to write-protect all the guest memory. So, those pages that
are reported as free pages but are written after the report function
returns will be captured by the hypervisor, and they will be added to the
next round of memory transfer.

It would be useful if we had some quantitative testing results, so we
can see the real-world benefits from this change.



Sure, thanks for the reminder. I'll also add this to the cover
letter:


Without this feature, locally live migrating an 8G idle guest takes
~2286 ms. With this feature, it takes ~260 ms, which reduces the
migration time to ~11% of the original.


An idle guest here means a guest that doesn't run any specific workload after
boot. The improvement depends on how much free memory the guest has, so
an idle guest is a good case to show the improvement. From the optimization
point of view, having something is better than nothing, IMHO. If the
guest has less free memory, the improvement will be smaller, but still
better than no improvement.


Best,
Wei




[PATCH net-next 12/12] tools/virtio: fix smp_mb on x86

2018-01-25 Thread Michael S. Tsirkin
Offset 128 overlaps the last word of the redzone.
Use 132 which is always beyond that.
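
For context, assuming the SysV x86-64 ABI: the red zone is the 128 bytes just
below %rsp, which leaf functions may use for live data, so the two offsets
compare as follows.

/* red zone: [rsp-128, rsp-1]
 * addl $0, -128(%rsp) touches rsp-128..rsp-125  -> inside the red zone
 * addl $0, -132(%rsp) touches rsp-132..rsp-129  -> entirely below it
 */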

Signed-off-by: Michael S. Tsirkin 
---
 tools/virtio/ringtest/main.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/virtio/ringtest/main.h b/tools/virtio/ringtest/main.h
index 593a328..301d59b 100644
--- a/tools/virtio/ringtest/main.h
+++ b/tools/virtio/ringtest/main.h
@@ -111,7 +111,7 @@ static inline void busy_wait(void)
 } 
 
 #if defined(__x86_64__) || defined(__i386__)
-#define smp_mb() asm volatile("lock; addl $0,-128(%%rsp)" ::: "memory", "cc")
+#define smp_mb() asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc")
 #else
 /*
  * Not using __ATOMIC_SEQ_CST since gcc docs say they are only synchronized
-- 
MST



[PATCH net-next 10/12] tools/virtio: more stubs to fix tools build

2018-01-25 Thread Michael S. Tsirkin
Signed-off-by: Michael S. Tsirkin 
---
 tools/virtio/linux/kernel.h  | 2 +-
 tools/virtio/linux/thread_info.h | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)
 create mode 100644 tools/virtio/linux/thread_info.h

diff --git a/tools/virtio/linux/kernel.h b/tools/virtio/linux/kernel.h
index 395521a..fca8381 100644
--- a/tools/virtio/linux/kernel.h
+++ b/tools/virtio/linux/kernel.h
@@ -118,7 +118,7 @@ static inline void free_page(unsigned long addr)
 #define dev_err(dev, format, ...) fprintf (stderr, format, ## __VA_ARGS__)
 #define dev_warn(dev, format, ...) fprintf (stderr, format, ## __VA_ARGS__)
 
-#define WARN_ON_ONCE(cond) ((cond) && fprintf (stderr, "WARNING\n"))
+#define WARN_ON_ONCE(cond) ((cond) ? fprintf (stderr, "WARNING\n") : 0)
 
 #define min(x, y) ({   \
typeof(x) _min1 = (x);  \
diff --git a/tools/virtio/linux/thread_info.h b/tools/virtio/linux/thread_info.h
new file mode 100644
index 000..e0f610d
--- /dev/null
+++ b/tools/virtio/linux/thread_info.h
@@ -0,0 +1 @@
+#define check_copy_size(A, B, C) (1)
-- 
MST



[PATCH net-next 09/12] tools/virtio: switch to __ptr_ring_empty

2018-01-25 Thread Michael S. Tsirkin
We don't rely on lockless guarantees, but it
seems cleaner than inverting __ptr_ring_peek.

Signed-off-by: Michael S. Tsirkin 
---
 tools/virtio/ringtest/ptr_ring.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/virtio/ringtest/ptr_ring.c b/tools/virtio/ringtest/ptr_ring.c
index e6e8130..477899c 100644
--- a/tools/virtio/ringtest/ptr_ring.c
+++ b/tools/virtio/ringtest/ptr_ring.c
@@ -187,7 +187,7 @@ bool enable_kick()
 
 bool avail_empty()
 {
-   return !__ptr_ring_peek(&array);
+   return __ptr_ring_empty(&array);
 }
 
 bool use_buf(unsigned *lenp, void **bufp)
-- 
MST



[PATCH net-next 11/12] tools/virtio: copy READ/WRITE_ONCE

2018-01-25 Thread Michael S. Tsirkin
This is to make ptr_ring test build again.

Signed-off-by: Michael S. Tsirkin 
---
 tools/virtio/ringtest/main.h | 57 
 1 file changed, 57 insertions(+)

diff --git a/tools/virtio/ringtest/main.h b/tools/virtio/ringtest/main.h
index 5706e07..593a328 100644
--- a/tools/virtio/ringtest/main.h
+++ b/tools/virtio/ringtest/main.h
@@ -134,4 +134,61 @@ static inline void busy_wait(void)
 barrier(); \
 } while (0)
 
+#if defined(__i386__) || defined(__x86_64__) || defined(__s390x__)
+#define smp_wmb() barrier()
+#else
+#define smp_wmb() smp_release()
+#endif
+
+#ifdef __alpha__
+#define smp_read_barrier_depends() smp_acquire()
+#else
+#define smp_read_barrier_depends() do {} while(0)
+#endif
+
+static __always_inline
+void __read_once_size(const volatile void *p, void *res, int size)
+{
+switch (size) { \
+case 1: *(unsigned char *)res = *(volatile unsigned char *)p; break;   \
+case 2: *(unsigned short *)res = *(volatile unsigned short *)p; break; \
+case 4: *(unsigned int *)res = *(volatile unsigned int *)p; break; \
+case 8: *(unsigned long long *)res = *(volatile unsigned long long *)p; break;\
+default:\
+barrier();  \
+__builtin_memcpy((void *)res, (const void *)p, size);   \
+barrier();  \
+}   \
+}
+
+static __always_inline void __write_once_size(volatile void *p, void *res, int size)
+{
+   switch (size) {
+   case 1: *(volatile unsigned char *)p = *(unsigned char *)res; break;
+   case 2: *(volatile unsigned short *)p = *(unsigned short *)res; break;
+   case 4: *(volatile unsigned int *)p = *(unsigned int *)res; break;
+   case 8: *(volatile unsigned long long *)p = *(unsigned long long *)res; break;
+   default:
+   barrier();
+   __builtin_memcpy((void *)p, (const void *)res, size);
+   barrier();
+   }
+}
+
+#define READ_ONCE(x) \
+({ \
+   union { typeof(x) __val; char __c[1]; } __u;\
+   __read_once_size(&(x), __u.__c, sizeof(x)); \
+   smp_read_barrier_depends(); /* Enforce dependency ordering from x */ \
+   __u.__val;  \
+})
+
+#define WRITE_ONCE(x, val) \
+({ \
+   union { typeof(x) __val; char __c[1]; } __u =   \
+   { .__val = (typeof(x)) (val) }; \
+   __write_once_size(&(x), __u.__c, sizeof(x));\
+   __u.__val;  \
+})
+
 #endif
-- 
MST
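
As a quick illustration of what the copied macros guarantee (a hypothetical
harness snippet, not from the patch): the compiler must emit exactly one
untorn load or store and cannot cache the value across calls.

static int flag;

static void producer(void)
{
	WRITE_ONCE(flag, 1);	/* single, untorn store */
}

static int consumer_sees_flag(void)
{
	return READ_ONCE(flag);	/* fresh load from memory on each call */
}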



Re: [PATCH v25 1/2 RESEND] mm: support reporting free page blocks

2018-01-25 Thread Andrew Morton
On Thu, 25 Jan 2018 17:38:27 +0800 Wei Wang  wrote:

> This patch adds support to walk through the free page blocks in the
> system and report them via a callback function. Some page blocks may
> leave the free list after zone->lock is released, so it is the caller's
> responsibility to either detect or prevent the use of such pages.
> 
> One use example of this patch is to accelerate live migration by skipping
> the transfer of free pages reported from the guest. A popular method used
> by the hypervisor to track which part of memory is written during live
> migration is to write-protect all the guest memory. So, those pages that
> are reported as free pages but are written after the report function
> returns will be captured by the hypervisor, and they will be added to the
> next round of memory transfer.

It would be useful if we had some quantitative testing results, so we
can see the real-world benefits from this change.



Re: [PATCH v24 1/2] mm: support reporting free page blocks

2018-01-25 Thread Michael S. Tsirkin
On Thu, Jan 25, 2018 at 09:56:01AM -0500, Pankaj Gupta wrote:
> 
> > 
> > On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
> > > This patch adds support to walk through the free page blocks in the
> > > system and report them via a callback function. Some page blocks may
> > > leave the free list after zone->lock is released, so it is the caller's
> > > responsibility to either detect or prevent the use of such pages.
> > > 
> > > One use example of this patch is to accelerate live migration by skipping
> > > the transfer of free pages reported from the guest. A popular method used
> > > by the hypervisor to track which part of memory is written during live
> > > migration is to write-protect all the guest memory. So, those pages that
> > > are reported as free pages but are written after the report function
> > > returns will be captured by the hypervisor, and they will be added to the
> > > next round of memory transfer.
> > > 
> > > Signed-off-by: Wei Wang 
> > > Signed-off-by: Liang Li 
> > > Cc: Michal Hocko 
> > > Cc: Michael S. Tsirkin 
> > > Acked-by: Michal Hocko 
> > > ---
> > >  include/linux/mm.h |  6 
> > >  mm/page_alloc.c| 91
> > >  ++
> > >  2 files changed, 97 insertions(+)
> > > 
> > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > index ea818ff..b3077dd 100644
> > > --- a/include/linux/mm.h
> > > +++ b/include/linux/mm.h
> > > @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned
> > > long * zones_size,
> > >   unsigned long zone_start_pfn, unsigned long *zholes_size);
> > >  extern void free_initmem(void);
> > >  
> > > +extern void walk_free_mem_block(void *opaque,
> > > + int min_order,
> > > + bool (*report_pfn_range)(void *opaque,
> > > +  unsigned long pfn,
> > > +  unsigned long num));
> > > +
> > >  /*
> > >   * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
> > >   * into the buddy system. The freed pages will be poisoned with pattern
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index 76c9688..705de22 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter, 
> > > nodemask_t
> > > *nodemask)
> > >   show_swap_cache_info();
> > >  }
> > >  
> > > +/*
> > > + * Walk through a free page list and report the found pfn range via the
> > > + * callback.
> > > + *
> > > + * Return false if the callback requests to stop reporting. Otherwise,
> > > + * return true.
> > > + */
> > > +static bool walk_free_page_list(void *opaque,
> > > + struct zone *zone,
> > > + int order,
> > > + enum migratetype mt,
> > > + bool (*report_pfn_range)(void *,
> > > +  unsigned long,
> > > +  unsigned long))
> > > +{
> > > + struct page *page;
> > > + struct list_head *list;
> > > + unsigned long pfn, flags;
> > > + bool ret;
> > > +
> > > + spin_lock_irqsave(&zone->lock, flags);
> > > + list = &zone->free_area[order].free_list[mt];
> > > + list_for_each_entry(page, list, lru) {
> > > + pfn = page_to_pfn(page);
> > > + ret = report_pfn_range(opaque, pfn, 1 << order);
> > > + if (!ret)
> > > + break;
> > > + }
> > > + spin_unlock_irqrestore(&zone->lock, flags);
> > > +
> > > + return ret;
> > > +}
> > 
> > There are two issues with this API. One is that it is not
> > restartable: if you return false, you start from the
> > beginning. So there is no way to drop the lock, do something slow,
> > and then proceed.
> > 
> > Another is that you are using it to report free page hints. Presumably
> > the point is to drop these pages - keeping them near the head of the list
> > and reusing the reported ones will just make everything slower,
> > invalidating the hint.
> 
> I think that's where the patches[1] by Nitesh will help: that patch set
> sends free page hints transparently to the host, and the host decides to
> drop such pages.
>
> Compared with the patch set by Wei, where the host asks for free page
> hints and ignores such pages during live migration: as already discussed,
> if free pages are still in guest memory, there is no point in traversing
> and fetching all such pages again.
> 
> [1] https://www.spinics.net/lists/kvm/msg159790.html

The main difference between Wei's and Nitesh's patches is that Nitesh's add
hints to the page alloc/free path. This is more risky performance-wise: you
need to enable it at guest boot, not just at migration time.
Maybe the overhead isn't big; unfortunately, no one has posted any
numbers yet.


-- 
MST

Re: [PATCH v24 1/2] mm: support reporting free page blocks

2018-01-25 Thread Pankaj Gupta

> 
> On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
> > This patch adds support to walk through the free page blocks in the
> > system and report them via a callback function. Some page blocks may
> > leave the free list after zone->lock is released, so it is the caller's
> > responsibility to either detect or prevent the use of such pages.
> > 
> > One use example of this patch is to accelerate live migration by skipping
> > the transfer of free pages reported from the guest. A popular method used
> > by the hypervisor to track which part of memory is written during live
> > migration is to write-protect all the guest memory. So, those pages that
> > are reported as free pages but are written after the report function
> > returns will be captured by the hypervisor, and they will be added to the
> > next round of memory transfer.
> > 
> > Signed-off-by: Wei Wang 
> > Signed-off-by: Liang Li 
> > Cc: Michal Hocko 
> > Cc: Michael S. Tsirkin 
> > Acked-by: Michal Hocko 
> > ---
> >  include/linux/mm.h |  6 
> >  mm/page_alloc.c| 91
> >  ++
> >  2 files changed, 97 insertions(+)
> > 
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index ea818ff..b3077dd 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned
> > long * zones_size,
> > unsigned long zone_start_pfn, unsigned long *zholes_size);
> >  extern void free_initmem(void);
> >  
> > +extern void walk_free_mem_block(void *opaque,
> > +   int min_order,
> > +   bool (*report_pfn_range)(void *opaque,
> > +unsigned long pfn,
> > +unsigned long num));
> > +
> >  /*
> >   * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
> >   * into the buddy system. The freed pages will be poisoned with pattern
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 76c9688..705de22 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter, nodemask_t
> > *nodemask)
> > show_swap_cache_info();
> >  }
> >  
> > +/*
> > + * Walk through a free page list and report the found pfn range via the
> > + * callback.
> > + *
> > + * Return false if the callback requests to stop reporting. Otherwise,
> > + * return true.
> > + */
> > +static bool walk_free_page_list(void *opaque,
> > +   struct zone *zone,
> > +   int order,
> > +   enum migratetype mt,
> > +   bool (*report_pfn_range)(void *,
> > +unsigned long,
> > +unsigned long))
> > +{
> > +   struct page *page;
> > +   struct list_head *list;
> > +   unsigned long pfn, flags;
> > +   bool ret;
> > +
> > +   spin_lock_irqsave(&zone->lock, flags);
> > +   list = &zone->free_area[order].free_list[mt];
> > +   list_for_each_entry(page, list, lru) {
> > +   pfn = page_to_pfn(page);
> > +   ret = report_pfn_range(opaque, pfn, 1 << order);
> > +   if (!ret)
> > +   break;
> > +   }
> > +   spin_unlock_irqrestore(&zone->lock, flags);
> > +
> > +   return ret;
> > +}
> 
> There are two issues with this API. One is that it is not
> restartable: if you return false, you start from the
> beginning. So there is no way to drop the lock, do something slow,
> and then proceed.
> 
> Another is that you are using it to report free page hints. Presumably
> the point is to drop these pages - keeping them near the head of the list
> and reusing the reported ones will just make everything slower,
> invalidating the hint.

I think that's where the patches[1] by Nitesh will help: that patch set
sends free page hints transparently to the host, and the host decides to drop
such pages.

Compared with the patch set by Wei, where the host asks for free page hints
and ignores such pages during live migration: as already discussed, if free
pages are still in guest memory, there is no point in traversing and fetching
all such pages again.

[1] https://www.spinics.net/lists/kvm/msg159790.html

> 
> How about rotating these pages towards the end of the list?
> Probably not on each call: collect reported pages and then
> move them to tail when we exit.
> 
> Of course it's possible not all reporters want this.
> So maybe change the callback to return int:
>  0 - page reported, move page to end of free list
>  > 0 - page skipped, proceed
>  < 0 - stop processing
> 
> 
> > +
> > +/**
> > + * walk_free_mem_block - Walk through the free page blocks in the system
> > + * @opaque: the context passed from the caller
> > + * @min_order: the minimum order of 

[PATCH net] vhost_net: stop device during reset owner

2018-01-25 Thread Jason Wang
We don't stop the device before resetting the owner, which means we could
try to serve a virtqueue kick before dev->worker is reset. This results in a
warning, since the work is still pending on the llist while the owner is
being reset. Fix this by stopping the device during owner reset.
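
Condensed, the reset-owner path after the fix orders its steps like this (a
sketch annotated from the diff below; error handling omitted):

	vhost_net_stop(n, &tx_sock, &rx_sock);	/* detach backend sockets */
	vhost_net_flush(n);			/* drain work already queued */
	vhost_dev_stop(&n->dev);		/* stop vqs before the worker is reset */
	vhost_dev_reset_owner(&n->dev, umem);	/* now safe: no pending work */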

Reported-by: syzbot+eb17c6162478cc506...@syzkaller.appspotmail.com
Fixes: 3a4d5c94e9593 ("vhost_net: a kernel-level virtio server")
Signed-off-by: Jason Wang 
---
 drivers/vhost/net.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index c7bdeb6..5636c7c 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -1208,6 +1208,7 @@ static long vhost_net_reset_owner(struct vhost_net *n)
}
vhost_net_stop(n, &tx_sock, &rx_sock);
vhost_net_flush(n);
+   vhost_dev_stop(&n->dev);
vhost_dev_reset_owner(>dev, umem);
vhost_net_vq_reset(n);
 done:
-- 
2.7.4



Re: [PATCH v25 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT

2018-01-25 Thread Michael S. Tsirkin
On Thu, Jan 25, 2018 at 05:14:06PM +0800, Wei Wang wrote:
> Negotiation of the VIRTIO_BALLOON_F_FREE_PAGE_HINT feature indicates the
> support of reporting hints of guest free pages to host via virtio-balloon.
> 
> Host requests the guest to report free pages by sending a new cmd
> id to the guest via the free_page_report_cmd_id configuration register.
> 
> When the guest starts to report, the first element added to the free page
> vq is the cmd id given by host. When the guest finishes the reporting
> of all the free pages, VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID is added
> to the vq to tell the host that the reporting is done. The host may also
> request the guest to stop reporting in advance by sending the stop cmd id
> to the guest via the configuration register.
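
Condensed, the handshake described above is (a sketch of the flow, not code
from the patch):

/* host:  writes cmd id into config space       -> config-changed event
 * guest: sends that cmd id on the free page vq -> report starts
 * guest: sends free page hints on the vq       -> zero or more buffers
 * guest: sends FREE_PAGE_REPORT_STOP_ID        -> report done
 * host:  may write the stop id to config space -> stop the guest early
 */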
> 
> Signed-off-by: Wei Wang 
> Signed-off-by: Liang Li 
> Cc: Michael S. Tsirkin 
> Cc: Michal Hocko 
> ---
>  drivers/virtio/virtio_balloon.c | 251 
> ++--
>  include/uapi/linux/virtio_balloon.h |   7 +
>  2 files changed, 222 insertions(+), 36 deletions(-)
> 
> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> index a1fb52c..114985b 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -51,9 +51,22 @@ MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
>  static struct vfsmount *balloon_mnt;
>  #endif
>  
> +enum virtio_balloon_vq {
> + VIRTIO_BALLOON_VQ_INFLATE,
> + VIRTIO_BALLOON_VQ_DEFLATE,
> + VIRTIO_BALLOON_VQ_STATS,
> + VIRTIO_BALLOON_VQ_FREE_PAGE,
> + VIRTIO_BALLOON_VQ_MAX
> +};
> +
>  struct virtio_balloon {
>   struct virtio_device *vdev;
> - struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
> + struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *free_page_vq;
> +
> + /* Balloon's own wq for cpu-intensive work items */
> + struct workqueue_struct *balloon_wq;
> + /* The free page reporting work item submitted to the balloon wq */
> + struct work_struct report_free_page_work;
>  
>   /* The balloon servicing is delegated to a freezable workqueue. */
>   struct work_struct update_balloon_stats_work;
> @@ -63,6 +76,11 @@ struct virtio_balloon {
>   spinlock_t stop_update_lock;
>   bool stop_update;
>  
> + /* The new cmd id received from host */
> + uint32_t cmd_id_received;
> + /* The cmd id that is in use */
> + __virtio32 cmd_id_use;
> +
>   /* Waiting for host to ack the pages we released. */
>   wait_queue_head_t acked;
>  
> @@ -316,17 +334,6 @@ static void stats_handle_request(struct virtio_balloon 
> *vb)
>   virtqueue_kick(vq);
>  }
>  
> -static void virtballoon_changed(struct virtio_device *vdev)
> -{
> - struct virtio_balloon *vb = vdev->priv;
> - unsigned long flags;
> -
> - spin_lock_irqsave(&vb->stop_update_lock, flags);
> - if (!vb->stop_update)
> - queue_work(system_freezable_wq, &vb->update_balloon_size_work);
> - spin_unlock_irqrestore(&vb->stop_update_lock, flags);
> -}
> -
>  static inline s64 towards_target(struct virtio_balloon *vb)
>  {
>   s64 target;
> @@ -343,6 +350,34 @@ static inline s64 towards_target(struct virtio_balloon 
> *vb)
>   return target - vb->num_pages;
>  }
>  
> +static void virtballoon_changed(struct virtio_device *vdev)
> +{
> + struct virtio_balloon *vb = vdev->priv;
> + unsigned long flags;
> + s64 diff = towards_target(vb);
> +
> + if (diff) {
> + spin_lock_irqsave(&vb->stop_update_lock, flags);
> + if (!vb->stop_update)
> + queue_work(system_freezable_wq,
> +    &vb->update_balloon_size_work);
> + spin_unlock_irqrestore(&vb->stop_update_lock, flags);
> + }
> +
> + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
> + virtio_cread(vdev, struct virtio_balloon_config,
> +  free_page_report_cmd_id, &vb->cmd_id_received);
> + if (vb->cmd_id_received !=
> + VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID) {
> + spin_lock_irqsave(&vb->stop_update_lock, flags);
> + if (!vb->stop_update)
> + queue_work(vb->balloon_wq,
> +    &vb->report_free_page_work);
> + spin_unlock_irqrestore(&vb->stop_update_lock, flags);
> + }
> + }
> +}
> +
>  static void update_balloon_size(struct virtio_balloon *vb)
>  {
>   u32 actual = vb->num_pages;
> @@ -417,42 +452,151 @@ static void update_balloon_size_func(struct 
> work_struct *work)
>  
>  static int init_vqs(struct virtio_balloon *vb)
>  {
> - struct virtqueue *vqs[3];
> - vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request 
> };
> - static const char * const names[] = { "inflate", "deflate", "stats" };
> - int err, nvqs;
> + 

Re: [PATCH v25 1/2] mm: support reporting free page blocks

2018-01-25 Thread Michael S. Tsirkin
On Thu, Jan 25, 2018 at 05:14:05PM +0800, Wei Wang wrote:
> This patch adds support to walk through the free page blocks in the
> system and report them via a callback function. Some page blocks may
> leave the free list after zone->lock is released, so it is the caller's
> responsibility to either detect or prevent the use of such pages.
> 
> One use example of this patch is to accelerate live migration by skipping
> the transfer of free pages reported from the guest. A popular method used
> by the hypervisor to track which part of memory is written during live
> migration is to write-protect all the guest memory. So, those pages that
> are reported as free pages but are written after the report function
> returns will be captured by the hypervisor, and they will be added to the
> next round of memory transfer.
> 
> Signed-off-by: Wei Wang 
> Signed-off-by: Liang Li 
> Cc: Michal Hocko 
> Cc: Michael S. Tsirkin 
> Acked-by: Michal Hocko 

Commented on v24 that this should be restartable. That comment
still applies.

> ---
>  include/linux/mm.h |  6 
>  mm/page_alloc.c| 96 
> ++
>  2 files changed, 102 insertions(+)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ea818ff..e65ae2e 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long 
> * zones_size,
>   unsigned long zone_start_pfn, unsigned long *zholes_size);
>  extern void free_initmem(void);
>  
> +extern int walk_free_mem_block(void *opaque,
> +int min_order,
> +int (*report_pfn_range)(void *opaque,
> +unsigned long pfn,
> +unsigned long num));
> +
>  /*
>   * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
>   * into the buddy system. The freed pages will be poisoned with pattern
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 76c9688..0f08039 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4899,6 +4899,102 @@ void show_free_areas(unsigned int filter, nodemask_t 
> *nodemask)
>   show_swap_cache_info();
>  }
>  
> +/*
> + * Walk through a free page list and report the found pfn range via the
> + * callback.
> + *
> + * Return 0 if it completes the reporting. Otherwise, return the non-zero
> + * value returned from the callback.
> + */
> +static int walk_free_page_list(void *opaque,
> +struct zone *zone,
> +int order,
> +enum migratetype mt,
> +int (*report_pfn_range)(void *,
> +unsigned long,
> +unsigned long))
> +{
> + struct page *page;
> + struct list_head *list;
> + unsigned long pfn, flags;
> + bool ret = 0;
> +
> + spin_lock_irqsave(&zone->lock, flags);
> + list = &zone->free_area[order].free_list[mt];
> + list_for_each_entry(page, list, lru) {
> + pfn = page_to_pfn(page);
> + ret = report_pfn_range(opaque, pfn, 1 << order);
> + if (ret)
> + break;
> + }
> + spin_unlock_irqrestore(&zone->lock, flags);
> +
> + return ret;
> +}
> +
> +/**
> + * walk_free_mem_block - Walk through the free page blocks in the system
> + * @opaque: the context passed from the caller
> + * @min_order: the minimum order of free lists to check
> + * @report_pfn_range: the callback to report the pfn range of the free pages
> + *
> + * If the callback returns a non-zero value, stop iterating the list of free
> + * page blocks. Otherwise, continue to report.
> + *
> + * Please note that there are no locking guarantees for the callback and
> + * that the reported pfn range might be freed or disappear after the
> + * callback returns so the caller has to be very careful how it is used.
> + *
> + * The callback itself must not sleep or perform any operations which would
> + * require any memory allocations directly (not even GFP_NOWAIT/GFP_ATOMIC)
> + * or via any lock dependency. It is generally advisable to implement
> + * the callback as simple as possible and defer any heavy lifting to a
> + * different context.
> + *
> + * There is no guarantee that each free range will be reported only once
> + * during one walk_free_mem_block invocation.
> + *
> + * pfn_to_page on the given range is strongly discouraged and if there is
> + * an absolute need for that make sure to contact MM people to discuss
> + * potential problems.
> + *
> + * The function itself might sleep so it cannot be called from atomic
> + * contexts.
> + *
> + * In general low orders tend to be very volatile and so it makes more
> + * sense to 

Re: [PATCH v24 1/2] mm: support reporting free page blocks

2018-01-25 Thread Michael S. Tsirkin
On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
> This patch adds support to walk through the free page blocks in the
> system and report them via a callback function. Some page blocks may
> leave the free list after zone->lock is released, so it is the caller's
> responsibility to either detect or prevent the use of such pages.
> 
> One use example of this patch is to accelerate live migration by skipping
> the transfer of free pages reported from the guest. A popular method used
> by the hypervisor to track which part of memory is written during live
> migration is to write-protect all the guest memory. So, those pages that
> are reported as free pages but are written after the report function
> returns will be captured by the hypervisor, and they will be added to the
> next round of memory transfer.
> 
> Signed-off-by: Wei Wang 
> Signed-off-by: Liang Li 
> Cc: Michal Hocko 
> Cc: Michael S. Tsirkin 
> Acked-by: Michal Hocko 
> ---
>  include/linux/mm.h |  6 
>  mm/page_alloc.c| 91 
> ++
>  2 files changed, 97 insertions(+)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ea818ff..b3077dd 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long 
> * zones_size,
>   unsigned long zone_start_pfn, unsigned long *zholes_size);
>  extern void free_initmem(void);
>  
> +extern void walk_free_mem_block(void *opaque,
> + int min_order,
> + bool (*report_pfn_range)(void *opaque,
> +  unsigned long pfn,
> +  unsigned long num));
> +
>  /*
>   * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
>   * into the buddy system. The freed pages will be poisoned with pattern
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 76c9688..705de22 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter, nodemask_t 
> *nodemask)
>   show_swap_cache_info();
>  }
>  
> +/*
> + * Walk through a free page list and report the found pfn range via the
> + * callback.
> + *
> + * Return false if the callback requests to stop reporting. Otherwise,
> + * return true.
> + */
> +static bool walk_free_page_list(void *opaque,
> + struct zone *zone,
> + int order,
> + enum migratetype mt,
> + bool (*report_pfn_range)(void *,
> +  unsigned long,
> +  unsigned long))
> +{
> + struct page *page;
> + struct list_head *list;
> + unsigned long pfn, flags;
> + bool ret;
> +
> + spin_lock_irqsave(&zone->lock, flags);
> + list = &zone->free_area[order].free_list[mt];
> + list_for_each_entry(page, list, lru) {
> + pfn = page_to_pfn(page);
> + ret = report_pfn_range(opaque, pfn, 1 << order);
> + if (!ret)
> + break;
> + }
> + spin_unlock_irqrestore(&zone->lock, flags);
> +
> + return ret;
> +}

There are two issues with this API. One is that it is not
restartable: if you return false, you start from the
beginning. So there is no way to drop the lock, do something slow,
and then proceed.

Another is that you are using it to report free page hints. Presumably
the point is to drop these pages - keeping them near the head of the list
and reusing the reported ones will just make everything slower,
invalidating the hint.

How about rotating these pages towards the end of the list?
Probably not on each call: collect reported pages and then
move them to tail when we exit.

Of course it's possible not all reporters want this.
So maybe change the callback to return int:
 0 - page reported, move page to end of free list
 > 0 - page skipped, proceed
 < 0 - stop processing
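
As a sketch of that three-way contract (the callback and its context here are
hypothetical, just to make the proposal concrete):

#include <linux/errno.h>

struct hint_ctx {			/* hypothetical reporter state */
	unsigned long *pfns;
	unsigned long used, capacity;
};

static int hint_pfn_range(void *opaque, unsigned long pfn,
			  unsigned long num)
{
	struct hint_ctx *ctx = opaque;

	if (num < 512)
		return 1;		/* > 0: skip small blocks, keep walking */
	if (ctx->used == ctx->capacity)
		return -ENOSPC;		/* < 0: stop processing */
	ctx->pfns[ctx->used++] = pfn;
	return 0;			/* 0: reported; rotate block to list tail */
}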


> +
> +/**
> + * walk_free_mem_block - Walk through the free page blocks in the system
> + * @opaque: the context passed from the caller
> + * @min_order: the minimum order of free lists to check
> + * @report_pfn_range: the callback to report the pfn range of the free pages
> + *
> + * If the callback returns false, stop iterating the list of free page 
> blocks.
> + * Otherwise, continue to report.
> + *
> + * Please note that there are no locking guarantees for the callback and
> + * that the reported pfn range might be freed or disappear after the
> + * callback returns so the caller has to be very careful how it is used.
> + *
> + * The callback itself must not sleep or perform any operations which would
> + * require any memory allocations directly (not even GFP_NOWAIT/GFP_ATOMIC)
> + * or 

Re: [PATCH v24 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT

2018-01-25 Thread Wei Wang

On 01/25/2018 07:28 PM, Tetsuo Handa wrote:

On 2018/01/25 12:32, Wei Wang wrote:

On 01/25/2018 01:15 AM, Michael S. Tsirkin wrote:

On Wed, Jan 24, 2018 at 06:42:42PM +0800, Wei Wang wrote:
+
+static void report_free_page_func(struct work_struct *work)
+{
+struct virtio_balloon *vb;
+unsigned long flags;
+
+vb = container_of(work, struct virtio_balloon, report_free_page_work);
+
+/* Start by sending the obtained cmd id to the host with an outbuf */
+send_cmd_id(vb, &vb->start_cmd_id);
+
+/*
+ * Set start_cmd_id to VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID to
+ * indicate a new request can be queued.
+ */
+spin_lock_irqsave(&vb->stop_update_lock, flags);
+vb->start_cmd_id = cpu_to_virtio32(vb->vdev,
+VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
+spin_unlock_irqrestore(&vb->stop_update_lock, flags);
+
+walk_free_mem_block(vb, 0, &virtio_balloon_send_free_pages);
Can you teach walk_free_mem_block to return the && of all
return calls, so caller knows whether it completed?

There will be two cases that can cause walk_free_mem_block to return without 
completing:
1) host requests to stop in advance
2) vq->broken

How about letting walk_free_mem_block simply return the value returned by its 
callback (i.e. virtio_balloon_send_free_pages)?

For host requests to stop, it returns "1", and the above only bails out when
walk_free_mem_block returns a "< 0" value.

I feel that virtio_balloon_send_free_pages is doing too much heavy work.

It can be called many times with IRQs disabled. The number of times
it is called depends on the amount of free pages (and the fragmentation
state). Generally: more free pages, more calls.

Then, why don't you allocate some pages for holding all the pfn values,
call walk_free_mem_block() only to store the pfn values, and then
send the pfn values without disabling IRQs?


We have actually tried many methods for this feature before, and what
you suggested is one of them; you can find the related
discussion in earlier versions. In addition to the complexity of that
method (if thinking deeper along that line), I can share the performance
(live migration time) comparison of that method with the one in
this patch: ~405 ms vs. ~260 ms.


The things you are worried about have also been discussed. The
strategy is that we start with something fundamental and improve
incrementally (if you check earlier versions, we also had a method
with finer-grained locking, but we decided to leave that to a
future improvement out of prudence). If possible, please let
Michael review this patch; he already knows all those things. We will
finish this feature as soon as possible, and then discuss another one
with you if you want. Thanks.


Best,
Wei





Re: [PATCH v24 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT

2018-01-25 Thread Tetsuo Handa
On 2018/01/25 12:32, Wei Wang wrote:
> On 01/25/2018 01:15 AM, Michael S. Tsirkin wrote:
>> On Wed, Jan 24, 2018 at 06:42:42PM +0800, Wei Wang wrote:
>> +
>> +static void report_free_page_func(struct work_struct *work)
>> +{
>> +struct virtio_balloon *vb;
>> +unsigned long flags;
>> +
>> +vb = container_of(work, struct virtio_balloon, report_free_page_work);
>> +
>> +/* Start by sending the obtained cmd id to the host with an outbuf */
>> +send_cmd_id(vb, &vb->start_cmd_id);
>> +
>> +/*
>> + * Set start_cmd_id to VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID to
>> + * indicate a new request can be queued.
>> + */
>> +spin_lock_irqsave(&vb->stop_update_lock, flags);
>> +vb->start_cmd_id = cpu_to_virtio32(vb->vdev,
>> +VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
>> +spin_unlock_irqrestore(&vb->stop_update_lock, flags);
>> +
>> +walk_free_mem_block(vb, 0, &virtio_balloon_send_free_pages);
>> Can you teach walk_free_mem_block to return the && of all
>> return calls, so caller knows whether it completed?
> 
> There will be two cases that can cause walk_free_mem_block to return without 
> completing:
> 1) host requests to stop in advance
> 2) vq->broken
> 
> How about letting walk_free_mem_block simply return the value returned by its 
> callback (i.e. virtio_balloon_send_free_pages)?
> 
> For host requests to stop, it returns "1", and the above only bails out when
> walk_free_mem_block returns a "< 0" value.

I feel that virtio_balloon_send_free_pages is doing too much heavy work.

It can be called many times with IRQs disabled. The number of times
it is called depends on the amount of free pages (and the fragmentation
state). Generally: more free pages, more calls.

Then, why don't you allocate some pages for holding all the pfn values,
call walk_free_mem_block() only to store the pfn values, and then
send the pfn values without disabling IRQs?
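
A sketch of that alternative, using the int-returning callback convention
from v25 (the names here are hypothetical): the walk only records pfn ranges
under the lock, and the vq work happens afterwards with IRQs enabled.

struct pfn_range {
	unsigned long pfn;
	unsigned long num;
};

struct pfn_buf {
	struct pfn_range *ranges;
	unsigned long used, capacity;
};

static int store_pfn_range(void *opaque, unsigned long pfn,
			   unsigned long num)
{
	struct pfn_buf *buf = opaque;

	if (buf->used == buf->capacity)
		return 1;	/* buffer full: end the walk early */
	buf->ranges[buf->used].pfn = pfn;
	buf->ranges[buf->used].num = num;
	buf->used++;
	return 0;	/* keep walking */
}
/* ...then feed buf->ranges into the vq outside walk_free_mem_block(). */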


Re: [virtio-dev] [PATCH v25 1/2 RESEND] mm: support reporting free page blocks

2018-01-25 Thread Wei Wang

Hi Michal,

On 01/25/2018 05:38 PM, Wei Wang wrote:

This patch adds support to walk through the free page blocks in the
system and report them via a callback function. Some page blocks may
leave the free list after zone->lock is released, so it is the caller's
responsibility to either detect or prevent the use of such pages.

One use example of this patch is to accelerate live migration by skipping
the transfer of free pages reported from the guest. A popular method used
by the hypervisor to track which part of memory is written during live
migration is to write-protect all the guest memory. So, those pages that
are reported as free pages but are written after the report function
returns will be captured by the hypervisor, and they will be added to the
next round of memory transfer.

Signed-off-by: Wei Wang 
Signed-off-by: Liang Li 
Cc: Michal Hocko 
Cc: Michael S. Tsirkin 
Acked-by: Michal Hocko 
---
  include/linux/mm.h |  6 
  mm/page_alloc.c| 96 ++
  2 files changed, 102 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ea818ff..e65ae2e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * 
zones_size,
unsigned long zone_start_pfn, unsigned long *zholes_size);
  extern void free_initmem(void);
  
+extern int walk_free_mem_block(void *opaque,
+  int min_order,
+  int (*report_pfn_range)(void *opaque,
+  unsigned long pfn,
+  unsigned long num));
+
  /*
   * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
   * into the buddy system. The freed pages will be poisoned with pattern
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 76c9688..eda587f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4899,6 +4899,102 @@ void show_free_areas(unsigned int filter, nodemask_t 
*nodemask)
show_swap_cache_info();
  }
  
+/*
+ * Walk through a free page list and report the found pfn range via the
+ * callback.
+ *
+ * Return 0 if it completes the reporting. Otherwise, return the non-zero
+ * value returned from the callback.
+ */
+static int walk_free_page_list(void *opaque,
+  struct zone *zone,
+  int order,
+  enum migratetype mt,
+  int (*report_pfn_range)(void *,
+  unsigned long,
+  unsigned long))
+{
+   struct page *page;
+   struct list_head *list;
+   unsigned long pfn, flags;
+   int ret = 0;
+
+   spin_lock_irqsave(&zone->lock, flags);
+   list = &zone->free_area[order].free_list[mt];
+   list_for_each_entry(page, list, lru) {
+   pfn = page_to_pfn(page);
+   ret = report_pfn_range(opaque, pfn, 1 << order);
+   if (ret)
+   break;
+   }
+   spin_unlock_irqrestore(&zone->lock, flags);
+
+   return ret;
+}
+
+/**
+ * walk_free_mem_block - Walk through the free page blocks in the system
+ * @opaque: the context passed from the caller
+ * @min_order: the minimum order of free lists to check
+ * @report_pfn_range: the callback to report the pfn range of the free pages
+ *
+ * If the callback returns a non-zero value, stop iterating the list of free
+ * page blocks. Otherwise, continue to report.
+ *
+ * Please note that there are no locking guarantees for the callback and
+ * that the reported pfn range might be freed or disappear after the
+ * callback returns so the caller has to be very careful how it is used.
+ *
+ * The callback itself must not sleep or perform any operations which would
+ * require any memory allocations directly (not even GFP_NOWAIT/GFP_ATOMIC)
+ * or via any lock dependency. It is generally advisable to implement
+ * the callback as simple as possible and defer any heavy lifting to a
+ * different context.
+ *
+ * There is no guarantee that each free range will be reported only once
+ * during one walk_free_mem_block invocation.
+ *
+ * pfn_to_page on the given range is strongly discouraged and if there is
+ * an absolute need for that make sure to contact MM people to discuss
+ * potential problems.
+ *
+ * The function itself might sleep so it cannot be called from atomic
+ * contexts.
+ *
+ * In general low orders tend to be very volatile and so it makes more
+ * sense to query larger ones first for various optimizations, like
+ * ballooning, etc. This will reduce the overhead as well.
+ *
+ * Return 0 if it completes the reporting. Otherwise, return the non-zero
+ * value returned from the callback.
+ */
+int 

[PATCH v25 1/2 RESEND] mm: support reporting free page blocks

2018-01-25 Thread Wei Wang
This patch adds support to walk through the free page blocks in the
system and report them via a callback function. Some page blocks may
leave the free list after zone->lock is released, so it is the caller's
responsibility to either detect or prevent the use of such pages.

One use example of this patch is to accelerate live migration by skipping
the transfer of free pages reported from the guest. A popular method used
by the hypervisor to track which part of memory is written during live
migration is to write-protect all the guest memory. So, those pages that
are reported as free pages but are written after the report function
returns will be captured by the hypervisor, and they will be added to the
next round of memory transfer.

Signed-off-by: Wei Wang 
Signed-off-by: Liang Li 
Cc: Michal Hocko 
Cc: Michael S. Tsirkin 
Acked-by: Michal Hocko 
---
 include/linux/mm.h |  6 
 mm/page_alloc.c| 96 ++
 2 files changed, 102 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ea818ff..e65ae2e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * 
zones_size,
unsigned long zone_start_pfn, unsigned long *zholes_size);
 extern void free_initmem(void);
 
+extern int walk_free_mem_block(void *opaque,
+  int min_order,
+  int (*report_pfn_range)(void *opaque,
+  unsigned long pfn,
+  unsigned long num));
+
 /*
  * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
  * into the buddy system. The freed pages will be poisoned with pattern
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 76c9688..eda587f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4899,6 +4899,102 @@ void show_free_areas(unsigned int filter, nodemask_t 
*nodemask)
show_swap_cache_info();
 }
 
+/*
+ * Walk through a free page list and report the found pfn range via the
+ * callback.
+ *
+ * Return 0 if it completes the reporting. Otherwise, return the non-zero
+ * value returned from the callback.
+ */
+static int walk_free_page_list(void *opaque,
+  struct zone *zone,
+  int order,
+  enum migratetype mt,
+  int (*report_pfn_range)(void *,
+  unsigned long,
+  unsigned long))
+{
+   struct page *page;
+   struct list_head *list;
+   unsigned long pfn, flags;
+   int ret = 0;
+
+   spin_lock_irqsave(&zone->lock, flags);
+   list = &zone->free_area[order].free_list[mt];
+   list_for_each_entry(page, list, lru) {
+   pfn = page_to_pfn(page);
+   ret = report_pfn_range(opaque, pfn, 1 << order);
+   if (ret)
+   break;
+   }
+   spin_unlock_irqrestore(&zone->lock, flags);
+
+   return ret;
+}
+
+/**
+ * walk_free_mem_block - Walk through the free page blocks in the system
+ * @opaque: the context passed from the caller
+ * @min_order: the minimum order of free lists to check
+ * @report_pfn_range: the callback to report the pfn range of the free pages
+ *
+ * If the callback returns a non-zero value, stop iterating the list of free
+ * page blocks. Otherwise, continue to report.
+ *
+ * Please note that there are no locking guarantees for the callback and
+ * that the reported pfn range might be freed or disappear after the
+ * callback returns so the caller has to be very careful how it is used.
+ *
+ * The callback itself must not sleep or perform any operations which would
+ * require any memory allocations directly (not even GFP_NOWAIT/GFP_ATOMIC)
+ * or via any lock dependency. It is generally advisable to implement
+ * the callback as simple as possible and defer any heavy lifting to a
+ * different context.
+ *
+ * There is no guarantee that each free range will be reported only once
+ * during one walk_free_mem_block invocation.
+ *
+ * pfn_to_page on the given range is strongly discouraged and if there is
+ * an absolute need for that make sure to contact MM people to discuss
+ * potential problems.
+ *
+ * The function itself might sleep so it cannot be called from atomic
+ * contexts.
+ *
+ * In general low orders tend to be very volatile and so it makes more
+ * sense to query larger ones first for various optimizations, like
+ * ballooning, etc. This will reduce the overhead as well.
+ *
+ * Return 0 if it completes the reporting. Otherwise, return the non-zero
+ * value returned from the callback.
+ */
+int walk_free_mem_block(void *opaque,
+   int 
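
To make the contract concrete, a usage sketch of this API (the caller and
callback are hypothetical, not part of the series):

/* Tally free pages sitting in blocks of order >= 9. */
static int tally_pfn_range(void *opaque, unsigned long pfn,
			   unsigned long num)
{
	unsigned long *total = opaque;

	*total += num;
	return 0;	/* 0: continue the walk */
}

static unsigned long count_large_free_pages(void)
{
	unsigned long total = 0;

	walk_free_mem_block(&total, 9, tally_pfn_range);
	return total;
}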

Re: [PATCH v24 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT

2018-01-25 Thread Wei Wang

On 01/25/2018 01:15 AM, Michael S. Tsirkin wrote:

On Wed, Jan 24, 2018 at 06:42:42PM +0800, Wei Wang wrote:
  


What is this doing? Basically handling the case where the vq is broken?
It's kind of ugly to tweak feature bits; most code assumes they never
change. Please just return an error to the caller instead and handle it
there.

You can then avoid sprinkling the check for the feature bit
all over the code.



One thing I don't like about this one is that the previous request
will still try to run to completion.

And it all seems pretty complex.

How about:
- pass the cmd id to a queued work
- the queued work gets that cmd id, stores a copy, and uses that,
   re-checking periodically - stop if the cmd id changes:
   this will replace report_free_page too, since that's set to
   stop.

This means you do not also reuse the queued cmd id
for the buffer - which is probably for the best.


Thanks for the suggestion. Please check how it's implemented in v25.
Just a reminder that the workqueue internally ensures there is no
re-entrance of the same queued work item.
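
To illustrate the pattern being discussed - a sketch only, with assumed
helper names, not the actual v25 code - the callback used during the free
page walk can compare the snapshotted cmd id against the latest value read
from the device and bail out on a mismatch:

/*
 * Sketch only. The work item keeps using the cmd id it snapshotted
 * into cmd_id_use, while the config-change handler refreshes
 * cmd_id_received; a mismatch means the host asked us to stop.
 */
static int report_pfn_range_sketch(void *opaque, unsigned long pfn,
				   unsigned long num)
{
	struct virtio_balloon *vb = opaque;

	if (READ_ONCE(vb->cmd_id_received) !=
	    virtio32_to_cpu(vb->vdev, vb->cmd_id_use))
		return -EINTR;	/* non-zero return stops the walk */

	add_free_page_hint(vb, pfn, num);	/* hypothetical helper */
	return 0;
}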


Best,
Wei
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[PATCH v25 0/2] Virtio-balloon: support free page reporting

2018-01-25 Thread Wei Wang
This patch series is separated from the previous "Virtio-balloon
Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,  
implemented by this series enables the virtio-balloon driver to report
hints of guest free pages to the host. It can be used to accelerate live
migration of VMs. Here is an introduction of this usage:

Live migration needs to transfer the VM's memory from the source machine
to the destination round by round. For the 1st round, all the VM's memory
is transferred. From the 2nd round, only the pieces of memory that were
written by the guest (after the 1st round) are transferred. A method
commonly used by the hypervisor to track which parts of memory are
written is to write-protect all the guest memory.

This feature enables the optimization of the 1st round memory
transfer - the hypervisor can skip the transfer of guest free pages in the
1st round. It does not matter if those pages are reused by the guest after
they are reported to the hypervisor as free, because they will be tracked
by the hypervisor and transferred in the next round if they are written.
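
For illustration, a hedged sketch of what the hypervisor side could do
with the hints - all names here are hypothetical and not taken from this
series or from any hypervisor:

/*
 * Hypothetical hypervisor-side sketch, not code from this series.
 * Round 1 skips pages the guest hinted as free; later rounds rely on
 * dirty tracking, so a hinted page that was reused is still sent.
 */
static void migrate_round(struct vm *vm, bool first_round)
{
	unsigned long pfn;

	for_each_guest_pfn(vm, pfn) {		/* hypothetical iterator */
		if (first_round && test_bit(pfn, vm->free_page_hints))
			continue;	/* skip reported-free page */
		if (!first_round && !test_and_clear_dirty(vm, pfn))
			continue;	/* clean since last round */
		send_page(vm, pfn);
	}
}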

ChangeLog:
v24->v25:
- mm: change walk_free_mem_block to return 0 (instead of true) on
  completing the report, and return a non-zero value from the
  callback, which stops the reporting.
- virtio-balloon:
- use enum instead of define for VIRTIO_BALLOON_VQ_INFLATE etc.
- avoid __virtio_clear_bit when bailing out;
- a new method to avoid reporting the same cmd id to host twice
- destroy_workqueue can cancel free page work when the feature is
  negotiated;
- fail probe when the free page vq size is less than 2.
v23->v24:
- change feature name VIRTIO_BALLOON_F_FREE_PAGE_VQ to
  VIRTIO_BALLOON_F_FREE_PAGE_HINT
- kick when vq->num_free < half full, instead of "= half full"
- replace BUG_ON with bailing out
- check vb->balloon_wq in probe(), if null, bail out
- add a new feature bit for page poisoning
- solve the corner case of one cmd id being sent to host twice
v22->v23:
- change to kick the device when the vq is half-way full;
- open-code batch_free_page_sg into add_one_sg;
- change cmd_id from "uint32_t" to "__virtio32";
- reserve one entry in the vq for the driver to send cmd_id, instead
  of busy-waiting for an available entry;
- add "stop_update" check before queue_work for prudence purpose for
  now, will have a separate patch to discuss this flag check later;
- init_vqs: change to put some variables on the stack for a simpler
  implementation;
- add destroy_workqueue(vb->balloon_wq);

v21->v22:
- add_one_sg: some code and comment re-arrangement
- send_cmd_id: handle a corner case

For previous ChangeLog, please reference
https://lwn.net/Articles/743660/

Wei Wang (2):
  mm: support reporting free page blocks
  virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT

 drivers/virtio/virtio_balloon.c | 251 ++--
 include/linux/mm.h  |   6 +
 include/uapi/linux/virtio_balloon.h |   7 +
 mm/page_alloc.c |  96 ++
 4 files changed, 324 insertions(+), 36 deletions(-)

-- 
2.7.4

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[PATCH v25 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT

2018-01-25 Thread Wei Wang
Negotiation of the VIRTIO_BALLOON_F_FREE_PAGE_HINT feature indicates the
support of reporting hints of guest free pages to host via virtio-balloon.

Host requests the guest to report free pages by sending a new cmd
id to the guest via the free_page_report_cmd_id configuration register.

When the guest starts to report, the first element added to the free page
vq is the cmd id given by host. When the guest finishes the reporting
of all the free pages, VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID is added
to the vq to tell host that the reporting is done. Host may also request
the guest to stop the reporting in advance by sending the stop cmd id to
the guest via the configuration register.
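
In outline, the flow described above looks roughly like the sketch below.
send_cmd_id and add_one_sg are helpers named in the cover letter's
changelog, but their signatures here are assumptions, as is the wrapper
itself:

/*
 * Sketch of the reporting flow; signatures are assumed. The host's
 * cmd id is the first element on the free page vq, and the stop id
 * terminates the report.
 */
static void report_free_page_sketch(struct virtio_balloon *vb)
{
	vb->cmd_id_use = cpu_to_virtio32(vb->vdev, vb->cmd_id_received);

	/* 1. Lead with the cmd id the host asked for. */
	send_cmd_id(vb, &vb->cmd_id_use);

	/* 2. Walk the free page blocks; each range becomes a vq entry. */
	walk_free_mem_block(vb, 0, add_one_sg);

	/* 3. Signal completion with the stop id. */
	vb->cmd_id_use = cpu_to_virtio32(vb->vdev,
				VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
	send_cmd_id(vb, &vb->cmd_id_use);
}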

Signed-off-by: Wei Wang 
Signed-off-by: Liang Li 
Cc: Michael S. Tsirkin 
Cc: Michal Hocko 
---
 drivers/virtio/virtio_balloon.c | 251 ++--
 include/uapi/linux/virtio_balloon.h |   7 +
 2 files changed, 222 insertions(+), 36 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index a1fb52c..114985b 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -51,9 +51,22 @@ MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
 static struct vfsmount *balloon_mnt;
 #endif
 
+enum virtio_balloon_vq {
+   VIRTIO_BALLOON_VQ_INFLATE,
+   VIRTIO_BALLOON_VQ_DEFLATE,
+   VIRTIO_BALLOON_VQ_STATS,
+   VIRTIO_BALLOON_VQ_FREE_PAGE,
+   VIRTIO_BALLOON_VQ_MAX
+};
+
 struct virtio_balloon {
struct virtio_device *vdev;
-   struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
+   struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *free_page_vq;
+
+   /* Balloon's own wq for cpu-intensive work items */
+   struct workqueue_struct *balloon_wq;
+   /* The free page reporting work item submitted to the balloon wq */
+   struct work_struct report_free_page_work;
 
/* The balloon servicing is delegated to a freezable workqueue. */
struct work_struct update_balloon_stats_work;
@@ -63,6 +76,11 @@ struct virtio_balloon {
spinlock_t stop_update_lock;
bool stop_update;
 
+   /* The new cmd id received from host */
+   uint32_t cmd_id_received;
+   /* The cmd id that is in use */
+   __virtio32 cmd_id_use;
+
/* Waiting for host to ack the pages we released. */
wait_queue_head_t acked;
 
@@ -316,17 +334,6 @@ static void stats_handle_request(struct virtio_balloon *vb)
virtqueue_kick(vq);
 }
 
-static void virtballoon_changed(struct virtio_device *vdev)
-{
-   struct virtio_balloon *vb = vdev->priv;
-   unsigned long flags;
-
-   spin_lock_irqsave(&vb->stop_update_lock, flags);
-   if (!vb->stop_update)
-   queue_work(system_freezable_wq, &vb->update_balloon_size_work);
-   spin_unlock_irqrestore(&vb->stop_update_lock, flags);
-}
-
 static inline s64 towards_target(struct virtio_balloon *vb)
 {
s64 target;
@@ -343,6 +350,34 @@ static inline s64 towards_target(struct virtio_balloon *vb)
return target - vb->num_pages;
 }
 
+static void virtballoon_changed(struct virtio_device *vdev)
+{
+   struct virtio_balloon *vb = vdev->priv;
+   unsigned long flags;
+   s64 diff = towards_target(vb);
+
+   if (diff) {
+   spin_lock_irqsave(&vb->stop_update_lock, flags);
+   if (!vb->stop_update)
+   queue_work(system_freezable_wq,
+  &vb->update_balloon_size_work);
+   spin_unlock_irqrestore(&vb->stop_update_lock, flags);
+   }
+
+   if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
+   virtio_cread(vdev, struct virtio_balloon_config,
+    free_page_report_cmd_id, &vb->cmd_id_received);
+   if (vb->cmd_id_received !=
+   VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID) {
+   spin_lock_irqsave(&vb->stop_update_lock, flags);
+   if (!vb->stop_update)
+   queue_work(vb->balloon_wq,
+  &vb->report_free_page_work);
+   spin_unlock_irqrestore(&vb->stop_update_lock, flags);
+   }
+   }
+}
+
 static void update_balloon_size(struct virtio_balloon *vb)
 {
u32 actual = vb->num_pages;
@@ -417,42 +452,151 @@ static void update_balloon_size_func(struct work_struct *work)
 
 static int init_vqs(struct virtio_balloon *vb)
 {
-   struct virtqueue *vqs[3];
-   vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request };
-   static const char * const names[] = { "inflate", "deflate", "stats" };
-   int err, nvqs;
+   struct virtqueue *vqs[VIRTIO_BALLOON_VQ_MAX];
+   vq_callback_t *callbacks[VIRTIO_BALLOON_VQ_MAX];
+   const char *names[VIRTIO_BALLOON_VQ_MAX];
+   struct scatterlist sg;
+   int ret;
 

[PATCH v25 1/2] mm: support reporting free page blocks

2018-01-25 Thread Wei Wang
This patch adds support to walk through the free page blocks in the
system and report them via a callback function. Some page blocks may
leave the free list after zone->lock is released, so it is the caller's
responsibility to either detect or prevent the use of such pages.

One use example of this patch is to accelerate live migration by skipping
the transfer of free pages reported from the guest. A popular method used
by the hypervisor to track which part of memory is written during live
migration is to write-protect all the guest memory. So, those pages that
are reported as free pages but are written after the report function
returns will be captured by the hypervisor, and they will be added to the
next round of memory transfer.

Signed-off-by: Wei Wang 
Signed-off-by: Liang Li 
Cc: Michal Hocko 
Cc: Michael S. Tsirkin 
Acked-by: Michal Hocko 
---
 include/linux/mm.h |  6 
 mm/page_alloc.c| 96 ++
 2 files changed, 102 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ea818ff..e65ae2e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
unsigned long zone_start_pfn, unsigned long *zholes_size);
 extern void free_initmem(void);
 
+extern int walk_free_mem_block(void *opaque,
+  int min_order,
+  int (*report_pfn_range)(void *opaque,
+  unsigned long pfn,
+  unsigned long num));
+
 /*
  * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
  * into the buddy system. The freed pages will be poisoned with pattern
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 76c9688..0f08039 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4899,6 +4899,102 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
show_swap_cache_info();
 }
 
+/*
+ * Walk through a free page list and report the found pfn range via the
+ * callback.
+ *
+ * Return 0 if it completes the reporting. Otherwise, return the non-zero
+ * value returned from the callback.
+ */
+static int walk_free_page_list(void *opaque,
+  struct zone *zone,
+  int order,
+  enum migratetype mt,
+  int (*report_pfn_range)(void *,
+  unsigned long,
+  unsigned long))
+{
+   struct page *page;
+   struct list_head *list;
+   unsigned long pfn, flags;
+   int ret = 0;
+
+   spin_lock_irqsave(&zone->lock, flags);
+   list = &zone->free_area[order].free_list[mt];
+   list_for_each_entry(page, list, lru) {
+   pfn = page_to_pfn(page);
+   ret = report_pfn_range(opaque, pfn, 1 << order);
+   if (ret)
+   break;
+   }
+   spin_unlock_irqrestore(&zone->lock, flags);
+
+   return ret;
+}
+
+/**
+ * walk_free_mem_block - Walk through the free page blocks in the system
+ * @opaque: the context passed from the caller
+ * @min_order: the minimum order of free lists to check
+ * @report_pfn_range: the callback to report the pfn range of the free pages
+ *
+ * If the callback returns a non-zero value, stop iterating the list of free
+ * page blocks. Otherwise, continue to report.
+ *
+ * Please note that there are no locking guarantees for the callback and
+ * that the reported pfn range might be freed or disappear after the
+ * callback returns so the caller has to be very careful how it is used.
+ *
+ * The callback itself must not sleep or perform any operations which would
+ * require any memory allocations directly (not even GFP_NOWAIT/GFP_ATOMIC)
+ * or via any lock dependency. It is generally advisable to implement
+ * the callback as simple as possible and defer any heavy lifting to a
+ * different context.
+ *
+ * There is no guarantee that each free range will be reported only once
+ * during one walk_free_mem_block invocation.
+ *
+ * pfn_to_page on the given range is strongly discouraged and if there is
+ * an absolute need for that make sure to contact MM people to discuss
+ * potential problems.
+ *
+ * The function itself might sleep so it cannot be called from atomic
+ * contexts.
+ *
+ * In general low orders tend to be very volatile and so it makes more
+ * sense to query larger ones first for various optimizations like
+ * ballooning etc... This will reduce the overhead as well.
+ *
+ * Return 0 if it completes the reporting. Otherwise, return the non-zero
+ * value returned from the callback.
+ */
+int walk_free_mem_block(void *opaque,
+  int min_order,
+  int (*report_pfn_range)(void *opaque,
+  unsigned long pfn,
+  unsigned long num))
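
For reference, a minimal sketch of a caller-side callback that obeys the
constraints documented above - no sleeping, no allocation, only recording
the range and stopping early via a non-zero return (all names are
illustrative):

/* Illustrative bounded buffer for collected ranges. */
struct hint_buf {
	struct { unsigned long pfn, num; } ranges[64];
	unsigned int nr;
};

/* Callback doing minimal work: record the range or stop the walk. */
static int record_free_range(void *opaque, unsigned long pfn,
			     unsigned long num)
{
	struct hint_buf *buf = opaque;

	if (buf->nr == ARRAY_SIZE(buf->ranges))
		return 1;	/* non-zero: stop the walk early */

	buf->ranges[buf->nr].pfn = pfn;
	buf->ranges[buf->nr].num = num;
	buf->nr++;
	return 0;
}

/* Usage: ret = walk_free_mem_block(&buf, 0, record_free_range); */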