___ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization
Re: [PATCH v24 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
On 01/25/2018 01:15 AM, Michael S. Tsirkin wrote:
> On Wed, Jan 24, 2018 at 06:42:42PM +0800, Wei Wang wrote:
>> +
>> +static void report_free_page_func(struct work_struct *work)
>> +{
>> +	struct virtio_balloon *vb;
>> +	unsigned long flags;
>> +
>> +	vb = container_of(work, struct virtio_balloon, report_free_page_work);
>> +
>> +	/* Start by sending the obtained cmd id to the host with an outbuf */
>> +	send_cmd_id(vb, &vb->start_cmd_id);
>> +
>> +	/*
>> +	 * Set start_cmd_id to VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID to
>> +	 * indicate a new request can be queued.
>> +	 */
>> +	spin_lock_irqsave(&vb->stop_update_lock, flags);
>> +	vb->start_cmd_id = cpu_to_virtio32(vb->vdev,
>> +				VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
>> +	spin_unlock_irqrestore(&vb->stop_update_lock, flags);
>> +
>> +	walk_free_mem_block(vb, 0, virtio_balloon_send_free_pages);
>
> Can you teach walk_free_mem_block to return the && of all the return
> values, so the caller knows whether it completed?

There are two cases that can cause walk_free_mem_block to return without
completing:
1) the host requests to stop in advance;
2) vq->broken.

How about letting walk_free_mem_block simply return the value returned by
its callback (i.e. virtio_balloon_send_free_pages)? For a host request to
stop, it returns "1", and the caller above bails out only when
walk_free_mem_block returns a value < 0.

Best,
Wei
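The return-value convention Wei proposes can be sketched in plain userspace C. This is only an illustration of the control flow, not the kernel implementation: `struct free_block` and `demo_report` are hypothetical stand-ins for the buddy free lists and the virtio-balloon callback.

```c
#include <stddef.h>

/* Hypothetical stand-in for a free page block. */
struct free_block { unsigned long pfn; unsigned long num; };

/*
 * Sketch of the proposed convention: the walker returns whatever its
 * callback last returned, instead of void.  The callback returns
 *   0  - continue walking,
 *   1  - host asked to stop (benign early exit),
 *   <0 - error (e.g. a broken vq), so the caller should bail out.
 */
static int walk_free_mem_block(const struct free_block *blocks, size_t n,
                               void *opaque,
                               int (*report)(void *opaque,
                                             unsigned long pfn,
                                             unsigned long num))
{
    int ret = 0;
    size_t i;

    for (i = 0; i < n; i++) {
        ret = report(opaque, blocks[i].pfn, blocks[i].num);
        if (ret)        /* stop request (1) or error (<0) */
            break;
    }
    return ret;
}

/* Demo callback: counts reports; once two blocks have been reported it
 * returns the value stashed in *opaque (1 = stop, <0 = error). */
static int reported;
static int demo_report(void *opaque, unsigned long pfn, unsigned long num)
{
    (void)pfn; (void)num;
    return reported++ >= 1 ? *(int *)opaque : 0;
}
```

With this shape the caller can distinguish "host asked to stop" (a clean early return of 1) from a real error (negative), which is exactly the distinction the thread is after.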
Re: [PATCH net 1/2] vhost: use mutex_lock_nested() in vhost_dev_lock_vqs()
From: "Michael S. Tsirkin"
Date: Wed, 24 Jan 2018 23:46:19 +0200

> On Wed, Jan 24, 2018 at 04:38:30PM -0500, David Miller wrote:
>> From: Jason Wang
>> Date: Tue, 23 Jan 2018 17:27:25 +0800
>>
>>> We used to call mutex_lock() in vhost_dev_lock_vqs() which tries to
>>> hold mutexes of all virtqueues. This may confuse lockdep to report a
>>> possible deadlock because of trying to hold locks belonging to the
>>> same class. Switch to mutex_lock_nested() to avoid the false positive.
>>>
>>> Fixes: 6b1e6cc7855b0 ("vhost: new device IOTLB API")
>>> Reported-by: syzbot+dbb7c1161485e61b0...@syzkaller.appspotmail.com
>>> Signed-off-by: Jason Wang
>>
>> Michael, I see you ACK'd this, meaning that you're OK with these two
>> fixes going via my net tree?
>>
>> Thanks.
>
> Yes - this seems to be what Jason wanted (judging by the net
> tag in the subject) and I'm fine with it.
> Thanks a lot.

Great, not a problem, done.
Re: [PATCH net 1/2] vhost: use mutex_lock_nested() in vhost_dev_lock_vqs()
On Wed, Jan 24, 2018 at 04:38:30PM -0500, David Miller wrote:
> From: Jason Wang
> Date: Tue, 23 Jan 2018 17:27:25 +0800
>
>> We used to call mutex_lock() in vhost_dev_lock_vqs() which tries to
>> hold mutexes of all virtqueues. This may confuse lockdep to report a
>> possible deadlock because of trying to hold locks belonging to the
>> same class. Switch to mutex_lock_nested() to avoid the false positive.
>>
>> Fixes: 6b1e6cc7855b0 ("vhost: new device IOTLB API")
>> Reported-by: syzbot+dbb7c1161485e61b0...@syzkaller.appspotmail.com
>> Signed-off-by: Jason Wang
>
> Michael, I see you ACK'd this, meaning that you're OK with these two
> fixes going via my net tree?
>
> Thanks.

Yes - this seems to be what Jason wanted (judging by the net
tag in the subject) and I'm fine with it.
Thanks a lot.

--
MST
Re: [PATCH net 1/2] vhost: use mutex_lock_nested() in vhost_dev_lock_vqs()
From: Jason Wang
Date: Tue, 23 Jan 2018 17:27:25 +0800

> We used to call mutex_lock() in vhost_dev_lock_vqs() which tries to
> hold mutexes of all virtqueues. This may confuse lockdep to report a
> possible deadlock because of trying to hold locks belonging to the
> same class. Switch to mutex_lock_nested() to avoid the false positive.
>
> Fixes: 6b1e6cc7855b0 ("vhost: new device IOTLB API")
> Reported-by: syzbot+dbb7c1161485e61b0...@syzkaller.appspotmail.com
> Signed-off-by: Jason Wang

Michael, I see you ACK'd this, meaning that you're OK with these two
fixes going via my net tree?

Thanks.
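The annotation being discussed is the kernel's `mutex_lock_nested(lock, subclass)`: every virtqueue mutex belongs to the same lock class, so a loop that takes all of them looks to lockdep like a self-deadlock unless each acquisition carries a distinct subclass. The sketch below is a userspace model only: `struct mutex` and `mutex_lock_nested` are stubs that record what the kernel primitive would be told, and the loop mirrors the shape of the patched `vhost_dev_lock_vqs()` (the exact subclass scheme in the real patch is not reproduced here).

```c
/* Userspace sketch of the vhost fix.  The kernel APIs are stubbed just
 * to show the calling pattern: the loop index doubles as the lockdep
 * subclass, so lockdep can tell the same-class mutexes apart. */

#define VHOST_MAX_VQS 4

struct mutex { int locked; unsigned int subclass; };

/* Stub: real kernel code would block here; we only record the subclass
 * annotation that lockdep would see. */
static void mutex_lock_nested(struct mutex *m, unsigned int subclass)
{
    m->locked = 1;
    m->subclass = subclass;
}

struct vhost_virtqueue { struct mutex mutex; };
struct vhost_dev { struct vhost_virtqueue vqs[VHOST_MAX_VQS]; int nvqs; };

/* Mirrors the shape of the patched vhost_dev_lock_vqs(). */
static void vhost_dev_lock_vqs(struct vhost_dev *d)
{
    int i;

    for (i = 0; i < d->nvqs; ++i)
        mutex_lock_nested(&d->vqs[i].mutex, i);
}
```

The point of the fix is purely an annotation change: runtime locking behavior is unchanged, but lockdep stops reporting a false positive.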
Re: [PATCH v24 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
On Wed, Jan 24, 2018 at 06:42:42PM +0800, Wei Wang wrote:
> Negotiation of the VIRTIO_BALLOON_F_FREE_PAGE_HINT feature indicates the
> support of reporting hints of guest free pages to the host via
> virtio-balloon.
>
> The host requests the guest to report free pages by sending a new cmd id
> to the guest via the free_page_report_cmd_id configuration register.
>
> When the guest starts to report, the first element added to the free page
> vq is the cmd id given by the host. When the guest finishes the reporting
> of all the free pages, VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID is added
> to the vq to tell the host that the reporting is done. The host may also
> request the guest to stop the reporting in advance by sending the stop
> cmd id to the guest via the configuration register.
>
> Signed-off-by: Wei Wang
> Signed-off-by: Liang Li
> Cc: Michael S. Tsirkin
> Cc: Michal Hocko
> ---
>  drivers/virtio/virtio_balloon.c     | 265 +++-
>  include/uapi/linux/virtio_balloon.h |   7 +
>  2 files changed, 236 insertions(+), 36 deletions(-)
>
> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> index a1fb52c..4440873 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -51,9 +51,21 @@ MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
>  static struct vfsmount *balloon_mnt;
>  #endif
>
> +/* The number of virtqueues supported by virtio-balloon */
> +#define VIRTIO_BALLOON_VQ_NUM           4
> +#define VIRTIO_BALLOON_VQ_ID_INFLATE    0
> +#define VIRTIO_BALLOON_VQ_ID_DEFLATE    1
> +#define VIRTIO_BALLOON_VQ_ID_STATS      2
> +#define VIRTIO_BALLOON_VQ_ID_FREE_PAGE  3
> +

Please do an enum instead of defines. VQ_ID can be just VQ (it's not an
ID, it's just the number).

>  struct virtio_balloon {
>      struct virtio_device *vdev;
> -    struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
> +    struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *free_page_vq;
> +
> +    /* Balloon's own wq for cpu-intensive work items */
> +    struct workqueue_struct *balloon_wq;
> +    /* The free page reporting work item submitted to the balloon wq */
> +    struct work_struct report_free_page_work;
>
>      /* The balloon servicing is delegated to a freezable workqueue. */
>      struct work_struct update_balloon_stats_work;
> @@ -63,6 +75,13 @@ struct virtio_balloon {
>      spinlock_t stop_update_lock;
>      bool stop_update;
>
> +    /* Start to report free pages */
> +    bool report_free_page;
> +    /* Stores the cmd id given by host to start the free page reporting */
> +    __virtio32 start_cmd_id;
> +    /* Stores STOP_ID as a sign to tell host that the reporting is done */
> +    __virtio32 stop_cmd_id;
> +
>      /* Waiting for host to ack the pages we released. */
>      wait_queue_head_t acked;
>
> @@ -281,6 +300,53 @@ static unsigned int update_balloon_stats(struct virtio_balloon *vb)
>      return idx;
>  }
>
> +static int add_one_sg(struct virtqueue *vq, unsigned long pfn, uint32_t len)
> +{
> +    struct scatterlist sg;
> +    unsigned int unused;
> +    int ret = 0;
> +
> +    sg_init_table(&sg, 1);
> +    sg_set_page(&sg, pfn_to_page(pfn), len, 0);
> +
> +    /* Detach all the used buffers from the vq */
> +    while (virtqueue_get_buf(vq, &unused))
> +        ;
> +
> +    /*
> +     * Since this is an optimization feature, losing a couple of free
> +     * pages to report isn't important. We simply return without adding
> +     * the page if the vq is full.
> +     * We are adding one entry each time, which essentially results in no
> +     * memory allocation, so the GFP_KERNEL flag below can be ignored.
> +     * There is always one entry reserved for the cmd id to use.
> +     */
> +    if (vq->num_free > 1)
> +        ret = virtqueue_add_inbuf(vq, &sg, 1, vq, GFP_KERNEL);
> +
> +    if (vq->num_free < virtqueue_get_vring_size(vq) / 2)
> +        virtqueue_kick(vq);
> +
> +    return ret;
> +}
> +
> +static void send_cmd_id(struct virtio_balloon *vb, __virtio32 *cmd_id)
> +{
> +    struct scatterlist sg;
> +    struct virtqueue *vq = vb->free_page_vq;
> +
> +    if (unlikely(!virtio_has_feature(vb->vdev,
> +                                     VIRTIO_BALLOON_F_FREE_PAGE_HINT)))
> +        return;
> +
> +    sg_init_one(&sg, cmd_id, sizeof(*cmd_id));
> +
> +    if (virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL))
> +        __virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT);

What is this doing? Basically handling the case where the vq is broken?
It's kind of ugly to tweak feature bits; most code assumes they never
change. Please just return an error to the caller instead and handle it
there. You can then avoid sprinkling the check for the feature bit all
over the code.

> +
> +    virtqueue_kick(vq);
> +}
> +
>  /*
>   * While most virtqueues communicate
Re: [virtio-dev] Re: [PATCH v22 2/3] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_VQ
On 01/24/2018 12:29 PM, Michael S. Tsirkin wrote:
> On Mon, Jan 22, 2018 at 07:25:45PM +0800, Wei Wang wrote:
>> On 01/19/2018 08:39 PM, Michael S. Tsirkin wrote:
>>> On Fri, Jan 19, 2018 at 11:44:21AM +0800, Wei Wang wrote:
>>>> On 01/18/2018 12:44 AM, Michael S. Tsirkin wrote:
>>>>> On Wed, Jan 17, 2018 at 01:10:11PM +0800, Wei Wang wrote:
>>>>>> +        vb->start_cmd_id = cmd_id;
>>>>>> +        queue_work(vb->balloon_wq, &vb->report_free_page_work);
>>>>>
>>>>> It seems that if a command was already queued (with a different id),
>>>>> this will result in the new command id being sent to the host twice,
>>>>> which will likely confuse the host.
>>>>
>>>> I think that case won't happen, because
>>>> - the host sends a cmd id to the guest via the config, while the
>>>>   guest acks back the received cmd id via the virtqueue;
>>>> - the guest acks back a cmd id only when a new cmd id is received
>>>>   from the host, that is, the check below:
>>>>
>>>>   if (cmd_id != vb->start_cmd_id) {
>>>>       --> the driver queues the reporting work only when a new
>>>>           cmd id is received
>>>>       /*
>>>>        * Host requests to start the reporting by sending a
>>>>        * new cmd id.
>>>>        */
>>>>       WRITE_ONCE(vb->report_free_page, true);
>>>>       vb->start_cmd_id = cmd_id;
>>>>       queue_work(vb->balloon_wq, &vb->report_free_page_work);
>>>>   }
>>>>
>>>> So the same cmd id wouldn't queue the reporting work twice.
>>>
>>> Like this:
>>>
>>>   vb->start_cmd_id = cmd_id;
>>>   queue_work(vb->balloon_wq, &vb->report_free_page_work);
>>>   command id changes
>>>   vb->start_cmd_id = cmd_id;
>>>   work executes
>>>   queue_work(vb->balloon_wq, &vb->report_free_page_work);
>>>   work executes again
>>
>> If we think about the whole working flow, I think this case couldn't
>> happen:
>> 1) the device sends cmd_id=1 to the driver;
>> 2) the driver receives cmd_id=1 in the config and acks cmd_id=1 to
>>    the device via the vq;
>> 3) the device receives cmd_id=1;
>> 4) the device wants to stop the reporting by sending cmd_id=STOP;
>> 5) the driver receives cmd_id=STOP from the config, and acks
>>    cmd_id=STOP to the device via the vq;
>> 6) the device sends cmd_id=2 to the driver;
>> ...
>> cmd_id=2 won't come right after cmd_id=1; there will be a STOP cmd in
>> between them (STOP won't queue the work).
>>
>> How about defining the correct device behavior in the spec:
>> the device should NOT send a second cmd id to the driver until a STOP
>> cmd ack for the previous cmd id has been received from the guest.
>>
>> Best,
>> Wei
>
> I think we should just fix races in the driver rather than introduce
> random restrictions in the device. If the device wants to start a new
> sequence, it should be able to do just that without a complicated back
> and forth with several roundtrips through the driver.

OK, I've fixed it in the new version, v24. Please have a check there.
Thanks. (Other changes based on the comments on v23 have also been
included.)

Best,
Wei
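The duplicate-queueing question above comes down to a small piece of state-machine logic in the config-change handler. This is a toy userspace model of that logic, not the driver itself: `struct balloon_state`, `config_changed`, and the STOP id value are all illustrative names, and `work_queued` merely counts what would be `queue_work()` calls.

```c
/* Toy model of the config-change handling discussed above: the driver
 * queues the reporting work only when the cmd id read from the config
 * space differs from the one it already holds, and a STOP id never
 * queues work.  Names and values are illustrative, not the driver's. */

#define FREE_PAGE_REPORT_STOP_ID 0u

struct balloon_state {
    unsigned int start_cmd_id;  /* last cmd id accepted from the host */
    int work_queued;            /* stands in for queue_work() calls   */
};

static void config_changed(struct balloon_state *vb, unsigned int cmd_id)
{
    if (cmd_id == FREE_PAGE_REPORT_STOP_ID) {
        vb->start_cmd_id = cmd_id;      /* stop: no work queued */
        return;
    }
    if (cmd_id != vb->start_cmd_id) {
        vb->start_cmd_id = cmd_id;
        vb->work_queued++;              /* queue_work(...) */
    }
}
```

Under Wei's assumption that a STOP always separates two start ids, each new sequence queues the work exactly once; Michael's objection is that the device should not have to guarantee that ordering.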
[PATCH v24 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
Negotiation of the VIRTIO_BALLOON_F_FREE_PAGE_HINT feature indicates the
support of reporting hints of guest free pages to the host via
virtio-balloon.

The host requests the guest to report free pages by sending a new cmd id
to the guest via the free_page_report_cmd_id configuration register.

When the guest starts to report, the first element added to the free page
vq is the cmd id given by the host. When the guest finishes the reporting
of all the free pages, VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID is added
to the vq to tell the host that the reporting is done. The host may also
request the guest to stop the reporting in advance by sending the stop
cmd id to the guest via the configuration register.

Signed-off-by: Wei Wang
Signed-off-by: Liang Li
Cc: Michael S. Tsirkin
Cc: Michal Hocko
---
 drivers/virtio/virtio_balloon.c     | 265 +++-
 include/uapi/linux/virtio_balloon.h |   7 +
 2 files changed, 236 insertions(+), 36 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index a1fb52c..4440873 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -51,9 +51,21 @@ MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
 static struct vfsmount *balloon_mnt;
 #endif
 
+/* The number of virtqueues supported by virtio-balloon */
+#define VIRTIO_BALLOON_VQ_NUM           4
+#define VIRTIO_BALLOON_VQ_ID_INFLATE    0
+#define VIRTIO_BALLOON_VQ_ID_DEFLATE    1
+#define VIRTIO_BALLOON_VQ_ID_STATS      2
+#define VIRTIO_BALLOON_VQ_ID_FREE_PAGE  3
+
 struct virtio_balloon {
     struct virtio_device *vdev;
-    struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
+    struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *free_page_vq;
+
+    /* Balloon's own wq for cpu-intensive work items */
+    struct workqueue_struct *balloon_wq;
+    /* The free page reporting work item submitted to the balloon wq */
+    struct work_struct report_free_page_work;
 
     /* The balloon servicing is delegated to a freezable workqueue. */
     struct work_struct update_balloon_stats_work;
@@ -63,6 +75,13 @@ struct virtio_balloon {
     spinlock_t stop_update_lock;
     bool stop_update;
 
+    /* Start to report free pages */
+    bool report_free_page;
+    /* Stores the cmd id given by host to start the free page reporting */
+    __virtio32 start_cmd_id;
+    /* Stores STOP_ID as a sign to tell host that the reporting is done */
+    __virtio32 stop_cmd_id;
+
     /* Waiting for host to ack the pages we released. */
     wait_queue_head_t acked;
 
@@ -281,6 +300,53 @@ static unsigned int update_balloon_stats(struct virtio_balloon *vb)
     return idx;
 }
 
+static int add_one_sg(struct virtqueue *vq, unsigned long pfn, uint32_t len)
+{
+    struct scatterlist sg;
+    unsigned int unused;
+    int ret = 0;
+
+    sg_init_table(&sg, 1);
+    sg_set_page(&sg, pfn_to_page(pfn), len, 0);
+
+    /* Detach all the used buffers from the vq */
+    while (virtqueue_get_buf(vq, &unused))
+        ;
+
+    /*
+     * Since this is an optimization feature, losing a couple of free
+     * pages to report isn't important. We simply return without adding
+     * the page if the vq is full.
+     * We are adding one entry each time, which essentially results in no
+     * memory allocation, so the GFP_KERNEL flag below can be ignored.
+     * There is always one entry reserved for the cmd id to use.
+     */
+    if (vq->num_free > 1)
+        ret = virtqueue_add_inbuf(vq, &sg, 1, vq, GFP_KERNEL);
+
+    if (vq->num_free < virtqueue_get_vring_size(vq) / 2)
+        virtqueue_kick(vq);
+
+    return ret;
+}
+
+static void send_cmd_id(struct virtio_balloon *vb, __virtio32 *cmd_id)
+{
+    struct scatterlist sg;
+    struct virtqueue *vq = vb->free_page_vq;
+
+    if (unlikely(!virtio_has_feature(vb->vdev,
+                                     VIRTIO_BALLOON_F_FREE_PAGE_HINT)))
+        return;
+
+    sg_init_one(&sg, cmd_id, sizeof(*cmd_id));
+
+    if (virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL))
+        __virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT);
+
+    virtqueue_kick(vq);
+}
+
 /*
  * While most virtqueues communicate guest-initiated requests to the hypervisor,
  * the stats queue operates in reverse. The driver initializes the virtqueue
@@ -316,17 +382,6 @@ static void stats_handle_request(struct virtio_balloon *vb)
     virtqueue_kick(vq);
 }
 
-static void virtballoon_changed(struct virtio_device *vdev)
-{
-    struct virtio_balloon *vb = vdev->priv;
-    unsigned long flags;
-
-    spin_lock_irqsave(&vb->stop_update_lock, flags);
-    if (!vb->stop_update)
-        queue_work(system_freezable_wq, &vb->update_balloon_size_work);
-    spin_unlock_irqrestore(&vb->stop_update_lock, flags);
-}
-
[PATCH v24 0/2] Virtio-balloon: support free page reporting
This patch series is separated from the previous "Virtio-balloon
Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,
implemented by this series enables the virtio-balloon driver to report
hints of guest free pages to the host. It can be used to accelerate live
migration of VMs. Here is an introduction of this usage:

Live migration needs to transfer the VM's memory from the source machine
to the destination round by round. In the 1st round, all of the VM's
memory is transferred. From the 2nd round on, only the pieces of memory
that were written by the guest (after the previous round) are
transferred. One method popularly used by the hypervisor to track which
parts of memory are written is to write-protect all the guest memory.

This feature enables an optimization of the 1st round of memory
transfer: the hypervisor can skip the transfer of guest free pages in
the 1st round. It does not matter that the pages may be used after they
are given to the hypervisor as hints of free pages, because they will be
tracked by the hypervisor and transferred in the next round if they are
used and written.
ChangeLog:

v23->v24:
- change the feature name from VIRTIO_BALLOON_F_FREE_PAGE_VQ to
  VIRTIO_BALLOON_F_FREE_PAGE_HINT
- kick when vq->num_free < half full, instead of "= half full"
- replace BUG_ON with bailing out
- check vb->balloon_wq in probe(); if null, bail out
- add a new feature bit for page poisoning
- solve the corner case of one cmd id being sent to the host twice

v22->v23:
- change to kick the device when the vq is half-way full;
- open-code batch_free_page_sg into add_one_sg;
- change cmd_id from "uint32_t" to "__virtio32";
- reserve one entry in the vq for the driver to send cmd_id, instead of
  busywaiting for an available entry;
- add a "stop_update" check before queue_work, for prudence for now;
  will have a separate patch to discuss this flag check later;
- init_vqs: change to put some variables on the stack for a simpler
  implementation;
- add destroy_workqueue(vb->balloon_wq);

v21->v22:
- add_one_sg: some code and comment re-arrangement
- send_cmd_id: handle a corner case

For previous ChangeLogs, please see https://lwn.net/Articles/743660/

Wei Wang (2):
  mm: support reporting free page blocks
  virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT

 drivers/virtio/virtio_balloon.c     | 265 +++-
 include/linux/mm.h                  |   6 +
 include/uapi/linux/virtio_balloon.h |   7 +
 mm/page_alloc.c                     |  91 +
 4 files changed, 333 insertions(+), 36 deletions(-)

--
2.7.4
[PATCH v24 1/2] mm: support reporting free page blocks
This patch adds support to walk through the free page blocks in the
system and report them via a callback function. Some page blocks may
leave the free list after zone->lock is released, so it is the caller's
responsibility to either detect or prevent the use of such pages.

One use example of this patch is to accelerate live migration by skipping
the transfer of free pages reported by the guest. A popular method used
by the hypervisor to track which parts of memory are written during live
migration is to write-protect all the guest memory. So, those pages that
are reported as free pages but are written after the report function
returns will be captured by the hypervisor, and they will be added to the
next round of memory transfer.

Signed-off-by: Wei Wang
Signed-off-by: Liang Li
Cc: Michal Hocko
Cc: Michael S. Tsirkin
Acked-by: Michal Hocko
---
 include/linux/mm.h |  6
 mm/page_alloc.c    | 91 ++
 2 files changed, 97 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ea818ff..b3077dd 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
         unsigned long zone_start_pfn, unsigned long *zholes_size);
 extern void free_initmem(void);
 
+extern void walk_free_mem_block(void *opaque,
+                int min_order,
+                bool (*report_pfn_range)(void *opaque,
+                         unsigned long pfn,
+                         unsigned long num));
+
 /*
  * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
  * into the buddy system. The freed pages will be poisoned with pattern
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 76c9688..705de22 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
     show_swap_cache_info();
 }
 
+/*
+ * Walk through a free page list and report the found pfn range via the
+ * callback.
+ *
+ * Return false if the callback requests to stop reporting. Otherwise,
+ * return true.
+ */
+static bool walk_free_page_list(void *opaque,
+                struct zone *zone,
+                int order,
+                enum migratetype mt,
+                bool (*report_pfn_range)(void *,
+                         unsigned long,
+                         unsigned long))
+{
+    struct page *page;
+    struct list_head *list;
+    unsigned long pfn, flags;
+    bool ret;
+
+    spin_lock_irqsave(&zone->lock, flags);
+    list = &zone->free_area[order].free_list[mt];
+    list_for_each_entry(page, list, lru) {
+        pfn = page_to_pfn(page);
+        ret = report_pfn_range(opaque, pfn, 1 << order);
+        if (!ret)
+            break;
+    }
+    spin_unlock_irqrestore(&zone->lock, flags);
+
+    return ret;
+}
+
+/**
+ * walk_free_mem_block - Walk through the free page blocks in the system
+ * @opaque: the context passed from the caller
+ * @min_order: the minimum order of free lists to check
+ * @report_pfn_range: the callback to report the pfn range of the free pages
+ *
+ * If the callback returns false, stop iterating the list of free page blocks.
+ * Otherwise, continue to report.
+ *
+ * Please note that there are no locking guarantees for the callback and
+ * that the reported pfn range might be freed or disappear after the
+ * callback returns so the caller has to be very careful how it is used.
+ *
+ * The callback itself must not sleep or perform any operations which would
+ * require any memory allocations directly (not even GFP_NOWAIT/GFP_ATOMIC)
+ * or via any lock dependency. It is generally advisable to implement
+ * the callback as simple as possible and defer any heavy lifting to a
+ * different context.
+ *
+ * There is no guarantee that each free range will be reported only once
+ * during one walk_free_mem_block invocation.
+ *
+ * pfn_to_page on the given range is strongly discouraged and if there is
+ * an absolute need for that make sure to contact MM people to discuss
+ * potential problems.
+ *
+ * The function itself might sleep so it cannot be called from atomic
+ * contexts.
+ *
+ * In general low orders tend to be very volatile and so it makes more
+ * sense to query larger ones first for various optimizations which like
+ * ballooning etc... This will reduce the overhead as well.
+ */
+void walk_free_mem_block(void *opaque,
+             int min_order,
+             bool (*report_pfn_range)(void *opaque,
+                          unsigned long pfn,
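The control flow of walk_free_page_list above (stop as soon as the callback returns false) can be modeled in userspace. This is a hedged sketch only: the array of pfns stands in for a zone free list, and zone->lock, struct page, and the migratetype dimension are deliberately out of scope; `record_three` is a hypothetical demo callback.

```c
#include <stdbool.h>

/* Userspace sketch of walk_free_page_list()'s control flow: iterate a
 * "free list" (an array here) and stop as soon as the callback returns
 * false, returning the callback's last answer like the kernel function. */
static bool walk_free_page_list(const unsigned long *pfns, int n,
                                void *opaque,
                                bool (*report_pfn_range)(void *opaque,
                                                         unsigned long pfn,
                                                         unsigned long num))
{
    bool ret = true;
    int i;

    for (i = 0; i < n; i++) {
        ret = report_pfn_range(opaque, pfns[i], 1);
        if (!ret)
            break;          /* callback asked to stop reporting */
    }
    return ret;
}

/* Demo callback: record reported pfns into opaque, stop after three. */
struct rec { unsigned long seen[8]; int n; };

static bool record_three(void *opaque, unsigned long pfn, unsigned long num)
{
    struct rec *r = opaque;

    (void)num;
    r->seen[r->n++] = pfn;
    return r->n < 3;
}
```

As the kernel-doc above stresses, the real callback runs with zone->lock held, so it must not sleep or allocate; a consumer like virtio-balloon defers the heavy lifting to a workqueue.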