RE: [PATCH v2] virtio-blk: add DISCARD support to virtio-blk driver
> -----Original Message-----
> From: Paolo Bonzini [mailto:pbonz...@redhat.com]
> Sent: Tuesday, July 4, 2017 5:24 PM
> To: Liu, Changpeng ; virtualization@lists.linux-foundation.org
> Cc: stefa...@gmail.com; h...@lst.de; m...@redhat.com
> Subject: Re: [PATCH v2] virtio-blk: add DISCARD support to virtio-blk driver
>
> On 05/07/2017 10:44, Changpeng Liu wrote:
> > Currently the virtio-blk driver does not provide the discard feature flag,
> > so filesystems built on top of the block device will not send discard
> > commands. This is okay for an HDD backend, but it hurts performance for
> > an SSD backend.
> >
> > Add a feature flag VIRTIO_BLK_F_DISCARD and a command VIRTIO_BLK_T_DISCARD
> > to extend the existing virtio-blk protocol, and define a 16-byte discard
> > descriptor for each discard segment. The discard segment definition aligns
> > with the SCSI and NVM Express protocols, so the virtio-blk driver will
> > support multi-range discard requests as well.
> >
> > Signed-off-by: Changpeng Liu
>
> Please include a patch for the specification.

Thanks Paolo, do you mean include a text file which describes the changes for the specification?

> Since we are at it, I would like to have three operations defined using
> the same descriptor:
>
> - discard (SCSI UNMAP)
>
> - write zeroes (SCSI WRITE SAME without UNMAP flag)
>
> - write zeroes and possibly discard (SCSI WRITE SAME with UNMAP flag)
>
> The last two can use the same command VIRTIO_BLK_T_WRITE_ZEROES, using
> the reserved field as a flags field.

Will add the write zeroes feature.
>
> Paolo
>
> > ---
> >  drivers/block/virtio_blk.c      | 76 ++++++++++++++++++++++++++++++++++---
> >  include/uapi/linux/virtio_blk.h | 19 +++++++
> >  2 files changed, 92 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> > index 0297ad7..8f0c614 100644
> > --- a/drivers/block/virtio_blk.c
> > +++ b/drivers/block/virtio_blk.c
> > @@ -172,10 +172,52 @@ static int virtblk_add_req(struct virtqueue *vq, struct virtblk_req *vbr,
> >  	return virtqueue_add_sgs(vq, sgs, num_out, num_in, vbr, GFP_ATOMIC);
> >  }
> >
> > +static inline int virtblk_setup_discard(struct request *req)
> > +{
> > +	unsigned short segments = blk_rq_nr_discard_segments(req), n = 0;
> > +	u32 block_size = queue_logical_block_size(req->q);
> > +	struct virtio_blk_discard *range;
> > +	struct bio *bio;
> > +
> > +	if (block_size < 512 || !block_size)
> > +		return -1;
> > +
> > +	range = kmalloc_array(segments, sizeof(*range), GFP_ATOMIC);
> > +	if (!range)
> > +		return -1;
> > +
> > +	__rq_for_each_bio(bio, req) {
> > +		u64 slba = (bio->bi_iter.bi_sector << 9) / block_size;
> > +		u32 nlb = bio->bi_iter.bi_size / block_size;
> > +
> > +		range[n].reserved = cpu_to_le32(0);
> > +		range[n].nlba = cpu_to_le32(nlb);
> > +		range[n].slba = cpu_to_le64(slba);
> > +		n++;
> > +	}
> > +
> > +	if (WARN_ON_ONCE(n != segments)) {
> > +		kfree(range);
> > +		return -1;
> > +	}
> > +
> > +	req->special_vec.bv_page = virt_to_page(range);
> > +	req->special_vec.bv_offset = offset_in_page(range);
> > +	req->special_vec.bv_len = sizeof(*range) * segments;
> > +	req->rq_flags |= RQF_SPECIAL_PAYLOAD;
> > +
> > +	return 0;
> > +}
> > +
> >  static inline void virtblk_request_done(struct request *req)
> >  {
> >  	struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
> >
> > +	if (req->rq_flags & RQF_SPECIAL_PAYLOAD) {
> > +		kfree(page_address(req->special_vec.bv_page) +
> > +		      req->special_vec.bv_offset);
> > +	}
> > +
> >  	switch (req_op(req)) {
> >  	case REQ_OP_SCSI_IN:
> >  	case REQ_OP_SCSI_OUT:
> > @@ -237,6 +279,9 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
> >  	case REQ_OP_FLUSH:
> >  		type = VIRTIO_BLK_T_FLUSH;
> >  		break;
> > +	case REQ_OP_DISCARD:
> > +		type = VIRTIO_BLK_T_DISCARD;
> > +		break;
> >  	case REQ_OP_SCSI_IN:
> >  	case REQ_OP_SCSI_OUT:
> >  		type = VIRTIO_BLK_T_SCSI_CMD;
> > @@ -256,9 +301,15 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
> >
> >  	blk_mq_start_request(req);
> >
> > +	if (type == VIRTIO_BLK_T_DISCARD) {
> > +		err = virtblk_setup_discard(req);
> > +		if (err)
> > +			return BLK_STS_IOERR;
> > +	}
> > +
> >  	num = blk_rq_map_sg(hctx->queue, req, vbr->sg);
> >  	if (num) {
> > -		if (rq_data_dir(req) == WRITE)
> > +		if (rq_data_dir(req) == WRITE || type == VIRTIO_BLK_T_DISCARD)
> >  			vbr->out_hdr.type |= cpu_to_virtio32(vblk->vdev, VIRTIO_BLK_T_OUT);
> >  		else
> >  			vbr->out_hdr.type |= cpu_to_virtio32(vblk->vdev, VIRTIO_BLK_T_IN);
> > @@ -767,6 +818,25 @@ static int virtblk_probe(struct virtio_device *vdev)
> >  	if (!err && opt_io_size)
Re: [PATCH v2] virtio-blk: add DISCARD support to virtio-blk driver
On 05/07/2017 09:57, Liu, Changpeng wrote:
>> Please include a patch for the specification. Since we are at it, I
>
> Thanks Paolo, do you mean include a text file which describe the changes
> for the specification?

The specification is hosted in an svn (Subversion) repository at
https://tools.oasis-open.org/version-control/svn/virtio. You can provide
a patch and send it to virtio-comm...@lists.oasis-open.org.

Thanks,

Paolo

>> would like to have three operations defined using the same descriptor:
>>
>> - discard (SCSI UNMAP)
>>
>> - write zeroes (SCSI WRITE SAME without UNMAP flag)
>>
>> - write zeroes and possibly discard (SCSI WRITE SAME with UNMAP flag)
>>
>> The last two can use the same command VIRTIO_BLK_T_WRITE_ZEROES, using
>> the reserved field as a flags field.
>
> Will add write zeroes feature.

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
Re: [PATCH v2] virtio-blk: add DISCARD support to virtio-blk driver
On Wed, Jul 05, 2017 at 07:57:07AM +, Liu, Changpeng wrote:
> > -----Original Message-----
> > From: Paolo Bonzini [mailto:pbonz...@redhat.com]
> > Sent: Tuesday, July 4, 2017 5:24 PM
> > To: Liu, Changpeng ; virtualization@lists.linux-foundation.org
> > Cc: stefa...@gmail.com; h...@lst.de; m...@redhat.com
> > Subject: Re: [PATCH v2] virtio-blk: add DISCARD support to virtio-blk driver
> >
> > On 05/07/2017 10:44, Changpeng Liu wrote:
> > > Currently the virtio-blk driver does not provide the discard feature
> > > flag, so filesystems built on top of the block device will not send
> > > discard commands. This is okay for an HDD backend, but it hurts
> > > performance for an SSD backend.
> > >
> > > Add a feature flag VIRTIO_BLK_F_DISCARD and a command VIRTIO_BLK_T_DISCARD
> > > to extend the existing virtio-blk protocol, and define a 16-byte discard
> > > descriptor for each discard segment. The discard segment definition
> > > aligns with the SCSI and NVM Express protocols, so the virtio-blk driver
> > > will support multi-range discard requests as well.
> > >
> > > Signed-off-by: Changpeng Liu
> >
> > Please include a patch for the specification. Since we are at it, I
>
> Thanks Paolo, do you mean include a text file which describes the changes
> for the specification?

Paolo answered that. But please also CC the code patch to
virtio-comm...@lists.oasis-open.org. This is a subscriber-only list, so
please subscribe beforehand:
https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=virtio#feedback

--
MST
Re: [Intel-gfx] [PATCH v3 00/16] improve the fb_setcmap helper
On Wed, Jul 05, 2017 at 10:09:21AM +0200, Peter Rosin wrote:
> On 2017-07-05 08:08, Daniel Vetter wrote:
> > On Tue, Jul 04, 2017 at 12:36:56PM +0200, Peter Rosin wrote:
> >> Hi!
> >>
> >> While trying to get CLUT support for the atmel_hlcdc driver, and
> >> specifically for the emulated fbdev interface, I received some
> >> push-back that my feeble in-driver attempts should be solved
> >> by the core. This is my attempt to do it right.
> >>
> >> I have obviously not tested all of this with more than a compile,
> >> but patches 1 through 5 are enough to make the atmel-hlcdc driver
> >> do what I need. The rest is just lots of removals and cleanup made
> >> possible by the improved core.
> >>
> >> Please test, I would not be surprised if I have fouled up some
> >> bit-manipulation somewhere, or if I have misunderstood something
> >> about atomics...
> >>
> >> Changes since v2:
> >> - Added patch 1/16, which factors out pseudo-palette handling.
> >> - Removed the if (cmap->start + cmap->len < cmap->start)
> >>   sanity check on the assumption that the fbdev core handles it.
> >> - Added patch 4/16, which factors out atomic state and commit
> >>   handling from drm_atomic_helper_legacy_gamma_set to
> >>   drm_mode_gamma_set_ioctl.
> >> - Do one atomic commit for all affected crtcs.
> >> - Removed a now obsolete note in include/drm/drm_crtc.h (amended
> >>   the last patch).
> >> - The Cc list is getting long, so I have reduced the list for the
> >>   individual patches. If you would like to get the full series
> >>   (or nothing at all) for the next round (if that is needed), just
> >>   say so.
> >
> > Is this still on top of my locking rework? I tried to apply patches 1-3,
> > but there's minor conflicts ...
> > -Daniel
>
> v3 has the same base as v2. I collected your locking rework sometime
> after June 21, you have perhaps changed things since? I saw an update
> of that dpms patch you Cc'd me on, but figured there were no significant
> changes that I needed to handle, since I didn't get the full set
> this time either. A bad assumption, it seems...

There's a difference between my own private patches and the maintainer
repo where stuff gets applied. But that explains why there was a
conflict. I plan to merge my locking changes tomorrow (they're reviewed
and ready now), and I'll apply your patches after that. That should take
care of the conflicts.

Thanks, Daniel

> Anyway, the base I have for v3 (and v2) is linux next-20170621 plus
> the following locking rework commits (in reverse order):
>
> Author: Thierry Reding
> Date: Wed Jun 21 20:28:15 2017 +0200
> Subject: drm/hisilicon: Remove custom FB helper deferred setup
>
> Author: Thierry Reding
> Date: Wed Jun 21 20:28:14 2017 +0200
> Subject: drm/exynos: Remove custom FB helper deferred setup
>
> Author: Thierry Reding
> Date: Wed Jun 21 20:28:13 2017 +0200
> Subject: drm/fb-helper: Support deferred setup
>
> Author: Daniel Vetter
> Date: Wed Jun 21 20:28:12 2017 +0200
> Subject: drm/fb-helper: Split dpms handling into legacy and atomic paths
>
> Author: Daniel Vetter
> Date: Wed Jun 21 20:28:11 2017 +0200
> Subject: drm/fb-helper: Stop using mode_config.mutex for internals
>
> Author: Daniel Vetter
> Date: Wed Jun 21 20:28:10 2017 +0200
> Subject: drm/fb-helper: Push locking into restore_fbdev_mode_atomic|legacy
>
> Author: Daniel Vetter
> Date: Wed Jun 21 20:28:09 2017 +0200
> Subject: drm/fb-helper: Push locking into pan_display_atomic|legacy
>
> Author: Daniel Vetter
> Date: Wed Jun 21 20:28:08 2017 +0200
> Subject: drm/fb-helper: Drop locking from the vsync wait ioctl code
>
> Author: Daniel Vetter
> Date: Wed Jun 21 20:28:07 2017 +0200
> Subject: drm/fb-helper: Push locking in fb_is_bound
>
> Author: Thierry Reding
> Date: Wed Jun 21 20:28:06 2017 +0200
> Subject: drm/fb-helper: Add top-level lock
>
> Author: Daniel Vetter
> Date: Wed Jun 21 20:28:05 2017 +0200
> Subject: drm/i915: Drop FBDEV #ifdev in mst code
>
> Author: Thierry Reding
> Date: Wed Jun 21 20:28:04 2017 +0200
> Subject: drm/fb-helper: Push down modeset lock into FB helpers
>
> Cheers,
> peda
>
> ___
> Intel-gfx mailing list
> intel-...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
Re: [PATCH net] virtio-net: unbreak csumed packet for small buffer XDP
On Tue, Jul 04, 2017 at 08:20:00PM +0800, Jason Wang wrote:
> > IIUC XDP generally refuses to attach if checksum offload
> > is enabled.
>
> Any reason to do this? (Looks like I don't see any code for this)

Some of it was covered here:
https://www.mail-archive.com/netdev@vger.kernel.org/msg162577.html

--
MST