Re: [PATCH] virtio_scsi: always read VPD pages for multiqueue too

2017-07-11 Thread Stefan Hajnoczi
On Wed, Jul 05, 2017 at 10:30:56AM +0200, Paolo Bonzini wrote:
> Multi-queue virtio-scsi uses a different scsi_host_template struct.
> Add the .device_alloc field there, too.
> 
> Fixes: 25d1d50e23275e141e3a3fe06c25a99f4c4bf4e0
> Cc: sta...@vger.kernel.org
> Cc: David Gibson <da...@gibson.dropbear.id.au>
> Signed-off-by: Paolo Bonzini <pbonz...@redhat.com>
> ---
>  drivers/scsi/virtio_scsi.c | 1 +
>  1 file changed, 1 insertion(+)

Reviewed-by: Stefan Hajnoczi <stefa...@redhat.com>


signature.asc
Description: PGP signature


Re: [PATCH] virtio_scsi: Always try to read VPD pages

2017-04-13 Thread Stefan Hajnoczi
On Thu, Apr 13, 2017 at 12:13:00PM +1000, David Gibson wrote:
> @@ -705,6 +706,28 @@ static int virtscsi_device_reset(struct scsi_cmnd *sc)
>   return virtscsi_tmf(vscsi, cmd);
>  }
>  
> +static int virtscsi_device_alloc(struct scsi_device *sdevice)
> +{
> + /*
> +  * Passed through SCSI targets (e.g. with qemu's 'scsi-block')
> +  * may have transfer limits which come from the host SCSI
> +  * controller something on the host side other than the target

s/controller something/controller or something/ ?

> +  * itself.
> +  *
> +  * To make this work properly, the hypervisor can adjust the
> +  * target's VPD information to advertise these limits.  But
> +  * for that to work, the guest has to look at the VPD pages,
> +  * which we won't do by default if it is an SPC-2 device, even
> +  * if it does actually support it.
> +  *
> +  * So, set the blist to always try to read the VPD pages.
> +  */
> + sdevice->sdev_bflags = BLIST_TRY_VPD_PAGES;
> +
> + return 0;
> +}

Looks good to me.  Not a SCSI expert but I checked
scsi_device_supports_vpd() callers and this seems sane.




Re: [PATCH 10/10] virtio: enable endian checks for sparse builds

2016-12-07 Thread Stefan Hajnoczi
On Tue, Dec 06, 2016 at 05:41:05PM +0200, Michael S. Tsirkin wrote:
> __CHECK_ENDIAN__ isn't on by default presumably because
> it triggers too many sparse warnings for correct code.
> But virtio is now clean of these warnings, and
> we want to keep it this way - enable this for
> sparse builds.
> 
> Signed-off-by: Michael S. Tsirkin <m...@redhat.com>
> ---
> 
> It seems that there should be a better way to do it,
> but this works too.
> 
>  drivers/block/Makefile  | 1 +
>  drivers/char/Makefile   | 1 +
>  drivers/char/hw_random/Makefile | 2 ++
>  drivers/gpu/drm/virtio/Makefile | 1 +
>  drivers/net/Makefile| 3 +++
>  drivers/net/caif/Makefile   | 1 +
>  drivers/rpmsg/Makefile  | 1 +
>  drivers/s390/virtio/Makefile| 2 ++
>  drivers/scsi/Makefile   | 1 +
>  drivers/vhost/Makefile  | 1 +
>  drivers/virtio/Makefile | 3 +++
>  net/9p/Makefile | 1 +
>  net/packet/Makefile | 1 +
>  net/vmw_vsock/Makefile  | 2 ++
>  14 files changed, 21 insertions(+)

Reviewed-by: Stefan Hajnoczi <stefa...@redhat.com>
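Each per-directory hunk is presumably a one- or two-line Makefile addition of this shape (a sketch inferred from the cover letter, not the verbatim patch):

```make
# Tell sparse (make C=1 / C=2) to treat __le16/__le32/__le64 and the
# __virtioNN types as distinct from native integers in this directory.
ccflags-y += -D__CHECK_ENDIAN__
```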




Application error handling with write-back caching

2016-05-10 Thread Stefan Hajnoczi
SBC-3 4.15.3 Write caching says:

"If processing a write command results in logical block data in cache
that is different from the logical block data on the medium, then the
device server shall retain that logical block data in cache until a
write medium operation is performed using that logical block data."

Does "is performed" mean "completes successfully" or just "completes"?

If "is performed" just means "completes", maybe with an error, the
application would have to resubmit write requests and then try to flush
the write cache again.

I'm not aware of applications that keep acknowledged write data around
until the cache flush completion in order to retry writes.

Can anyone clarify the SBC spec on this point?

Thanks,
Stefan




Re: [RFC 0/2] target: userspace pass-through backend

2014-07-14 Thread Stefan Hajnoczi
On Tue, Jul 1, 2014 at 9:11 PM, Andy Grover agro...@redhat.com wrote:
 Shaohua Li wrote an initial implementation of this, late last year[1].
 Starting from that, I started working on some alternate implementation
 choices, and ended up with something rather different.

 Please take a look and let me know what you think. Patch 1 is a
 design and overview doc, and patch 2 is the actual code, along with
 implementation rationale.

 Thanks -- Andy

 [1] http://thread.gmane.org/gmane.linux.scsi.target.devel/5044

 Andy Grover (2):
   target: Add documentation on the target userspace pass-through driver
   target: Add a user-passthrough backstore

  Documentation/target/tcmu-design.txt   |  210 +++
  drivers/target/Kconfig |5 +
  drivers/target/Makefile|1 +
  drivers/target/target_core_transport.c |4 +
  drivers/target/target_core_user.c  | 1078 

  drivers/target/target_core_user.c  | 1078 ++++++++++++++++++++++++++++++++++
  drivers/target/target_core_user.h  |  126 ++++
  6 files changed, 1424 insertions(+)
  create mode 100644 Documentation/target/tcmu-design.txt
  create mode 100644 drivers/target/target_core_user.c
  create mode 100644 drivers/target/target_core_user.h

Hi Andy,
Just wanted to let you know that a userspace backstore would
potentially be useful for QEMU.  QEMU supports a number of disk image
formats (VMDK, VHDX, qcow2, and more).  Making these available as SCSI
LUNs on the host or to remote SCSI initiators would be cool.

We currently have a tool called qemu-nbd that exports disk images
using the Network Block Device protocol.  Your userspace backstore
provides other options like iSCSI target or loopback access on the
host.

I took a quick look at the patch and imagine it's not hard to hook up
to QEMU.  Looks promising!

Stefan
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Fwd: Re: [PATCH v3 3/6] virtio-scsi: avoid cancelling uninitialized work items

2014-06-11 Thread Stefan Hajnoczi
On Wed, Jun 11, 2014 at 02:53:46PM +0200, Paolo Bonzini wrote:
  Messaggio originale 
 From: Christoph Hellwig h...@infradead.org
 To: Paolo Bonzini pbonz...@redhat.com
 Cc: linux-ker...@vger.kernel.org, linux-scsi@vger.kernel.org, h...@lst.de,
 jbottom...@parallels.com, venkate...@google.com
 Subject: Re: [PATCH v3 3/6] virtio-scsi: avoid cancelling uninitialized work 
 items
 Message-ID: 20140611124731.ga16...@infradead.org
 In-Reply-To: 1401881699-1456-4-git-send-email-pbonz...@redhat.com
 
 Can I get a second review on this one from anyone?

Reviewed-by: Stefan Hajnoczi stefa...@redhat.com

 On Wed, Jun 04, 2014 at 01:34:56PM +0200, Paolo Bonzini wrote:
  Calling the workqueue interface on uninitialized work items isn't a
  good idea even if they're zeroed. It's not failing catastrophically only
  through happy accidents.
  
  Signed-off-by: Paolo Bonzini pbonz...@redhat.com
  ---
   drivers/scsi/virtio_scsi.c | 4 +++-
   1 file changed, 3 insertions(+), 1 deletion(-)
  
  diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
  index f0b4cdbfceb0..d66c4ee2c774 100644
  --- a/drivers/scsi/virtio_scsi.c
  +++ b/drivers/scsi/virtio_scsi.c
  @@ -253,6 +253,8 @@ static void virtscsi_ctrl_done(struct virtqueue *vq)
   virtscsi_vq_done(vscsi, &vscsi->ctrl_vq, virtscsi_complete_free);
   };
   
  +static void virtscsi_handle_event(struct work_struct *work);
  +
   static int virtscsi_kick_event(struct virtio_scsi *vscsi,
 struct virtio_scsi_event_node *event_node)
   {
  @@ -260,6 +262,7 @@ static int virtscsi_kick_event(struct virtio_scsi 
  *vscsi,
  struct scatterlist sg;
  unsigned long flags;
   
   +   INIT_WORK(&event_node->work, virtscsi_handle_event);
   sg_init_one(&sg, &event_node->event, sizeof(struct virtio_scsi_event));
    
   spin_lock_irqsave(&vscsi->event_vq.vq_lock, flags);
  @@ -377,7 +380,6 @@ static void virtscsi_complete_event(struct virtio_scsi 
  *vscsi, void *buf)
   {
  struct virtio_scsi_event_node *event_node = buf;
   
   -   INIT_WORK(&event_node->work, virtscsi_handle_event);
   schedule_work(&event_node->work);
   }
   
  -- 
  1.8.3.1
 




Re: [Qemu-devel] Testing NPIV Feature with Qemu-KVM

2013-09-05 Thread Stefan Hajnoczi
On Mon, Sep 2, 2013 at 2:34 PM, chandrashekar shastri
cshas...@oc2505588478.ibm.com wrote:
 I am testing NPIV feature on upstream Qemu, I have configured the zone
 and able to see the created vport on the storage array.

 Since, I am learning on how to setup the NPIV, I haven't created the 
 different zone for
 the vport and the array, I just added in the existing zone.

 Now, how do I pass the LUN to qemu? From Dr. Hannes Reinecke's mail thread I
 got to know that lspci command on the host doesn't show the virtual HBA.

 I didn't understand why there is limitation on that and if I specify
 /usr/libexec/qemu-kvm -enable-kvm Fedora19 -m 3000 -smp 2 -net nic -net
 \ user -vnc 127.0.0.1:0 -drive if=scsi,file=/dev/sdj

 How do I make sure that qemu is using the virtual HBA or (vport)?

From my limited knowledge of NPIV, after you create the vport on the
host you'll have a new SCSI host which scans LUNs.  That means new
SCSI devices appear on the host.

You can use ls -al /sys/class/scsi_host to see the SCSI hosts that are active.

You can use virsh nodedev-list --tree to see the details of the devices.

This should help you find the NPIV LUNs which can be passed to QEMU.
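Concretely, the inspection steps look something like this (host5 and the WWPN/WWNN are placeholders; the vport_create sysfs attribute requires an NPIV-capable FC HBA, so it is shown only as a comment):

```shell
# Active SCSI hosts; a new hostN entry appears after vport creation.
ls -al /sys/class/scsi_host 2>/dev/null || true

# Creating the vport on an NPIV-capable FC HBA is done through sysfs
# (placeholders -- do not run blindly):
#   echo '<wwpn>:<wwnn>' > /sys/class/fc_host/host5/vport_create

# Device tree as libvirt sees it, including the new SCSI host:
virsh nodedev-list --tree 2>/dev/null || true
```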

Stefan


Re: [PATCH v2 1/5] virtio: add functions for piecewise addition of buffers

2012-12-19 Thread Stefan Hajnoczi
On Tue, Dec 18, 2012 at 01:32:48PM +0100, Paolo Bonzini wrote:
 +/**
 + * virtqueue_start_buf - start building buffer for the other end
 + * @vq: the struct virtqueue we're talking about.
 + * @buf: a struct keeping the state of the buffer
 + * @data: the token identifying the buffer.
 + * @count: the number of buffers that will be added

Perhaps count should be named count_bufs or num_bufs.

 + * @count_sg: the number of sg lists that will be added

What is the purpose of count_sg?

Stefan


Re: [PATCH v2 5/5] virtio-scsi: introduce multiqueue support

2012-12-19 Thread Stefan Hajnoczi
On Tue, Dec 18, 2012 at 01:32:52PM +0100, Paolo Bonzini wrote:
  struct virtio_scsi_target_state {
 - /* Never held at the same time as vq_lock.  */
 + /* This spinlock ever held at the same time as vq_lock.  */

s/ever/is never/


Re: [PATCH v2 1/5] virtio: add functions for piecewise addition of buffers

2012-12-19 Thread Stefan Hajnoczi
On Wed, Dec 19, 2012 at 1:04 PM, Paolo Bonzini pbonz...@redhat.com wrote:
 Il 19/12/2012 11:47, Stefan Hajnoczi ha scritto:
 On Tue, Dec 18, 2012 at 01:32:48PM +0100, Paolo Bonzini wrote:
 What is the purpose of count_sg?

 It is needed to decide whether to use an indirect or a direct buffer.
 The idea is to avoid a memory allocation if the driver is providing us
 with separate sg elements (under the assumption that they will be few).

Ah, this makes sense now.  I saw it affects the decision whether to go
indirect or not but it wasn't obvious why.

 Originally I wanted to use a mix of direct and indirect buffer (direct
 if add_buf received a one-element scatterlist, otherwise indirect).  It
 would have had the same effect, without having to specify count_sg in
 advance.  The spec is not clear if that is allowed or not, but in the
 end they do not work with either QEMU or vhost, so I chose this
 alternative instead.

Okay.

Stefan


Re: [PATCH 0/5] Multiqueue virtio-scsi

2012-08-30 Thread Stefan Hajnoczi
On Tue, Aug 28, 2012 at 01:54:12PM +0200, Paolo Bonzini wrote:
 this series adds multiqueue support to the virtio-scsi driver, based
 on Jason Wang's work on virtio-net.  It uses a simple queue steering
 algorithm that expects one queue per CPU.  LUNs in the same target always
 use the same queue (so that commands are not reordered); queue switching
 occurs when the request being queued is the only one for the target.
 Also based on Jason's patches, the virtqueue affinity is set so that
 each CPU is associated to one virtqueue.

Reviewed-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com


Re: virtio-scsi - vhost multi lun/adapter performance results with 3.6-rc0

2012-08-11 Thread Stefan Hajnoczi
On Sat, Aug 11, 2012 at 12:23 AM, Nicholas A. Bellinger
n...@linux-iscsi.org wrote:
 Using a KVM guest with 32x vCPUs and 4G memory, the results for 4x
 random I/O now look like:

 workload | jobs | 25% write / 75% read | 75% write / 25% read
 -----------------|------|----------------------|---------------------
 1x rd_mcp LUN|   8  | ~155K IOPs   |  ~145K IOPs
 16x rd_mcp LUNs  |  16  | ~315K IOPs   |  ~305K IOPs
 32x rd_mcp LUNs  |  16  | ~425K IOPs   |  ~410K IOPs

 The full fio randrw results for the six test cases are attached below.
 Also, using a workload of fio numjobs > 16 currently makes performance
 start to fall off pretty sharply regardless of the number of vCPUs..

 So running a similar workload with loopback SCSI ports on bare-metal
 produces ~1M random IOPs with 12x LUNs + numjobs=32.  At numjobs=16 here
 with vhost the 16x LUN configuration ends up being in the range of ~310K
 IOPs for the current sweet spot..

This makes me wonder what a comparison against baremetal looks like
and the perf top, mpstat, and kvm_stat results on the host.

Stefan
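For the host-side numbers, the usual tools would be run alongside the guest fio workload, something like (flags are illustrative; each command is meant to be run manually in its own terminal):

```shell
# Suggested host-side views while the guest fio workload runs:
#
#   mpstat -P ALL 1   # per-CPU utilization on the host
#   perf top          # hottest host kernel/userspace symbols
#   kvm_stat          # VM exit and interrupt injection rates

# Quick check that the tooling is installed:
for tool in mpstat perf kvm_stat; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```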


Re: [PATCH] scsi: virtio-scsi: Fix address translation failure of HighMem pages used by sg list

2012-07-25 Thread Stefan Hajnoczi
On Wed, Jul 25, 2012 at 04:00:19PM +0800, Wang Sen wrote:
 When using the commands below to write some data to a virtio-scsi LUN of the 
 QEMU guest(32-bit) with 1G physical memory(qemu -m 1024), the qemu will crash.
 
   # sudo mkfs.ext4 /dev/sdb  (/dev/sdb is the virtio-scsi LUN.)
   # sudo mount /dev/sdb /mnt
   # dd if=/dev/zero of=/mnt/file bs=1M count=1024
 
 In current implementation, sg_set_buf is called to add buffers to sg list 
 which
 is put into the virtqueue eventually. But there are some HighMem pages in 
  table->sgl can not get virtual address by sg_virt. So, sg_virt(sg_elem) may
 return NULL value. This will cause QEMU exit when virtqueue_map_sg is called 
 in QEMU because an invalid GPA is passed by virtqueue.
 
 My solution is using sg_set_page instead of sg_set_buf.
 
 I have tested the patch on my workstation. QEMU would not crash any more.
 
 Signed-off-by: Wang Sen senw...@linux.vnet.ibm.com
 ---
  drivers/scsi/virtio_scsi.c |3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)

Reviewed-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com



Re: [PATCH] scsi: virtio-scsi: Fix address translation failure of HighMem pages used by sg list

2012-07-25 Thread Stefan Hajnoczi
On Wed, Jul 25, 2012 at 10:44:14AM +0200, Paolo Bonzini wrote:
 Il 25/07/2012 10:29, Wang Sen ha scritto:
  When using the commands below to write some data to a virtio-scsi LUN of 
  the 
  QEMU guest(32-bit) with 1G physical memory(qemu -m 1024), the qemu will 
  crash.
  
  # sudo mkfs.ext4 /dev/sdb  (/dev/sdb is the virtio-scsi LUN.)
  # sudo mount /dev/sdb /mnt
  # dd if=/dev/zero of=/mnt/file bs=1M count=1024
  
  In current implementation, sg_set_buf is called to add buffers to sg list 
  which
  is put into the virtqueue eventually. But there are some HighMem pages in 
   table->sgl can not get virtual address by sg_virt. So, sg_virt(sg_elem) may
  return NULL value. This will cause QEMU exit when virtqueue_map_sg is 
  called 
  in QEMU because an invalid GPA is passed by virtqueue.
 
 Heh, I was compiling (almost) the same patch as we speak. :)
 
 I've never seen QEMU crash; the VM would more likely just fail to boot
 with a panic.  But it's the same bug anyway.

It's not a segfault crash, I think it hits an abort(3) in QEMU's
virtio code when trying to map an invalid guest physical address.

Stefan



Re: [PATCH] tcm_vhost: Expose ABI version via VHOST_SCSI_GET_ABI_VERSION

2012-07-25 Thread Stefan Hajnoczi
On Tue, Jul 24, 2012 at 01:45:24PM -0700, Nicholas A. Bellinger wrote:
 On Mon, 2012-07-23 at 18:56 -0700, Greg Kroah-Hartman wrote:
  On Tue, Jul 24, 2012 at 01:26:20AM +, Nicholas A. Bellinger wrote:
   From: Nicholas Bellinger n...@linux-iscsi.org
   
   As requested by Anthony, here is a patch against 
   target-pending/for-next-merge
   to expose an ABI version to userspace via a new VHOST_SCSI_GET_ABI_VERSION
   ioctl operation.
   
   As mentioned in the comment, ABI Rev 0 is for pre 2012 out-of-tree code, 
   and
    ABI Rev 1 (the current rev) is for current WIP v3.6 kernel merge candidate 
   code.
   
   I think this is what you had in mind, and hopefully it will make MST 
   happy too.
   The incremental vhost-scsi patches against Zhi's QEMU are going out 
   shortly ahead
   of cutting a new vhost-scsi RFC over the next days.
   
   Please have a look and let me know if you have any concerns here.
   
   Thanks!
   
   Reported-by: Anthony Liguori aligu...@us.ibm.com
   Cc: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
   Cc: Michael S. Tsirkin m...@redhat.com
   Cc: Paolo Bonzini pbonz...@redhat.com
   Cc: Zhi Yong Wu wu...@linux.vnet.ibm.com
   Signed-off-by: Nicholas Bellinger n...@linux-iscsi.org
   ---
drivers/vhost/tcm_vhost.c |9 +
drivers/vhost/tcm_vhost.h |   11 +++
2 files changed, 20 insertions(+), 0 deletions(-)
   
 
 SNIP
 
   diff --git a/drivers/vhost/tcm_vhost.h b/drivers/vhost/tcm_vhost.h
   index e942df9..3d5378f 100644
   --- a/drivers/vhost/tcm_vhost.h
   +++ b/drivers/vhost/tcm_vhost.h
   @@ -80,7 +80,17 @@ struct tcm_vhost_tport {

 #include <linux/vhost.h>

   +/*
   + * Used by QEMU userspace to ensure a consistent vhost-scsi ABI.
   + *
   + * ABI Rev 0: All pre 2012 revisions used by prototype out-of-tree code
    + * ABI Rev 1: 2012 version for v3.6 kernel merge candidate
   + */
   +
   +#define VHOST_SCSI_ABI_VERSION   1
   +
struct vhost_scsi_target {
   + int abi_version;
 unsigned char vhost_wwpn[TRANSPORT_IQN_LEN];
 unsigned short vhost_tpgt;
};
   @@ -88,3 +98,4 @@ struct vhost_scsi_target {
/* VHOST_SCSI specific defines */
 #define VHOST_SCSI_SET_ENDPOINT _IOW(VHOST_VIRTIO, 0x40, struct vhost_scsi_target)
 #define VHOST_SCSI_CLEAR_ENDPOINT _IOW(VHOST_VIRTIO, 0x41, struct vhost_scsi_target)
    +#define VHOST_SCSI_GET_ABI_VERSION _IOW(VHOST_VIRTIO, 0x42, struct vhost_scsi_target)
  
  No, you just broke the ABI for version 0 here, that's not how you do
  this at all.
  
 
 The intention of this patch is use ABI=1 as a starting point for
 tcm_vhost moving forward, with no back-wards compat for the ABI=0
 prototype userspace code because:
 
 - It's based on a slightly older version of QEMU (updating the QEMU series 
 now)
 - It does not have an GET_ABI_VERSION ioctl cmd (that starts with ABI=1)
 - It has a small user-base of target + virtio-scsi developers
 
 So I did consider just starting from ABI=0, but figured this would help
 reduce the confusion for QEMU userspace wrt to the vhost-scsi code
 that's been floating around out-of-tree for the last 2 years.

There is no real user base beyond the handful of people who have hacked
on this.  Adding the GET_ABI_VERSION ioctl() at this stage is fine,
especially considering that the userspace code that talks to tcm_vhost
isn't in mainline in userspace yet either.

Stefan



Re: [PATCH] tcm_vhost: Expose ABI version via VHOST_SCSI_GET_ABI_VERSION

2012-07-25 Thread Stefan Hajnoczi
On Wed, Jul 25, 2012 at 02:14:50PM -0700, Nicholas A. Bellinger wrote:
 On Wed, 2012-07-25 at 12:55 +0100, Stefan Hajnoczi wrote:
  On Tue, Jul 24, 2012 at 01:45:24PM -0700, Nicholas A. Bellinger wrote:
   On Mon, 2012-07-23 at 18:56 -0700, Greg Kroah-Hartman wrote:
On Tue, Jul 24, 2012 at 01:26:20AM +, Nicholas A. Bellinger wrote:
 From: Nicholas Bellinger n...@linux-iscsi.org
 
 SNIP
 
   
 diff --git a/drivers/vhost/tcm_vhost.h b/drivers/vhost/tcm_vhost.h
 index e942df9..3d5378f 100644
 --- a/drivers/vhost/tcm_vhost.h
 +++ b/drivers/vhost/tcm_vhost.h
 @@ -80,7 +80,17 @@ struct tcm_vhost_tport {
  
   #include <linux/vhost.h>
  
 +/*
 + * Used by QEMU userspace to ensure a consistent vhost-scsi ABI.
 + *
  + * ABI Rev 0: All pre 2012 revisions used by prototype out-of-tree code
  + * ABI Rev 1: 2012 version for v3.6 kernel merge candidate
 + */
 +
 +#define VHOST_SCSI_ABI_VERSION   1
 +
  struct vhost_scsi_target {
 + int abi_version;
   unsigned char vhost_wwpn[TRANSPORT_IQN_LEN];
   unsigned short vhost_tpgt;
  };
 @@ -88,3 +98,4 @@ struct vhost_scsi_target {
  /* VHOST_SCSI specific defines */
   #define VHOST_SCSI_SET_ENDPOINT _IOW(VHOST_VIRTIO, 0x40, struct vhost_scsi_target)
   #define VHOST_SCSI_CLEAR_ENDPOINT _IOW(VHOST_VIRTIO, 0x41, struct vhost_scsi_target)
  +#define VHOST_SCSI_GET_ABI_VERSION _IOW(VHOST_VIRTIO, 0x42, struct vhost_scsi_target)

No, you just broke the ABI for version 0 here, that's not how you do
this at all.

   
   The intention of this patch is use ABI=1 as a starting point for
   tcm_vhost moving forward, with no back-wards compat for the ABI=0
   prototype userspace code because:
   
   - It's based on a slightly older version of QEMU (updating the QEMU 
   series now)
   - It does not have an GET_ABI_VERSION ioctl cmd (that starts with ABI=1)
   - It has a small user-base of target + virtio-scsi developers
   
   So I did consider just starting from ABI=0, but figured this would help
   reduce the confusion for QEMU userspace wrt to the vhost-scsi code
   that's been floating around out-of-tree for the last 2 years.
  
  There is no real user base beyond the handful of people who have hacked
  on this.  Adding the GET_ABI_VERSION ioctl() at this stage is fine,
  especially considering that the userspace code that talks to tcm_vhost
  isn't in mainline in userspace yet either.
 
 
 Do you have a preference for a VHOST_SCSI_ABI_VERSION starting point
 here..?
 
 I thought that v1 would be helpful to avoid confusion with the older
 userspace code, but don't really have a strong opinion either way..
 
 Let me know what you'd prefer here, and I'll make the changes to
 tcm_vhost + vhost-scsi patch series accordingly.

I don't think 0 for out-of-tree is needed.  I'd start at 0 but either
way is okay.

The main thing I would like to confirm is that this only versions the
tcm_vhost ioctls?  In that case a single version number works.

Stefan



Re: [PATCH RESEND 0/5] Add vhost-blk support

2012-07-18 Thread Stefan Hajnoczi
On Tue, Jul 17, 2012 at 4:09 PM, Michael S. Tsirkin m...@redhat.com wrote:
 On Fri, Jul 13, 2012 at 04:55:06PM +0800, Asias He wrote:

 Hi folks,

 [I am resending to fix the broken thread in the previous one.]

 This patchset adds vhost-blk support. vhost-blk is an in-kernel virtio-blk
 device accelerator. Compared to userspace virtio-blk implementation, 
 vhost-blk
 gives about 5% to 15% performance improvement.

 Same thing as tcm_host comment:

 It seems not 100% clear whether this driver will have major
 userspace using it. And if not, it would be very hard to support a
 driver when recent userspace does not use it in the end.

 I think a good idea for 3.6 would be to make it depend on
 CONFIG_STAGING.  Then we don't commit to an ABI.  For this, you can 
 add
 a separate Kconfig and source it from drivers/staging/Kconfig.  Maybe 
 it
 needs to be in a separate directory drivers/vhost/staging/Kconfig.

 I Cc'd the list of tcm_host in the hope that you can cooperate on this.


Adding it to staging allows more people to try it out, so that's a
good thing.  If I get a moment to play with it I'll let you know the
results.
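The staging arrangement being discussed might look roughly like this (hypothetical file path, symbol name, and help text; only the Kconfig wiring is being illustrated):

```
# drivers/vhost/staging/Kconfig (hypothetical)
config VHOST_BLK
	tristate "Host kernel accelerator for virtio-blk (EXPERIMENTAL)"
	depends on BLOCK && EVENTFD && STAGING
	---help---
	  In-kernel accelerator for the virtio-blk device.  Kept under
	  staging so the ioctl ABI is not yet committed to.

# and drivers/staging/Kconfig would gain:
# source "drivers/vhost/staging/Kconfig"
```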

Stefan


Re: [PATCH v4] virtio-scsi: hotplug support for virtio-scsi

2012-07-12 Thread Stefan Hajnoczi
On Thu, Jul 5, 2012 at 10:06 AM, Cong Meng m...@linux.vnet.ibm.com wrote:
 This patch implements the hotplug support for virtio-scsi.
 When there is a device attached/detached, the virtio-scsi driver will be
 signaled via event virtual queue and it will add/remove the scsi device
 in question automatically.

 v2: handle no_event event
 v3: add handle event dropped, and typo fix
 v4: Cancel event works when exit. Coding type fix.

 Signed-off-by: Sen Wang senw...@linux.vnet.ibm.com
 Signed-off-by: Cong Meng m...@linux.vnet.ibm.com
 ---
  drivers/scsi/virtio_scsi.c  |  127 
 ++-
  include/linux/virtio_scsi.h |9 +++
  2 files changed, 135 insertions(+), 1 deletions(-)

Reviewed-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com