Re: [Pv-drivers] [PATCH 0/6] VSOCK for Linux upstreaming
Hi Dave,

> >> Instead, what I remember doing was deferring to the feedback these
> >> folks received, stating that ideas that the virtio people had
> >> mentioned should be considered instead.
> >>
> >> http://marc.info/?l=linux-netdev&m=135301515818462&w=2
> >
> > I believe Andy replied to Anthony's AF_VMCHANNEL post and the
> > differences between the proposed solutions.
>
> I'd much rather see a hypervisor neutral solution than a hypervisor
> specific one which this certainly is.

We've addressed this with the latest patch series, which I sent earlier
today. vSockets now has support for pluggable transports, of which VMCI
happens to be the first; all transport code is separated out into its
own module. So the core is now hypervisor-neutral.

Given that, would you be willing to re-consider it, please? If at all
possible, we'd like to make the current merge window.

Thanks so much!
- Andy

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
[PATCH v3] tcm_vhost: Multi-target support
In order to take advantage of Paolo's multi-queue virtio-scsi, we need
multi-target support in tcm_vhost first. Otherwise all the requests go
to one queue and the other queues are idle.

This patch makes:

1. All the targets under the wwpn visible and usable by the guest.
2. No need to pass the tpgt number in struct vhost_scsi_target to
   tcm_vhost.ko. Only the wwpn is needed.
3. We can always pass max_target = 255 to the guest now, since we abort
   any request whose target id does not exist.

Changes in v2:
- Handle non-contiguous tpgt

Changes in v3:
- Simplify locking in vhost_scsi_set_endpoint
- Return -EEXIST when the endpoint does not match

Signed-off-by: Asias He
---
 drivers/vhost/tcm_vhost.c | 131 +-
 drivers/vhost/tcm_vhost.h |   4 +-
 2 files changed, 85 insertions(+), 50 deletions(-)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index 704e4f6..81ecda5 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -59,8 +59,14 @@ enum {
 	VHOST_SCSI_VQ_IO = 2,
 };
 
+#define VHOST_SCSI_MAX_TARGET	256
+
 struct vhost_scsi {
-	struct tcm_vhost_tpg *vs_tpg;	/* Protected by vhost_scsi->dev.mutex */
+	/* Protected by vhost_scsi->dev.mutex */
+	struct tcm_vhost_tpg *vs_tpg[VHOST_SCSI_MAX_TARGET];
+	char vs_vhost_wwpn[TRANSPORT_IQN_LEN];
+	bool vs_endpoint;
+
 	struct vhost_dev dev;
 	struct vhost_virtqueue vqs[3];
 
@@ -564,10 +570,10 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs)
 	u32 exp_data_len, data_first, data_num, data_direction;
 	unsigned out, in, i;
 	int head, ret;
+	u8 target;
 
 	/* Must use ioctl VHOST_SCSI_SET_ENDPOINT */
-	tv_tpg = vs->vs_tpg;
-	if (unlikely(!tv_tpg))
+	if (unlikely(!vs->vs_endpoint))
 		return;
 
 	mutex_lock(&vq->mutex);
@@ -635,6 +641,28 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs)
 			break;
 		}
 
+		/* Extract the tpgt */
+		target = v_req.lun[1];
+		tv_tpg = vs->vs_tpg[target];
+
+		/* Target does not exist, fail the request */
+		if (unlikely(!tv_tpg)) {
+			struct virtio_scsi_cmd_resp __user *resp;
+			struct virtio_scsi_cmd_resp rsp;
+
+			memset(&rsp, 0, sizeof(rsp));
+			rsp.response = VIRTIO_SCSI_S_BAD_TARGET;
+			resp = vq->iov[out].iov_base;
+			ret = __copy_to_user(resp, &rsp, sizeof(rsp));
+			if (!ret)
+				vhost_add_used_and_signal(&vs->dev,
+						&vs->vqs[2], head, 0);
+			else
+				pr_err("Faulted on virtio_scsi_cmd_resp\n");
+
+			continue;
+		}
+
 		exp_data_len = 0;
 		for (i = 0; i < data_num; i++)
 			exp_data_len += vq->iov[data_first + i].iov_len;
@@ -743,7 +771,8 @@ static int vhost_scsi_set_endpoint(
 {
 	struct tcm_vhost_tport *tv_tport;
 	struct tcm_vhost_tpg *tv_tpg;
-	int index;
+	bool match = false;
+	int index, ret;
 
 	mutex_lock(&vs->dev.mutex);
 	/* Verify that ring has been setup correctly. */
@@ -754,7 +783,6 @@ static int vhost_scsi_set_endpoint(
 			return -EFAULT;
 		}
 	}
-	mutex_unlock(&vs->dev.mutex);
 
 	mutex_lock(&tcm_vhost_mutex);
 	list_for_each_entry(tv_tpg, &tcm_vhost_list, tv_tpg_list) {
@@ -769,30 +797,33 @@ static int vhost_scsi_set_endpoint(
 		}
 		tv_tport = tv_tpg->tport;
 
-		if (!strcmp(tv_tport->tport_name, t->vhost_wwpn) &&
-		    (tv_tpg->tport_tpgt == t->vhost_tpgt)) {
-			tv_tpg->tv_tpg_vhost_count++;
-			mutex_unlock(&tv_tpg->tv_tpg_mutex);
-			mutex_unlock(&tcm_vhost_mutex);
-
-			mutex_lock(&vs->dev.mutex);
-			if (vs->vs_tpg) {
-				mutex_unlock(&vs->dev.mutex);
-				mutex_lock(&tv_tpg->tv_tpg_mutex);
-				tv_tpg->tv_tpg_vhost_count--;
+		if (!strcmp(tv_tport->tport_name, t->vhost_wwpn)) {
+			if (vs->vs_tpg[tv_tpg->tport_tpgt]) {
 				mutex_unlock(&tv_tpg->tv_tpg_mutex);
+				mutex_unlock(&tcm_vhost_mutex);
+				mutex_unlock(&vs->dev.mutex);
 				return -EEXIST;
 			}
-
-			vs->vs_tpg = tv_tpg;
+			tv_tpg->tv_tpg_vhost_count++;
+			vs->vs_tpg[tv_tpg->tport_tpgt] = tv_tpg;
 			smp_mb__after_atomic_inc();
-
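The dispatch logic this patch introduces can be modeled in plain userspace C: a fixed table of 256 target slots indexed by the id byte the guest places in the virtio LUN field (byte 1, as tcm_vhost reads v_req.lun[1]), with an empty slot turned into a BAD_TARGET response rather than a dropped request. This is an illustrative sketch; the names `dispatch`, `tpg_table`, and `S_BAD_TARGET` are stand-ins, not the driver's own symbols.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_TARGET 256
#define S_OK 0
#define S_BAD_TARGET 3 /* stand-in for VIRTIO_SCSI_S_BAD_TARGET */

struct tpg { int tpgt; };

/* One slot per possible target id; NULL means "no such target". */
static struct tpg *tpg_table[MAX_TARGET];

/* Look up the target encoded in byte 1 of the 8-byte virtio LUN field.
 * A missing target yields a response code instead of stalling the ring. */
static int dispatch(const uint8_t lun[8], struct tpg **out)
{
	uint8_t target = lun[1];
	struct tpg *tpg = tpg_table[target];

	if (!tpg) {
		*out = NULL;
		return S_BAD_TARGET; /* fail the request, keep the ring alive */
	}
	*out = tpg;
	return S_OK;
}
```

Because the index is a u8, every id the guest can produce stays inside the table, which is what lets the host advertise max_target = 255 unconditionally.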
Re: [PATCH v2] tcm_vhost: Multi-target support
On 02/05/2013 04:48 AM, Nicholas A. Bellinger wrote:
> Hi Asias,
>
> On Fri, 2013-02-01 at 16:16 +0800, Asias He wrote:
>> In order to take advantages of Paolo's multi-queue virito-scsi, we need
>> multi-target support in tcm_vhost first. Otherwise all the requests go
>> to one queue and other queues are idle.
>>
>> This patch makes:
>>
>> 1. All the targets under the wwpn is seen and can be used by guest.
>> 2. No need to pass the tpgt number in struct vhost_scsi_target to
>>    tcm_vhost.ko. Only wwpn is needed.
>> 3. We can always pass max_target = 255 to guest now, since we abort the
>>    request who's target id does not exist.
>>
>> Changes in v2:
>> - Handle non-contiguous tpgt
>>
>> Signed-off-by: Asias He
>> ---
>
> So this patch is looks pretty good. Just a few points below..

Thanks.

>> drivers/vhost/tcm_vhost.c | 117 ++
>> drivers/vhost/tcm_vhost.h |   4 +-
>> 2 files changed, 79 insertions(+), 42 deletions(-)
>>
>> diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
>> index 218deb6..f1481f0 100644
>> --- a/drivers/vhost/tcm_vhost.c
>> +++ b/drivers/vhost/tcm_vhost.c
>> @@ -59,8 +59,14 @@ enum {
>>  	VHOST_SCSI_VQ_IO = 2,
>>  };
>>
>> +#define VHOST_SCSI_MAX_TARGET	256
>> +
>>  struct vhost_scsi {
>> -	struct tcm_vhost_tpg *vs_tpg;	/* Protected by vhost_scsi->dev.mutex */
>> +	/* Protected by vhost_scsi->dev.mutex */
>> +	struct tcm_vhost_tpg *vs_tpg[VHOST_SCSI_MAX_TARGET];
>> +	char vs_vhost_wwpn[TRANSPORT_IQN_LEN];
>> +	bool vs_endpoint;
>> +
>>  	struct vhost_dev dev;
>>  	struct vhost_virtqueue vqs[3];
>>
>> @@ -564,13 +570,11 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs)
>>  	u32 exp_data_len, data_first, data_num, data_direction;
>>  	unsigned out, in, i;
>>  	int head, ret;
>> +	u8 target;
>>
>>  	/* Must use ioctl VHOST_SCSI_SET_ENDPOINT */
>> -	tv_tpg = vs->vs_tpg;
>> -	if (unlikely(!tv_tpg)) {
>> -		pr_err("%s endpoint not set\n", __func__);
>> +	if (unlikely(!vs->vs_endpoint))
>>  		return;
>> -	}
>>
>>  	mutex_lock(&vq->mutex);
>>  	vhost_disable_notify(&vs->dev, vq);
>> @@ -637,6 +641,28 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs)
>>  			break;
>>  		}
>>
>> +		/* Extract the tpgt */
>> +		target = v_req.lun[1];
>> +		tv_tpg = vs->vs_tpg[target];
>> +
>> +		/* Target does not exist, fail the request */
>> +		if (unlikely(!tv_tpg)) {
>> +			struct virtio_scsi_cmd_resp __user *resp;
>> +			struct virtio_scsi_cmd_resp rsp;
>> +
>> +			memset(&rsp, 0, sizeof(rsp));
>> +			rsp.response = VIRTIO_SCSI_S_BAD_TARGET;
>> +			resp = vq->iov[out].iov_base;
>> +			ret = __copy_to_user(resp, &rsp, sizeof(rsp));
>> +			if (!ret)
>> +				vhost_add_used_and_signal(&vs->dev,
>> +						&vs->vqs[2], head, 0);
>> +			else
>> +				pr_err("Faulted on virtio_scsi_cmd_resp\n");
>> +
>> +			continue;
>> +		}
>> +
>>  		exp_data_len = 0;
>>  		for (i = 0; i < data_num; i++)
>>  			exp_data_len += vq->iov[data_first + i].iov_len;
>> @@ -745,6 +771,7 @@ static int vhost_scsi_set_endpoint(
>>  {
>>  	struct tcm_vhost_tport *tv_tport;
>>  	struct tcm_vhost_tpg *tv_tpg;
>> +	bool match = false;
>>  	int index;
>>
>>  	mutex_lock(&vs->dev.mutex);
>> @@ -771,14 +798,11 @@ static int vhost_scsi_set_endpoint(
>>  	}
>>  	tv_tport = tv_tpg->tport;
>>
>> -	if (!strcmp(tv_tport->tport_name, t->vhost_wwpn) &&
>> -	    (tv_tpg->tport_tpgt == t->vhost_tpgt)) {
>> +	if (!strcmp(tv_tport->tport_name, t->vhost_wwpn)) {
>>  		tv_tpg->tv_tpg_vhost_count++;
>> -		mutex_unlock(&tv_tpg->tv_tpg_mutex);
>> -		mutex_unlock(&tcm_vhost_mutex);
>>
>>  		mutex_lock(&vs->dev.mutex);
>> -		if (vs->vs_tpg) {
>> +		if (vs->vs_tpg[tv_tpg->tport_tpgt]) {
>>  			mutex_unlock(&vs->dev.mutex);
>>  			mutex_lock(&tv_tpg->tv_tpg_mutex);
>>  			tv_tpg->tv_tpg_vhost_count--;
>> @@ -786,15 +810,24 @@ static int vhost_scsi_set_endpoint(
>>  			return -EEXIST;
>>  		}
>>
>> -		vs->vs_tpg = tv_tpg;
>> +		vs->vs_tpg[tv_tpg->tport_tpgt] = tv_tpg;
>>  		smp_mb__after_atomic_inc();
>> +		match = true;
>>  		mutex_unlock(&vs->dev.mutex);
>> -		return 0;
>>  	}
>>  	mutex_unlock(&tv_tpg->tv
Re: [PATCH 1/1] VSOCK: Introduce VM Sockets
Hi Gerd,

> From my side the minimum requirement is to have
> vsock_(un)register_transport calls available, so it is possible to
> write a virtio transport module without having to patch vsock code
> to hook it up.

We've done exactly that. It's now split into two separate modules, with
the core module offering precisely the calls you've requested. All
transport code is now its own module. So now you should be able to add
a vsock_transport_virtio module, which can register with the core.

> >>> +	struct {
> >>> +		/* For DGRAMs. */
> >>> +		struct vmci_handle dg_handle;
> >>
> >> Yep, should be a pointer where transports can hook in their private
> >> data.
> >
> > I'm fixing this.
>
> Yes, please, that is needed too to get started with virtio support.

Fixed: it's now a void * in the core socket structure. I believe it's
now at the point where you can start working on the virtio transport,
and we'll address API issues as they arise and refine it as necessary.

So please take one last look and let us know what you think; hopefully
we can make the current merge window :)

Thanks!
- Andy
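The two things the thread asks for — registration calls and a transport-owned private pointer — can be sketched in a few lines of userspace C. This is only an illustration of the shape of such an API under the stated requirements; `vsock_transport_ops`, the single-transport rule, and the `trans` field name are assumptions for the sketch, not the merged vsock interface.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical callback table a transport module hands to the core. */
struct vsock_transport_ops {
	const char *name;
	int (*connect)(void *sk_priv);
	/* ... stream/dgram ops would follow ... */
};

static const struct vsock_transport_ops *active;

/* Core-side hooks: a transport registers itself instead of the core
 * being patched to know about it. */
int vsock_register_transport(const struct vsock_transport_ops *ops)
{
	if (active)
		return -1; /* only one transport at a time in this sketch */
	active = ops;
	return 0;
}

void vsock_unregister_transport(const struct vsock_transport_ops *ops)
{
	if (active == ops)
		active = NULL;
}

/* Per-socket state: an opaque pointer in place of a VMCI-only field
 * such as dg_handle, so any transport can hang its data here. */
struct vsock_sock {
	void *trans; /* owned by whichever transport is registered */
};
```

With this shape, a vsock_transport_virtio module would fill in its own ops table and call `vsock_register_transport()` at module init, exactly the hook-up path requested above.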
[PATCH 0/1] VM Sockets for Linux upstreaming
In an effort to improve the out-of-the-box experience with Linux kernels
for VMware users, VMware is working on readying the VM Sockets (vsock,
formerly VMCI Sockets) kernel module for inclusion in the Linux kernel.
The purpose of this post is to acquire feedback on the vsock kernel
module.

Unlike previous submissions, where the new socket family was entirely
reliant on VMware's VMCI PCI device (and thus VMware's hypervisor),
VM Sockets is now *completely* separated out into two parts, each in its
own module:

o Core socket code, which is transport-neutral and invokes transport
  callbacks to communicate with the hypervisor. This is vsock.ko.

o A VMCI transport, which communicates over VMCI with the VMware
  hypervisor. This is vmw_vsock_vmci_transport.ko, and it registers with
  the core module as a transport.

This should provide a path to introducing additional transports, for
example virtio, with the ultimate goal being to make this new socket
family hypervisor-neutral.

Andy King (1):
  VSOCK: Introduce VM Sockets

 include/linux/socket.h                       |    4 +-
 include/uapi/linux/vm_sockets.h              |  169 +++
 net/Kconfig                                  |    1 +
 net/Makefile                                 |    1 +
 net/vmw_vsock/Kconfig                        |   28 +
 net/vmw_vsock/Makefile                       |    7 +
 net/vmw_vsock/af_vsock.c                     | 2085 ++
 net/vmw_vsock/af_vsock.h                     |  170 +++
 net/vmw_vsock/vmci_transport.c               | 2050 +
 net/vmw_vsock/vmci_transport.h               |  139 ++
 net/vmw_vsock/vmci_transport_notify.c        |  680 +
 net/vmw_vsock/vmci_transport_notify.h        |   83 +
 net/vmw_vsock/vmci_transport_notify_qstate.c |  438 ++
 net/vmw_vsock/vsock_addr.c                   |  128 ++
 net/vmw_vsock/vsock_addr.h                   |   36 +
 net/vmw_vsock/vsock_version.h                |   22 +
 16 files changed, 6040 insertions(+), 1 deletions(-)
 create mode 100644 include/uapi/linux/vm_sockets.h
 create mode 100644 net/vmw_vsock/Kconfig
 create mode 100644 net/vmw_vsock/Makefile
 create mode 100644 net/vmw_vsock/af_vsock.c
 create mode 100644 net/vmw_vsock/af_vsock.h
 create mode 100644 net/vmw_vsock/vmci_transport.c
 create mode 100644 net/vmw_vsock/vmci_transport.h
 create mode 100644 net/vmw_vsock/vmci_transport_notify.c
 create mode 100644 net/vmw_vsock/vmci_transport_notify.h
 create mode 100644 net/vmw_vsock/vmci_transport_notify_qstate.c
 create mode 100644 net/vmw_vsock/vsock_addr.c
 create mode 100644 net/vmw_vsock/vsock_addr.h
 create mode 100644 net/vmw_vsock/vsock_version.h

--
1.7.4.1
Re: [PATCH 1/6] virtio_host: host-side implementation of virtio rings.
Sjur Brændeland writes:
> Hi Rusty,
>
> On Thu, Jan 17, 2013 at 11:29 AM, Rusty Russell wrote:
>
>> Getting use of virtio rings correct is tricky, and a recent patch saw
>> an implementation of in-kernel rings (as separate from userspace).
>>
>> This patch attempts to abstract the business of dealing with the
>> virtio ring layout from the access (userspace or direct); to do this,
>> we use function pointers, which gcc inlines correctly.
>
> I have been using your patches for a while in my test setup without any
> issues with vringh. The only thing I'm missing is export of symbols.
> My current caif_virtio driver expects vringh to be a module exporting
> symbols. Is this something you are planning to add?

Yes, sure. There may be some more minor changes, but nothing radical.

> I guess my vringh related stuff should go into your tree together with
> your vringh patches...
> Would you be willing to take this via your tree, provided that I get
> acks from the right people?

Yes please. Now that I have set up a test rig for vhost (unfortunately
not with 10Ge), I can make progress on the vhost adaptation.

Thanks,
Rusty.
Re: [PATCH v2] tcm_vhost: Multi-target support
Hi Asias,

On Fri, 2013-02-01 at 16:16 +0800, Asias He wrote:
> In order to take advantages of Paolo's multi-queue virito-scsi, we need
> multi-target support in tcm_vhost first. Otherwise all the requests go
> to one queue and other queues are idle.
>
> This patch makes:
>
> 1. All the targets under the wwpn is seen and can be used by guest.
> 2. No need to pass the tpgt number in struct vhost_scsi_target to
>    tcm_vhost.ko. Only wwpn is needed.
> 3. We can always pass max_target = 255 to guest now, since we abort the
>    request who's target id does not exist.
>
> Changes in v2:
> - Handle non-contiguous tpgt
>
> Signed-off-by: Asias He
> ---

So this patch looks pretty good. Just a few points below..

> drivers/vhost/tcm_vhost.c | 117 ++
> drivers/vhost/tcm_vhost.h |   4 +-
> 2 files changed, 79 insertions(+), 42 deletions(-)
>
> diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
> index 218deb6..f1481f0 100644
> --- a/drivers/vhost/tcm_vhost.c
> +++ b/drivers/vhost/tcm_vhost.c
> @@ -59,8 +59,14 @@ enum {
>  	VHOST_SCSI_VQ_IO = 2,
>  };
>
> +#define VHOST_SCSI_MAX_TARGET	256
> +
>  struct vhost_scsi {
> -	struct tcm_vhost_tpg *vs_tpg;	/* Protected by vhost_scsi->dev.mutex */
> +	/* Protected by vhost_scsi->dev.mutex */
> +	struct tcm_vhost_tpg *vs_tpg[VHOST_SCSI_MAX_TARGET];
> +	char vs_vhost_wwpn[TRANSPORT_IQN_LEN];
> +	bool vs_endpoint;
> +
>  	struct vhost_dev dev;
>  	struct vhost_virtqueue vqs[3];
>
> @@ -564,13 +570,11 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs)
>  	u32 exp_data_len, data_first, data_num, data_direction;
>  	unsigned out, in, i;
>  	int head, ret;
> +	u8 target;
>
>  	/* Must use ioctl VHOST_SCSI_SET_ENDPOINT */
> -	tv_tpg = vs->vs_tpg;
> -	if (unlikely(!tv_tpg)) {
> -		pr_err("%s endpoint not set\n", __func__);
> +	if (unlikely(!vs->vs_endpoint))
>  		return;
> -	}
>
>  	mutex_lock(&vq->mutex);
>  	vhost_disable_notify(&vs->dev, vq);
> @@ -637,6 +641,28 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs)
>  			break;
>  		}
>
> +		/* Extract the tpgt */
> +		target = v_req.lun[1];
> +		tv_tpg = vs->vs_tpg[target];
> +
> +		/* Target does not exist, fail the request */
> +		if (unlikely(!tv_tpg)) {
> +			struct virtio_scsi_cmd_resp __user *resp;
> +			struct virtio_scsi_cmd_resp rsp;
> +
> +			memset(&rsp, 0, sizeof(rsp));
> +			rsp.response = VIRTIO_SCSI_S_BAD_TARGET;
> +			resp = vq->iov[out].iov_base;
> +			ret = __copy_to_user(resp, &rsp, sizeof(rsp));
> +			if (!ret)
> +				vhost_add_used_and_signal(&vs->dev,
> +						&vs->vqs[2], head, 0);
> +			else
> +				pr_err("Faulted on virtio_scsi_cmd_resp\n");
> +
> +			continue;
> +		}
> +
>  		exp_data_len = 0;
>  		for (i = 0; i < data_num; i++)
>  			exp_data_len += vq->iov[data_first + i].iov_len;
> @@ -745,6 +771,7 @@ static int vhost_scsi_set_endpoint(
>  {
>  	struct tcm_vhost_tport *tv_tport;
>  	struct tcm_vhost_tpg *tv_tpg;
> +	bool match = false;
>  	int index;
>
>  	mutex_lock(&vs->dev.mutex);
> @@ -771,14 +798,11 @@ static int vhost_scsi_set_endpoint(
>  	}
>  	tv_tport = tv_tpg->tport;
>
> -	if (!strcmp(tv_tport->tport_name, t->vhost_wwpn) &&
> -	    (tv_tpg->tport_tpgt == t->vhost_tpgt)) {
> +	if (!strcmp(tv_tport->tport_name, t->vhost_wwpn)) {
>  		tv_tpg->tv_tpg_vhost_count++;
> -		mutex_unlock(&tv_tpg->tv_tpg_mutex);
> -		mutex_unlock(&tcm_vhost_mutex);
>
>  		mutex_lock(&vs->dev.mutex);
> -		if (vs->vs_tpg) {
> +		if (vs->vs_tpg[tv_tpg->tport_tpgt]) {
>  			mutex_unlock(&vs->dev.mutex);
>  			mutex_lock(&tv_tpg->tv_tpg_mutex);
>  			tv_tpg->tv_tpg_vhost_count--;
> @@ -786,15 +810,24 @@ static int vhost_scsi_set_endpoint(
>  			return -EEXIST;
>  		}
>
> -		vs->vs_tpg = tv_tpg;
> +		vs->vs_tpg[tv_tpg->tport_tpgt] = tv_tpg;
>  		smp_mb__after_atomic_inc();
> +		match = true;
>  		mutex_unlock(&vs->dev.mutex);
> -		return 0;
>  	}
>  	mutex_unlock(&tv_tpg->tv_tpg_mutex);
>  }
>  mutex_unlock(&tcm_vhost_mutex);
> -
> +
> 	mutex_lock(&vs->dev.mu
Re: [PATCH 1/6] virtio_host: host-side implementation of virtio rings.
Hi Rusty,

On Thu, Jan 17, 2013 at 11:29 AM, Rusty Russell wrote:
> Getting use of virtio rings correct is tricky, and a recent patch saw
> an implementation of in-kernel rings (as separate from userspace).
>
> This patch attempts to abstract the business of dealing with the
> virtio ring layout from the access (userspace or direct); to do this,
> we use function pointers, which gcc inlines correctly.

I have been using your patches for a while in my test setup without any
issues with vringh. The only thing I'm missing is export of symbols.
My current caif_virtio driver expects vringh to be a module exporting
symbols. Is this something you are planning to add?

I guess my vringh-related stuff should go into your tree together with
your vringh patches... Would you be willing to take this via your tree,
provided that I get acks from the right people?

Regards,
Sjur
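The abstraction Rusty describes — one ring-layout walker, with "how do I touch ring memory" hidden behind function pointers so the same code serves direct (in-kernel) and copying (userspace) access — can be sketched in userspace C. The names below (`ring_access`, `get_u16_direct`, `read_avail_idx`) are invented for the sketch and are not vringh's real API.

```c
#include <stdint.h>
#include <string.h>

/* Access method: how to read a u16 out of ring memory. */
struct ring_access {
	int (*get_u16)(uint16_t *dst, const uint16_t *src);
};

/* In-kernel style: the ring is directly addressable. */
static int get_u16_direct(uint16_t *dst, const uint16_t *src)
{
	*dst = *src;
	return 0;
}

/* Userspace style: stand-in for a copy_from_user()-like primitive. */
static int get_u16_copy(uint16_t *dst, const uint16_t *src)
{
	memcpy(dst, src, sizeof(*dst));
	return 0;
}

/* One walker over the ring layout, parameterized by access method;
 * a compiler can inline the constant function pointer at each call site. */
static int read_avail_idx(const struct ring_access *a,
			  const uint16_t *avail_idx, uint16_t *out)
{
	return a->get_u16(out, avail_idx);
}
```

The payoff is the one Rusty names: the fiddly layout logic (descriptor chains, avail/used indices) is written once, and only the trivial accessors differ between the userspace and direct cases.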
Re: [PATCH 0/8] drivers/net: Remove unnecessary alloc/OOM messages
From: Joe Perches
Date: Sun, 3 Feb 2013 19:28:07 -0800

> Remove all the OOM messages that follow kernel alloc
> failures as there is already a generic equivalent to
> these messages in the mm subsystem.
>
> Joe Perches (8):
>   caif: Remove unnecessary alloc/OOM messages
>   can: Remove unnecessary alloc/OOM messages
>   ethernet: Remove unnecessary alloc/OOM messages, alloc cleanups
>   drivers: net: usb: Remove unnecessary alloc/OOM messages
>   wan: Remove unnecessary alloc/OOM messages
>   wimax: Remove unnecessary alloc/OOM messages, alloc cleanups
>   wireless: Remove unnecessary alloc/OOM messages, alloc cleanups
>   drivers:net:misc: Remove unnecessary alloc/OOM messages

Series applied, thanks Joe.
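The cleanup pattern behind this series is small: on allocation failure the driver just bails out, because the mm subsystem already emits a generic failure report, so a per-driver "out of memory" print adds nothing. A userspace illustration of the resulting shape, with `alloc_priv` as a made-up example rather than any driver's function:

```c
#include <stdlib.h>

struct priv { int id; };

static struct priv *alloc_priv(int id)
{
	struct priv *p = malloc(sizeof(*p));

	if (!p)			/* before: also printed an OOM message here */
		return NULL;	/* after: silent; the generic report suffices */
	p->id = id;
	return p;
}
```

The caller still has to check for NULL; only the redundant message on the failure path goes away.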