On Wed, Apr 7, 2021 at 2:51 AM Jorgen Hansen <jhan...@vmware.com> wrote:
>
>
> > On 6 Apr 2021, at 20:31, Jiang Wang <jiang.w...@bytedance.com> wrote:
> >
> > From: "jiang.wang" <jiang.w...@bytedance.com>
> >
> > Currently, only VMCI supports dgram sockets. To supported
> > nested VM use case, this patch removes transport_dgram and
> > uses transport_g2h and transport_h2g for dgram too.
>
> Could you provide some background for introducing this change - are you
> looking at introducing datagrams for a different transport? VMCI datagrams
> already support the nested use case,
Yes, I am trying to introduce datagram support for the virtio transport. I
wrote a spec patch for virtio dgram support and also a code patch, but the
code patch is still WIP. When I wrote this commit message, I was thinking
nested VM support was the same as multiple transport support. But now I
realize they are different. Nested VMs may use the same virtualization
layer (KVM on KVM) or different virtualization layers (KVM on ESXi).
Thanks for letting me know that VMCI already supports the nested use case.
I think you mean VMCI on VMCI, right?

> but if we need to support multiple datagram
> transports we need to rework how we administer port assignment for datagrams.
> One specific issue is that the vmci transport won’t receive any datagrams for a
> port unless the datagram socket has already been assigned the vmci transport
> and the port bound to the underlying VMCI device (see below for more details).

I see.

> > The transport is assgined when sending every packet and
> > receiving every packet on dgram sockets.
>
> Is the intent that the same datagram socket can be used for sending packets both
> In the host to guest, and the guest to directions?

Nope. One datagram socket will only send packets in one direction, either
to the host or to the guest. My description above is wrong. When sending
packets, the transport is assigned with the first packet (via auto_bind).

The problem is receiving packets. The listener can bind to the
VMADDR_CID_ANY address, and then it is unclear which transport we should
use. For stream sockets, there will be a new socket for each connection,
and the transport can be decided at that time. For datagram sockets, I am
not sure how to handle that. For VMCI, can the same transport be used for
receiving both from the host and from the guest? For virtio, the h2g and
g2h transports are different, so we have to choose one of them. My original
thought is to wait until the first packet arrives.
Another idea is that we always bind to the host addr and use the h2g
transport, because I think that might be more common. If a listener wants
to recv packets from the host, then it should bind to the guest addr
instead of CID_ANY. Any other suggestions?

> Also, as mentioned above the vSocket datagram needs to be bound to a port in the
> VMCI transport before we can receive any datagrams on that port. This means that
> vmci_transport_recv_dgram_cb won’t be called unless it is already associated with
> a socket as the transport, and therefore we can’t delay the transport assignment to
> that point.

Got it. Thanks. Please see the above replies.

> > Signed-off-by: Jiang Wang <jiang.w...@bytedance.com>
> > ---
> > This patch is not tested. I don't have a VMWare testing
> > environment. Could someone help me to test it?
> >
> >  include/net/af_vsock.h         |  2 --
> >  net/vmw_vsock/af_vsock.c       | 63 +++++++++++++++++++++---------------------
> >  net/vmw_vsock/vmci_transport.c | 20 +++++++++-----
> >  3 files changed, 45 insertions(+), 40 deletions(-)
> >
> > diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
> > index b1c717286993..aba241e0d202 100644
> > --- a/include/net/af_vsock.h
> > +++ b/include/net/af_vsock.h
> > @@ -96,8 +96,6 @@ struct vsock_transport_send_notify_data {
> >  #define VSOCK_TRANSPORT_F_H2G          0x00000001
> >  /* Transport provides guest->host communication */
> >  #define VSOCK_TRANSPORT_F_G2H          0x00000002
> > -/* Transport provides DGRAM communication */
> > -#define VSOCK_TRANSPORT_F_DGRAM        0x00000004
> >  /* Transport provides local (loopback) communication */
> >  #define VSOCK_TRANSPORT_F_LOCAL        0x00000008
> >
> > diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
> > index 92a72f0e0d94..7739ab2521a1 100644
> > --- a/net/vmw_vsock/af_vsock.c
> > +++ b/net/vmw_vsock/af_vsock.c
> > @@ -449,8 +449,6 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
> >
> >  	switch (sk->sk_type) {
> >  	case SOCK_DGRAM:
> > -		new_transport = transport_dgram;
> > -		break;
> >  	case SOCK_STREAM:
> >  		if (vsock_use_local_transport(remote_cid))
> >  			new_transport = transport_local;
> > @@ -1096,7 +1094,6 @@ static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
> >  	struct sock *sk;
> >  	struct vsock_sock *vsk;
> >  	struct sockaddr_vm *remote_addr;
> > -	const struct vsock_transport *transport;
> >
> >  	if (msg->msg_flags & MSG_OOB)
> >  		return -EOPNOTSUPP;
> > @@ -1108,25 +1105,30 @@ static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
> >
> >  	lock_sock(sk);
> >
> > -	transport = vsk->transport;
> > -
> >  	err = vsock_auto_bind(vsk);
> >  	if (err)
> >  		goto out;
> >
> > -
> >  	/* If the provided message contains an address, use that. Otherwise
> >  	 * fall back on the socket's remote handle (if it has been connected).
> >  	 */
> >  	if (msg->msg_name &&
> >  	    vsock_addr_cast(msg->msg_name, msg->msg_namelen, &remote_addr) == 0) {
> > +		vsock_addr_init(&vsk->remote_addr, remote_addr->svm_cid,
> > +				remote_addr->svm_port);
> > +
> > +		err = vsock_assign_transport(vsk, NULL);
> > +		if (err) {
> > +			err = -EINVAL;
> > +			goto out;
> > +		}
> > +
> >  		/* Ensure this address is of the right type and is a valid
> >  		 * destination.
> >  		 */
> > -
> >  		if (remote_addr->svm_cid == VMADDR_CID_ANY)
> > -			remote_addr->svm_cid = transport->get_local_cid();
> > +			remote_addr->svm_cid = vsk->transport->get_local_cid();
> >
> >  		if (!vsock_addr_bound(remote_addr)) {
> >  			err = -EINVAL;
> > @@ -1136,7 +1138,7 @@ static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
> >  		remote_addr = &vsk->remote_addr;
> >
> >  		if (remote_addr->svm_cid == VMADDR_CID_ANY)
> > -			remote_addr->svm_cid = transport->get_local_cid();
> > +			remote_addr->svm_cid = vsk->transport->get_local_cid();
> >
> >  		/* XXX Should connect() or this function ensure remote_addr is
> >  		 * bound?
> > @@ -1150,13 +1152,13 @@ static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
> >  		goto out;
> >  	}
> >
> > -	if (!transport->dgram_allow(remote_addr->svm_cid,
> > +	if (!vsk->transport->dgram_allow(remote_addr->svm_cid,
> >  				    remote_addr->svm_port)) {
> >  		err = -EINVAL;
> >  		goto out;
> >  	}
> >
> > -	err = transport->dgram_enqueue(vsk, remote_addr, msg, len);
> > +	err = vsk->transport->dgram_enqueue(vsk, remote_addr, msg, len);
> >
> > out:
> >  	release_sock(sk);
> > @@ -1191,13 +1193,20 @@ static int vsock_dgram_connect(struct socket *sock,
> >  	if (err)
> >  		goto out;
> >
> > +	memcpy(&vsk->remote_addr, remote_addr, sizeof(vsk->remote_addr));
> > +
> > +	err = vsock_assign_transport(vsk, NULL);
> > +	if (err) {
> > +		err = -EINVAL;
> > +		goto out;
> > +	}
> > +
> >  	if (!vsk->transport->dgram_allow(remote_addr->svm_cid,
> >  					 remote_addr->svm_port)) {
> >  		err = -EINVAL;
> >  		goto out;
> >  	}
> >
> > -	memcpy(&vsk->remote_addr, remote_addr, sizeof(vsk->remote_addr));
> >  	sock->state = SS_CONNECTED;
> >
> > out:
> > @@ -1209,6 +1218,16 @@ static int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
> >  			       size_t len, int flags)
> >  {
> >  	struct vsock_sock *vsk = vsock_sk(sock->sk);
> > +	long timeo;
> > +
> > +	timeo = sock_rcvtimeo(sock->sk, flags & MSG_DONTWAIT);
> > +	do {
> > +		if (vsk->transport)
> > +			break;
> > +	} while (timeo && !vsk->transport);
> > +
> > +	if (!vsk->transport)
> > +		return -EAGAIN;
> >
> >  	return vsk->transport->dgram_dequeue(vsk, msg, len, flags);
> >  }
> > @@ -2055,14 +2074,6 @@ static int vsock_create(struct net *net, struct socket *sock,
> >
> >  	vsk = vsock_sk(sk);
> >
> > -	if (sock->type == SOCK_DGRAM) {
> > -		ret = vsock_assign_transport(vsk, NULL);
> > -		if (ret < 0) {
> > -			sock_put(sk);
> > -			return ret;
> > -		}
> > -	}
> > -
> >  	vsock_insert_unbound(vsk);
> >
> >  	return 0;
> > @@ -2182,7 +2193,7 @@ EXPORT_SYMBOL_GPL(vsock_core_get_transport);
> >
> >  int vsock_core_register(const struct vsock_transport *t, int features)
> >  {
> > -	const struct vsock_transport *t_h2g, *t_g2h, *t_dgram, *t_local;
> > +	const struct vsock_transport *t_h2g, *t_g2h, *t_local;
> >  	int err = mutex_lock_interruptible(&vsock_register_mutex);
> >
> >  	if (err)
> > @@ -2190,7 +2201,6 @@ int vsock_core_register(const struct vsock_transport *t, int features)
> >
> >  	t_h2g = transport_h2g;
> >  	t_g2h = transport_g2h;
> > -	t_dgram = transport_dgram;
> >  	t_local = transport_local;
> >
> >  	if (features & VSOCK_TRANSPORT_F_H2G) {
> > @@ -2209,14 +2219,6 @@ int vsock_core_register(const struct vsock_transport *t, int features)
> >  		t_g2h = t;
> >  	}
> >
> > -	if (features & VSOCK_TRANSPORT_F_DGRAM) {
> > -		if (t_dgram) {
> > -			err = -EBUSY;
> > -			goto err_busy;
> > -		}
> > -		t_dgram = t;
> > -	}
> > -
> >  	if (features & VSOCK_TRANSPORT_F_LOCAL) {
> >  		if (t_local) {
> >  			err = -EBUSY;
> > @@ -2227,7 +2229,6 @@ int vsock_core_register(const struct vsock_transport *t, int features)
> >
> >  	transport_h2g = t_h2g;
> >  	transport_g2h = t_g2h;
> > -	transport_dgram = t_dgram;
> >  	transport_local = t_local;
> >
> > err_busy:
> > diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
> > index 8b65323207db..42ea2a1c92ce 100644
> > --- a/net/vmw_vsock/vmci_transport.c
> > +++ b/net/vmw_vsock/vmci_transport.c
> > @@ -613,6 +613,7 @@ static int vmci_transport_recv_dgram_cb(void *data, struct vmci_datagram *dg)
> >  	size_t size;
> >  	struct sk_buff *skb;
> >  	struct vsock_sock *vsk;
> > +	int err;
> >
> >  	sk = (struct sock *)data;
> >
> > @@ -629,6 +630,17 @@ static int vmci_transport_recv_dgram_cb(void *data, struct vmci_datagram *dg)
> >  	if (!vmci_transport_allow_dgram(vsk, dg->src.context))
> >  		return VMCI_ERROR_NO_ACCESS;
> >
> > +	vsock_addr_init(&vsk->remote_addr, dg->src.context,
> > +			dg->src.resource);
> > +
> > +	bh_lock_sock(sk);
> > +	if (!sock_owned_by_user(sk)) {
> > +		err = vsock_assign_transport(vsk, NULL);
> > +		if (err)
> > +			return err;
> > +	}
> > +	bh_unlock_sock(sk);
> > +
> >  	size = VMCI_DG_SIZE(dg);
> >
> >  	/* Attach the packet to the socket's receive queue as an sk_buff. */
> > @@ -2093,13 +2105,7 @@ static int __init vmci_transport_init(void)
> >  		goto err_destroy_stream_handle;
> >  	}
> >
> > -	/* Register only with dgram feature, other features (H2G, G2H) will be
> > -	 * registered when the first host or guest becomes active.
> > -	 */
> > -	err = vsock_core_register(&vmci_transport, VSOCK_TRANSPORT_F_DGRAM);
> > -	if (err < 0)
> > -		goto err_unsubscribe;
> > -
> > +	/* H2G, G2H will be registered when the first host or guest becomes active. */
> >  	err = vmci_register_vsock_callback(vmci_vsock_transport_cb);
> >  	if (err < 0)
> >  		goto err_unregister;
> > --
> > 2.11.0
>
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization