Re: Re: [RFC] vsock: add multiple transports support for dgram

2021-04-13 Thread Jiang Wang .
Hi Jorgen,

Thanks for the detailed explanation and I agree with you. For the bind list,
my prototype is doing something similar to that. I will double-check it.

Hi Stefano,

I don't have other questions for now. Thanks.

Regards,

Jiang

On Tue, Apr 13, 2021 at 5:52 AM Stefano Garzarella  wrote:
>
> On Tue, Apr 13, 2021 at 12:12:50PM +, Jorgen Hansen wrote:
> >
> >
> >On 12 Apr 2021, at 20:53, Jiang Wang . wrote:
> >
> >On Mon, Apr 12, 2021 at 7:04 AM Stefano Garzarella wrote:
> >
> >Hi Jiang,
> >thanks for re-starting the multi-transport support for dgram!
> >
> >No problem.
> >
> >On Wed, Apr 07, 2021 at 11:25:36AM -0700, Jiang Wang . wrote:
> >On Wed, Apr 7, 2021 at 2:51 AM Jorgen Hansen wrote:
> >
> >
> >On 6 Apr 2021, at 20:31, Jiang Wang wrote:
> >
> >From: "jiang.wang" 
> >mailto:jiang.w...@bytedance.com>>
> >
> >Currently, only VMCI supports dgram sockets. To support the
> >nested VM use case, this patch removes transport_dgram and
> >uses transport_g2h and transport_h2g for dgram too.
> >
> >I agree on this part, I think that's the direction to go.
> >transport_dgram was added as a shortcut.
> >
> >Got it.
> >
> >
> >Could you provide some background for introducing this change - are you
> >looking at introducing datagrams for a different transport? VMCI datagrams
> >already support the nested use case,
> >
> >Yes, I am trying to introduce datagram support for the virtio transport. I
> >wrote a spec patch for virtio dgram support and also a code patch, but the
> >code patch is still WIP. When I wrote this commit message, I was thinking
> >nested VM support is the same as multiple transport support. But now, I
> >realize they are different. Nested VMs may use the same virtualization
> >layer (KVM on KVM) or different virtualization layers (KVM on ESXi). Thanks
> >for letting me know that VMCI already supports the nested use case. I think
> >you mean VMCI on VMCI, right?
> >
> >but if we need to support multiple datagram transports we need to rework
> >how we administer port assignment for datagrams. One specific issue is that
> >the vmci transport won’t receive any datagrams for a port unless the
> >datagram socket has already been assigned the vmci transport and the port
> >bound to the underlying VMCI device (see below for more details).
> >
> >I see.
> >
> >The transport is assigned when sending every packet and
> >receiving every packet on dgram sockets.
> >
> >Is the intent that the same datagram socket can be used for sending packets
> >both in the host-to-guest and the guest-to-host directions?
> >
> >Nope. One datagram socket will only send packets in one direction, either
> >to the host or to the guest. My above description is wrong. When sending
> >packets, the transport is assigned with the first packet (with auto_bind).
> >
> >I'm not sure this is right.
> >The auto_bind on the first packet should only assign a local port to the
> >socket, but does not affect the transport to be used.
> >
> >A user could send one packet to the nested guest and another to the host
> >using the same socket, or am I wrong?
> >
> >OK. I think you are right.
> >
> >
> >The problem is when receiving packets. The listener can bind to the
> >VMADDR_CID_ANY address. Then it is unclear which transport we should use.
> >For stream sockets, there will be a new socket for each connection, and
> >transport can be decided at that time. For datagram sockets, I am not sure
> >how to handle that.
> >
> >yes, this I think is the main problem, but maybe the sender one is even
> >more complicated.
> >
> >Maybe we should remove the 1:1 association we have now between vsk and
> >transport.
> >
> >Yes, I thought about that too. One idea is to define two transports in vsk.
> >For sending a pkt, we can pick the right transport when we get the packet,
> >like in virtio_transport_send_pkt_info(). For receiving pkts, we have to
> >check and call both transports' dequeue callbacks if the local cid is
> >CID_ANY.
> >
> >At least for DGRAM, for connected sockets I think the association makes
> >sense.
> >
> >Yeah. For a connected socket, we will only use one transport.
> >
> >For VMCI, can the same transport be used for receiving both from the host
> >and from the guest?
> >
> >Yes, they're registered at different times, but it's the same transport.
> >
> >
> >For virtio, the h2g and g2h transports are different, so we have to choose
> >one of them. My original thought is to wait until the first packet arrives.
> >
> >Another idea is that we always bind to host addr and use h2g transport
> >because I think that might be more common. If a listener wants to recv
> >packets from the host, then it should bind to the guest addr instead of
> >CID_ANY.
> >
> >Yes, I remember we discussed this idea, this would simplify the
> >receiving, but there is still the issue of a user wanting to receive
> >packets from both the nested guest and the host.

Re: [RFC] vsock: add multiple transports support for dgram

2021-04-13 Thread Jiang Wang .
On Tue, Apr 13, 2021 at 2:02 AM Jorgen Hansen  wrote:
>
>
>
> > On 7 Apr 2021, at 20:25, Jiang Wang .  wrote:
> >
> > On Wed, Apr 7, 2021 at 2:51 AM Jorgen Hansen  wrote:
> >>
> >>
> >>> On 6 Apr 2021, at 20:31, Jiang Wang  wrote:
> >>>
> >>> From: "jiang.wang" 
> >>>
> >>> Currently, only VMCI supports dgram sockets. To support the
> >>> nested VM use case, this patch removes transport_dgram and
> >>> uses transport_g2h and transport_h2g for dgram too.
> >>
> >> Could you provide some background for introducing this change - are you
> >> looking at introducing datagrams for a different transport? VMCI datagrams
> >> already support the nested use case,
> >
> > Yes, I am trying to introduce datagram support for the virtio transport.
> > I wrote a spec patch for virtio dgram support and also a code patch, but
> > the code patch is still WIP.
>
> Oh ok. Cool. I must have missed the spec patch - could you provide a
> reference to it?

Sure. here is the link:
https://lists.linuxfoundation.org/pipermail/virtualization/2021-April/053543.html

> > When I wrote this commit message, I was thinking nested VM support is the
> > same as multiple transport support. But now, I realize they are different.
> > Nested VMs may use the same virtualization layer (KVM on KVM) or different
> > virtualization layers (KVM on ESXi). Thanks for letting me know that VMCI
> > already supports the nested use case. I think you mean VMCI on VMCI, right?
>
> Right, only VMCI on VMCI.

Got it. thanks.

> I’ll respond to Stefano’s email for the rest of the discussion.
>
> Thanks,
> Jorgen

Re: [RFC] vsock: add multiple transports support for dgram

2021-04-13 Thread Stefano Garzarella

On Tue, Apr 13, 2021 at 12:12:50PM +, Jorgen Hansen wrote:



On 12 Apr 2021, at 20:53, Jiang Wang . wrote:

On Mon, Apr 12, 2021 at 7:04 AM Stefano Garzarella wrote:

Hi Jiang,
thanks for re-starting the multi-transport support for dgram!

No problem.

On Wed, Apr 07, 2021 at 11:25:36AM -0700, Jiang Wang . wrote:
On Wed, Apr 7, 2021 at 2:51 AM Jorgen Hansen wrote:


On 6 Apr 2021, at 20:31, Jiang Wang wrote:

From: "jiang.wang" 
mailto:jiang.w...@bytedance.com>>

Currently, only VMCI supports dgram sockets. To support the
nested VM use case, this patch removes transport_dgram and
uses transport_g2h and transport_h2g for dgram too.

I agree on this part, I think that's the direction to go.
transport_dgram was added as a shortcut.

Got it.


Could you provide some background for introducing this change - are you
looking at introducing datagrams for a different transport? VMCI datagrams
already support the nested use case,

Yes, I am trying to introduce datagram support for the virtio transport. I
wrote a spec patch for virtio dgram support and also a code patch, but the
code patch is still WIP. When I wrote this commit message, I was thinking
nested VM support is the same as multiple transport support. But now, I
realize they are different. Nested VMs may use the same virtualization layer
(KVM on KVM) or different virtualization layers (KVM on ESXi). Thanks for
letting me know that VMCI already supports the nested use case. I think you
mean VMCI on VMCI, right?

but if we need to support multiple datagram
transports we need to rework how we administer port assignment for datagrams.
One specific issue is that the vmci transport won’t receive any datagrams for a
port unless the datagram socket has already been assigned the vmci transport
and the port bound to the underlying VMCI device (see below for more details).

I see.

The transport is assigned when sending every packet and
receiving every packet on dgram sockets.

Is the intent that the same datagram socket can be used for sending packets
both in the host-to-guest and the guest-to-host directions?

Nope. One datagram socket will only send packets in one direction, either to the
host or to the guest. My above description is wrong. When sending packets, the
transport is assigned with the first packet (with auto_bind).

I'm not sure this is right.
The auto_bind on the first packet should only assign a local port to the
socket, but does not affect the transport to be used.

A user could send one packet to the nested guest and another to the host
using the same socket, or am I wrong?

OK. I think you are right.


The problem is when receiving packets. The listener can bind to the
VMADDR_CID_ANY address. Then it is unclear which transport we should use.
For stream sockets, there will be a new socket for each connection, and
transport can be decided at that time. For datagram sockets, I am not sure
how to handle that.

yes, this I think is the main problem, but maybe the sender one is even
more complicated.

Maybe we should remove the 1:1 association we have now between vsk and
transport.

Yes, I thought about that too. One idea is to define two transports in vsk.
For sending a pkt, we can pick the right transport when we get the packet,
like in virtio_transport_send_pkt_info(). For receiving pkts, we have to
check and call both transports' dequeue callbacks if the local cid is
CID_ANY.

At least for DGRAM, for connected sockets I think the association makes
sense.

Yeah. For a connected socket, we will only use one transport.

For VMCI, can the same transport be used for receiving both from the host
and from the guest?

Yes, they're registered at different times, but it's the same transport.


For virtio, the h2g and g2h transports are different, so we have to choose
one of them. My original thought is to wait until the first packet arrives.

Another idea is that we always bind to host addr and use h2g transport
because I think that might be more common. If a listener wants to recv
packets from the host, then it should bind to the guest addr instead of
CID_ANY.

Yes, I remember we discussed this idea, this would simplify the
receiving, but there is still the issue of a user wanting to receive
packets from both the nested guest and the host.

OK. Agree.

Any other suggestions?


I think one solution could be to remove the 1:1 association between
DGRAM socket and transport.

IIUC VMCI creates a skb for each received packet and queues it through
sk_receive_skb() directly in the struct sock.

Then the .dgram_dequeue() callback dequeues them using
skb_recv_datagram().

We can move these parts into the vsock core, and create some helpers to
allow the transports to enqueue received DGRAM packets in the same way
(and with the same format) directly in the struct sock.


I agree to use skbs (and move them to vsock core). We have another use case

Re: [RFC] vsock: add multiple transports support for dgram

2021-04-13 Thread Jorgen Hansen


On 12 Apr 2021, at 20:53, Jiang Wang . wrote:

On Mon, Apr 12, 2021 at 7:04 AM Stefano Garzarella wrote:

Hi Jiang,
thanks for re-starting the multi-transport support for dgram!

No problem.

On Wed, Apr 07, 2021 at 11:25:36AM -0700, Jiang Wang . wrote:
On Wed, Apr 7, 2021 at 2:51 AM Jorgen Hansen wrote:


On 6 Apr 2021, at 20:31, Jiang Wang wrote:

From: "jiang.wang" 
mailto:jiang.w...@bytedance.com>>

Currently, only VMCI supports dgram sockets. To support the
nested VM use case, this patch removes transport_dgram and
uses transport_g2h and transport_h2g for dgram too.

I agree on this part, I think that's the direction to go.
transport_dgram was added as a shortcut.

Got it.


Could you provide some background for introducing this change - are you
looking at introducing datagrams for a different transport? VMCI datagrams
already support the nested use case,

Yes, I am trying to introduce datagram support for the virtio transport. I
wrote a spec patch for virtio dgram support and also a code patch, but the
code patch is still WIP. When I wrote this commit message, I was thinking
nested VM support is the same as multiple transport support. But now, I
realize they are different. Nested VMs may use the same virtualization layer
(KVM on KVM) or different virtualization layers (KVM on ESXi). Thanks for
letting me know that VMCI already supports the nested use case. I think you
mean VMCI on VMCI, right?

but if we need to support multiple datagram
transports we need to rework how we administer port assignment for datagrams.
One specific issue is that the vmci transport won’t receive any datagrams for a
port unless the datagram socket has already been assigned the vmci transport
and the port bound to the underlying VMCI device (see below for more details).

I see.

The transport is assigned when sending every packet and
receiving every packet on dgram sockets.

Is the intent that the same datagram socket can be used for sending packets
both in the host-to-guest and the guest-to-host directions?

Nope. One datagram socket will only send packets in one direction, either to the
host or to the guest. My above description is wrong. When sending packets, the
transport is assigned with the first packet (with auto_bind).

I'm not sure this is right.
The auto_bind on the first packet should only assign a local port to the
socket, but does not affect the transport to be used.

A user could send one packet to the nested guest and another to the host
using the same socket, or am I wrong?

OK. I think you are right.


The problem is when receiving packets. The listener can bind to the
VMADDR_CID_ANY address. Then it is unclear which transport we should use.
For stream sockets, there will be a new socket for each connection, and
transport can be decided at that time. For datagram sockets, I am not sure
how to handle that.

yes, this I think is the main problem, but maybe the sender one is even
more complicated.

Maybe we should remove the 1:1 association we have now between vsk and
transport.

Yes, I thought about that too. One idea is to define two transports in vsk.
For sending a pkt, we can pick the right transport when we get the packet,
like in virtio_transport_send_pkt_info(). For receiving pkts, we have to
check and call both transports' dequeue callbacks if the local cid is
CID_ANY.

At least for DGRAM, for connected sockets I think the association makes
sense.

Yeah. For a connected socket, we will only use one transport.

For VMCI, can the same transport be used for receiving both from the host
and from the guest?

Yes, they're registered at different times, but it's the same transport.


For virtio, the h2g and g2h transports are different, so we have to choose
one of them. My original thought is to wait until the first packet arrives.

Another idea is that we always bind to host addr and use h2g transport
because I think that might be more common. If a listener wants to recv
packets from the host, then it should bind to the guest addr instead of
CID_ANY.

Yes, I remember we discussed this idea, this would simplify the
receiving, but there is still the issue of a user wanting to receive
packets from both the nested guest and the host.

OK. Agree.

Any other suggestions?


I think one solution could be to remove the 1:1 association between
DGRAM socket and transport.

IIUC VMCI creates a skb for each received packet and queues it through
sk_receive_skb() directly in the struct sock.

Then the .dgram_dequeue() callback dequeues them using
skb_recv_datagram().

We can move these parts into the vsock core, and create some helpers to
allow the transports to enqueue received DGRAM packets in the same way
(and with the same format) directly in the struct sock.


I agree to use skbs (and move them to vsock core). We have another use case
which will need to use skb. But I am not sure how this helps with 

Re: Re: [RFC] vsock: add multiple transports support for dgram

2021-04-12 Thread Jiang Wang .
On Mon, Apr 12, 2021 at 7:04 AM Stefano Garzarella  wrote:
>
> Hi Jiang,
> thanks for re-starting the multi-transport support for dgram!

No problem.

> On Wed, Apr 07, 2021 at 11:25:36AM -0700, Jiang Wang . wrote:
> >On Wed, Apr 7, 2021 at 2:51 AM Jorgen Hansen  wrote:
> >>
> >>
> >> > On 6 Apr 2021, at 20:31, Jiang Wang  wrote:
> >> >
> >> > From: "jiang.wang" 
> >> >
> >> > Currently, only VMCI supports dgram sockets. To support the
> >> > nested VM use case, this patch removes transport_dgram and
> >> > uses transport_g2h and transport_h2g for dgram too.
>
> I agree on this part, I think that's the direction to go.
> transport_dgram was added as a shortcut.

Got it.

> >>
> >> Could you provide some background for introducing this change - are you
> >> looking at introducing datagrams for a different transport? VMCI datagrams
> >> already support the nested use case,
> >
> >Yes, I am trying to introduce datagram support for the virtio transport. I
> >wrote a spec patch for virtio dgram support and also a code patch, but the
> >code patch is still WIP. When I wrote this commit message, I was thinking
> >nested VM support is the same as multiple transport support. But now, I
> >realize they are different. Nested VMs may use the same virtualization
> >layer (KVM on KVM) or different virtualization layers (KVM on ESXi). Thanks
> >for letting me know that VMCI already supports the nested use case. I think
> >you mean VMCI on VMCI, right?
> >
> >> but if we need to support multiple datagram transports we need to rework
> >> how we administer port assignment for datagrams. One specific issue is
> >> that the vmci transport won’t receive any datagrams for a port unless the
> >> datagram socket has already been assigned the vmci transport and the port
> >> bound to the underlying VMCI device (see below for more details).
> >>
> >I see.
> >
> >> > The transport is assigned when sending every packet and
> >> > receiving every packet on dgram sockets.
> >>
> >> Is the intent that the same datagram socket can be used for sending
> >> packets both in the host-to-guest and the guest-to-host directions?
> >
> >Nope. One datagram socket will only send packets in one direction, either
> >to the host or to the guest. My above description is wrong. When sending
> >packets, the transport is assigned with the first packet (with auto_bind).
>
> I'm not sure this is right.
> The auto_bind on the first packet should only assign a local port to the
> socket, but does not affect the transport to be used.
>
> A user could send one packet to the nested guest and another to the host
> using the same socket, or am I wrong?

OK. I think you are right.
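
For illustration, here is a minimal userspace sketch of that scenario (the
port and the nested-guest CID below are made-up values):

/* One AF_VSOCK dgram socket targeting two different CIDs: the first
 * sendto() auto-binds a local port, but the destination (and hence
 * the transport) can still differ per packet.
 */
#include <sys/socket.h>
#include <linux/vm_sockets.h>

static void send_both_ways(void)
{
	int fd = socket(AF_VSOCK, SOCK_DGRAM, 0);
	struct sockaddr_vm dst = {
		.svm_family = AF_VSOCK,
		.svm_cid    = VMADDR_CID_HOST,	/* g2h direction */
		.svm_port   = 1234,		/* made-up service port */
	};

	sendto(fd, "to host", 7, 0, (struct sockaddr *)&dst, sizeof(dst));

	dst.svm_cid = 42;	/* made-up CID of a nested guest, h2g direction */
	sendto(fd, "to guest", 8, 0, (struct sockaddr *)&dst, sizeof(dst));
}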

> >
> >The problem is when receiving packets. The listener can bind to the
> >VMADDR_CID_ANY address. Then it is unclear which transport we should use.
> >For stream sockets, there will be a new socket for each connection, and
> >transport can be decided at that time. For datagram sockets, I am not sure
> >how to handle that.
>
> yes, this I think is the main problem, but maybe the sender one is even
> more complicated.
>
> Maybe we should remove the 1:1 association we have now between vsk and
> transport.

Yes, I thought about that too. One idea is to define two transports in vsk.
For sending a pkt, we can pick the right transport when we get the packet,
like in virtio_transport_send_pkt_info(). For receiving pkts, we have to
check and call both transports' dequeue callbacks if the local cid is
CID_ANY.
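
A rough sketch of that idea (the structure and helper names here are
hypothetical, not existing kernel API; only dgram_dequeue() is the real
vsock_transport callback):

#include <net/af_vsock.h>

/* Hypothetical per-socket pair of dgram transports. */
struct vsock_dgram_transports {
	const struct vsock_transport *g2h;	/* towards the host */
	const struct vsock_transport *h2g;	/* towards a guest */
};

/* Send side: pick per packet, based on the destination CID. */
static const struct vsock_transport *
vsock_dgram_pick_transport(struct vsock_dgram_transports *t, u32 remote_cid)
{
	if (remote_cid == VMADDR_CID_HOST)
		return t->g2h;
	return t->h2g;
}

/* Receive side: with a local CID of VMADDR_CID_ANY, try both. */
static int vsock_dgram_dequeue_any(struct vsock_sock *vsk,
				   struct vsock_dgram_transports *t,
				   struct msghdr *msg, size_t len, int flags)
{
	int err = -EAGAIN;

	if (t->g2h)
		err = t->g2h->dgram_dequeue(vsk, msg, len, flags);
	if (err < 0 && t->h2g)
		err = t->h2g->dgram_dequeue(vsk, msg, len, flags);
	return err;
}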

> At least for DGRAM, for connected sockets I think the association makes
> sense.

Yeah. For a connected socket, we will only use one transport.

> >For VMCI, can the same transport be used for receiving both from the host
> >and from the guest?
>
> Yes, they're registered at different times, but it's the same transport.
>
> >
> >For virtio, the h2g and g2h transports are different, so we have to choose
> >one of them. My original thought is to wait until the first packet arrives.
> >
> >Another idea is that we always bind to host addr and use h2g transport
> >because I think that might be more common. If a listener wants to recv
> >packets from the host, then it should bind to the guest addr instead of
> >CID_ANY.
>
> Yes, I remember we discussed this idea, this would simplify the
> receiving, but there is still the issue of a user wanting to receive
> packets from both the nested guest and the host.

OK. Agree.

> >Any other suggestions?
> >
>
> I think one solution could be to remove the 1:1 association between
> DGRAM socket and transport.
>
> IIUC VMCI creates a skb for each received packet and queues it through
> sk_receive_skb() directly in the struct sock.
>
> Then the .dgram_dequeue() callback dequeues them using
> skb_recv_datagram().
>
> We can move these parts into the vsock core, and create some helpers to
> allow the transports to enqueue received DGRAM packets in the same way
> (and with the same format) directly in the struct sock.
>

I agree 

Re: [External] Re: [RFC] vsock: add multiple transports support for dgram

2021-04-12 Thread Stefano Garzarella

Hi Jiang,
thanks for re-starting the multi-transport support for dgram!

On Wed, Apr 07, 2021 at 11:25:36AM -0700, Jiang Wang . wrote:

On Wed, Apr 7, 2021 at 2:51 AM Jorgen Hansen  wrote:



> On 6 Apr 2021, at 20:31, Jiang Wang  wrote:
>
> From: "jiang.wang" 
>
> Currently, only VMCI supports dgram sockets. To support the
> nested VM use case, this patch removes transport_dgram and
> uses transport_g2h and transport_h2g for dgram too.


I agree on this part, I think that's the direction to go.  
transport_dgram was added as a shortcut.




Could you provide some background for introducing this change - are you
looking at introducing datagrams for a different transport? VMCI datagrams
already support the nested use case,


Yes, I am trying to introduce datagram support for the virtio transport. I
wrote a spec patch for virtio dgram support and also a code patch, but the
code patch is still WIP. When I wrote this commit message, I was thinking
nested VM support is the same as multiple transport support. But now, I
realize they are different. Nested VMs may use the same virtualization layer
(KVM on KVM) or different virtualization layers (KVM on ESXi). Thanks for
letting me know that VMCI already supports the nested use case. I think you
mean VMCI on VMCI, right?


but if we need to support multiple datagram
transports we need to rework how we administer port assignment for datagrams.
One specific issue is that the vmci transport won’t receive any datagrams for a
port unless the datagram socket has already been assigned the vmci transport
and the port bound to the underlying VMCI device (see below for more details).


I see.


> The transport is assigned when sending every packet and
> receiving every packet on dgram sockets.

Is the intent that the same datagram socket can be used for sending packets
both in the host-to-guest and the guest-to-host directions?


Nope. One datagram socket will only send packets in one direction, either to the
host or to the guest. My above description is wrong. When sending packets, the
transport is assigned with the first packet (with auto_bind).


I'm not sure this is right.
The auto_bind on the first packet should only assign a local port to the 
socket, but does not affect the transport to be used.


A user could send one packet to the nested guest and another to the host 
using the same socket, or am I wrong?




The problem is when receiving packets. The listener can bind to the
VMADDR_CID_ANY address. Then it is unclear which transport we should use.
For stream sockets, there will be a new socket for each connection, and
transport can be decided at that time. For datagram sockets, I am not sure
how to handle that.


yes, this I think is the main problem, but maybe the sender one is even 
more complicated.


Maybe we should remove the 1:1 association we have now between vsk and 
transport.


At least for DGRAM, for connected sockets I think the association makes 
sense.



For VMCI, can the same transport be used for receiving both from the host
and from the guest?


Yes, they're registered at different times, but it's the same transport.



For virtio, the h2g and g2h transports are different, so we have to choose
one of them. My original thought is to wait until the first packet arrives.


Another idea is that we always bind to host addr and use h2g transport
because I think that might be more common. If a listener wants to recv
packets from the host, then it should bind to the guest addr instead of
CID_ANY.


Yes, I remember we discussed this idea, this would simplify the 
receiving, but there is still the issue of a user wanting to receive 
packets from both the nested guest and the host.



Any other suggestions?



I think one solution could be to remove the 1:1 association between 
DGRAM socket and transport.


IIUC VMCI creates a skb for each received packet and queues it through 
sk_receive_skb() directly in the struct sock.


Then the .dgram_dequeue() callback dequeues them using 
skb_recv_datagram().


We can move these parts into the vsock core, and create some helpers to
allow the transports to enqueue received DGRAM packets in the same way 
(and with the same format) directly in the struct sock.
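
As a sketch, such common helpers could look like this (the helper names are
made up, and header/payload offset handling is elided):

#include <linux/kernel.h>
#include <linux/skbuff.h>
#include <net/af_vsock.h>

/* Transport side: hand a received dgram skb to the socket, the way VMCI
 * does today. sk_receive_skb() takes ownership of the skb.
 */
static int vsock_dgram_recv_skb(struct sock *sk, struct sk_buff *skb)
{
	return sk_receive_skb(sk, skb, 0);
}

/* Core side: a generic .dgram_dequeue() built on skb_recv_datagram(). */
static int vsock_dgram_dequeue_skb(struct vsock_sock *vsk, struct msghdr *msg,
				   size_t len, int flags)
{
	struct sock *sk = sk_vsock(vsk);
	struct sk_buff *skb;
	size_t copied;
	int err;

	skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT,
				flags & MSG_DONTWAIT, &err);
	if (!skb)
		return err;

	copied = min_t(size_t, len, skb->len);
	err = skb_copy_datagram_msg(skb, 0, msg, copied);
	if (!err && skb->len > len)
		msg->msg_flags |= MSG_TRUNC;

	skb_free_datagram(sk, skb);
	return err ? err : copied;
}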



What do you think?

Thanks,
Stefano


Re: [External] Re: [RFC] vsock: add multiple transports support for dgram

2021-04-07 Thread Jiang Wang .
On Wed, Apr 7, 2021 at 2:51 AM Jorgen Hansen  wrote:
>
>
> > On 6 Apr 2021, at 20:31, Jiang Wang  wrote:
> >
> > From: "jiang.wang" 
> >
> > Currently, only VMCI supports dgram sockets. To support the
> > nested VM use case, this patch removes transport_dgram and
> > uses transport_g2h and transport_h2g for dgram too.
>
> Could you provide some background for introducing this change - are you
> looking at introducing datagrams for a different transport? VMCI datagrams
> already support the nested use case,

Yes, I am trying to introduce datagram support for the virtio transport. I
wrote a spec patch for virtio dgram support and also a code patch, but the
code patch is still WIP. When I wrote this commit message, I was thinking
nested VM support is the same as multiple transport support. But now, I
realize they are different. Nested VMs may use the same virtualization layer
(KVM on KVM) or different virtualization layers (KVM on ESXi). Thanks for
letting me know that VMCI already supports the nested use case. I think you
mean VMCI on VMCI, right?

> but if we need to support multiple datagram transports we need to rework how
> we administer port assignment for datagrams. One specific issue is that the
> vmci transport won’t receive any datagrams for a port unless the datagram
> socket has already been assigned the vmci transport and the port bound to
> the underlying VMCI device (see below for more details).
>
I see.

> > The transport is assigned when sending every packet and
> > receiving every packet on dgram sockets.
>
> Is the intent that the same datagram socket can be used for sending packets
> both in the host-to-guest and the guest-to-host directions?

Nope. One datagram socket will only send packets in one direction, either to the
host or to the guest. My above description is wrong. When sending packets, the
transport is assigned with the first packet (with auto_bind).

The problem is when receiving packets. The listener can bind to the
VMADDR_CID_ANY address. Then it is unclear which transport we should use.
For stream sockets, there will be a new socket for each connection, and
transport can be decided at that time. For datagram sockets, I am not sure
how to handle that.
For VMCI, can the same transport be used for receiving both from the host
and from the guest?

For virtio, the h2g and g2h transports are different, so we have to choose
one of them. My original thought is to wait until the first packet arrives.

Another idea is that we always bind to host addr and use h2g transport
because I think that might be more common. If a listener wants to recv
packets from the host, then it should bind to the guest addr instead of
CID_ANY.
Any other suggestions?
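
For illustration, the bind choices above look like this from userspace (a
sketch; the port number is made up):

#include <sys/socket.h>
#include <linux/vm_sockets.h>

static int bind_dgram_listener(unsigned int cid)
{
	int fd = socket(AF_VSOCK, SOCK_DGRAM, 0);
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		/* VMADDR_CID_HOST pins the h2g side (receive from guests);
		 * the local guest CID pins the g2h side (receive from the
		 * host); VMADDR_CID_ANY leaves the transport ambiguous.
		 */
		.svm_cid    = cid,
		.svm_port   = 1234,	/* made up */
	};

	bind(fd, (struct sockaddr *)&addr, sizeof(addr));
	return fd;
}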

> Also, as mentioned above the vSocket datagram needs to be bound to a port in
> the VMCI transport before we can receive any datagrams on that port. This
> means that vmci_transport_recv_dgram_cb won’t be called unless it is already
> associated with a socket as the transport, and therefore we can’t delay the
> transport assignment to that point.

Got it. Thanks. Please see the above replies.

>
> > Signed-off-by: Jiang Wang 
> > ---
> > This patch is not tested. I don't have a VMware testing
> > environment. Could someone help me to test it?
> >
> > include/net/af_vsock.h |  2 --
> > net/vmw_vsock/af_vsock.c   | 63 +-
> > net/vmw_vsock/vmci_transport.c | 20 +-
> > 3 files changed, 45 insertions(+), 40 deletions(-)
> >
> > diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
> > index b1c717286993..aba241e0d202 100644
> > --- a/include/net/af_vsock.h
> > +++ b/include/net/af_vsock.h
> > @@ -96,8 +96,6 @@ struct vsock_transport_send_notify_data {
> > #define VSOCK_TRANSPORT_F_H2G 0x0001
> > /* Transport provides guest->host communication */
> > #define VSOCK_TRANSPORT_F_G2H 0x0002
> > -/* Transport provides DGRAM communication */
> > -#define VSOCK_TRANSPORT_F_DGRAM  0x0004
> > /* Transport provides local (loopback) communication */
> > #define VSOCK_TRANSPORT_F_LOCAL   0x0008
> >
> > diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
> > index 92a72f0e0d94..7739ab2521a1 100644
> > --- a/net/vmw_vsock/af_vsock.c
> > +++ b/net/vmw_vsock/af_vsock.c
> > @@ -449,8 +449,6 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
> >
> >   switch (sk->sk_type) {
> >   case SOCK_DGRAM:
> > - new_transport = transport_dgram;
> > - break;
> >   case SOCK_STREAM:
> >   if (vsock_use_local_transport(remote_cid))
> >   new_transport = transport_local;
> > @@ -1096,7 +1094,6 @@ static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
> >   struct sock *sk;
> >   struct vsock_sock *vsk;
> >   struct sockaddr_vm *remote_addr;
> > - const struct vsock_transport *transport;
> >
> >   if 

Re: [RFC] vsock: add multiple transports support for dgram

2021-04-07 Thread Jorgen Hansen

> On 6 Apr 2021, at 20:31, Jiang Wang  wrote:
> 
> From: "jiang.wang" 
> 
> Currently, only VMCI supports dgram sockets. To support the
> nested VM use case, this patch removes transport_dgram and
> uses transport_g2h and transport_h2g for dgram too.

Could you provide some background for introducing this change - are you
looking at introducing datagrams for a different transport? VMCI datagrams
already support the nested use case, but if we need to support multiple datagram
transports we need to rework how we administer port assignment for datagrams.
One specific issue is that the vmci transport won’t receive any datagrams for a
port unless the datagram socket has already been assigned the vmci transport
and the port bound to the underlying VMCI device (see below for more details).


> The transport is assigned when sending every packet and
> receiving every packet on dgram sockets.

Is the intent that the same datagram socket can be used for sending packets
both in the host-to-guest and the guest-to-host directions?

Also, as mentioned above the vSocket datagram needs to be bound to a port in
the VMCI transport before we can receive any datagrams on that port. This
means that vmci_transport_recv_dgram_cb won’t be called unless it is already
associated with a socket as the transport, and therefore we can’t delay the
transport assignment to that point.
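
For reference, that binding step boils down to creating a datagram handle on
the VMCI device (a simplified sketch of what the vmci transport does on
dgram bind; flag handling elided):

#include <linux/vmw_vmci_defs.h>
#include <linux/vmw_vmci_api.h>

/* The socket's port doubles as the VMCI resource id, and the receive
 * callback is registered here: until this handle exists,
 * vmci_transport_recv_dgram_cb() cannot fire for the port.
 */
static int dgram_bind_sketch(u32 port, vmci_datagram_recv_cb recv_cb,
			     void *sk, struct vmci_handle *out_handle)
{
	return vmci_datagram_create_handle(port, 0 /* flags, simplified */,
					   recv_cb, sk, out_handle);
}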


> Signed-off-by: Jiang Wang 
> ---
> This patch is not tested. I don't have a VMware testing
> environment. Could someone help me to test it? 
> 
> include/net/af_vsock.h |  2 --
> net/vmw_vsock/af_vsock.c   | 63 +-
> net/vmw_vsock/vmci_transport.c | 20 +-
> 3 files changed, 45 insertions(+), 40 deletions(-)
> 
> diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
> index b1c717286993..aba241e0d202 100644
> --- a/include/net/af_vsock.h
> +++ b/include/net/af_vsock.h
> @@ -96,8 +96,6 @@ struct vsock_transport_send_notify_data {
> #define VSOCK_TRANSPORT_F_H2G 0x0001
> /* Transport provides guest->host communication */
> #define VSOCK_TRANSPORT_F_G2H 0x0002
> -/* Transport provides DGRAM communication */
> -#define VSOCK_TRANSPORT_F_DGRAM  0x0004
> /* Transport provides local (loopback) communication */
> #define VSOCK_TRANSPORT_F_LOCAL   0x0008
> 
> diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
> index 92a72f0e0d94..7739ab2521a1 100644
> --- a/net/vmw_vsock/af_vsock.c
> +++ b/net/vmw_vsock/af_vsock.c
> @@ -449,8 +449,6 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
> 
>   switch (sk->sk_type) {
>   case SOCK_DGRAM:
> - new_transport = transport_dgram;
> - break;
>   case SOCK_STREAM:
>   if (vsock_use_local_transport(remote_cid))
>   new_transport = transport_local;
> @@ -1096,7 +1094,6 @@ static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
>   struct sock *sk;
>   struct vsock_sock *vsk;
>   struct sockaddr_vm *remote_addr;
> - const struct vsock_transport *transport;
> 
>   if (msg->msg_flags & MSG_OOB)
>   return -EOPNOTSUPP;
> @@ -1108,25 +1105,30 @@ static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
> 
>   lock_sock(sk);
> 
> - transport = vsk->transport;
> -
>   err = vsock_auto_bind(vsk);
>   if (err)
>   goto out;
> 
> -
>   /* If the provided message contains an address, use that.  Otherwise
>* fall back on the socket's remote handle (if it has been connected).
>*/
>   if (msg->msg_name &&
>   vsock_addr_cast(msg->msg_name, msg->msg_namelen,
>   &remote_addr) == 0) {
> + vsock_addr_init(&vsk->remote_addr, remote_addr->svm_cid,
> + remote_addr->svm_port);
> +
> + err = vsock_assign_transport(vsk, NULL);
> + if (err) {
> + err = -EINVAL;
> + goto out;
> + }
> +
>   /* Ensure this address is of the right type and is a valid
>* destination.
>*/
> -
>   if (remote_addr->svm_cid == VMADDR_CID_ANY)
> - remote_addr->svm_cid = transport->get_local_cid();
> + remote_addr->svm_cid = vsk->transport->get_local_cid();
> 
>   if (!vsock_addr_bound(remote_addr)) {
>   err = -EINVAL;
> @@ -1136,7 +1138,7 @@ static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
>   remote_addr = &vsk->remote_addr;
> 
>   if (remote_addr->svm_cid == VMADDR_CID_ANY)
> - remote_addr->svm_cid = transport->get_local_cid();
> + remote_addr->svm_cid = vsk->transport->get_local_cid();
> 
>   /* XXX Should connect() or this function ensure remote_addr is
>*