> -----Original Message-----
> From: Parthasarathy Bhuvaragan
> Sent: Friday, January 27, 2017 3:02 AM
> To: Jon Maloy <jon.ma...@ericsson.com>; tipc-
> discuss...@lists.sourceforge.net; Ying Xue <ying....@windriver.com>
> Cc: erik.hu...@gmail.com
> Subject: Re: [PATCH net-next v2 2/2] tipc: allow rdm/dgram socketpairs
> 
> On 01/26/2017 03:45 PM, Jon Maloy wrote:
> > I think you could do even better. Why not set socket state to
> > TIPC_ESTABLISHED, and along with a couple of other tweaks you will have a
> > full-fledged connection, with flow control and peer crash detection?
> But that's what we do by calling tipc_sk_finish_conn().

Ok, I missed that. Then it is ok, as long as the hard-coded zero is fixed.

///jon
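
For reference, a minimal sketch of what the fix might look like, assuming the
onode value already computed in the patch below is simply reused in place of
the hard-coded zero (untested, illustration only, not the agreed change):

    /* Hypothetical follow-up (untested): pass the own-node address as the
     * peer node instead of the hard-coded 0, so the connection state holds
     * a fully qualified peer identity.
     */
    u32 onode = tipc_own_addr(sock_net(sock1->sk));

    tipc_sk_finish_conn(tsk1, tsk2->portid, onode);
    tipc_sk_finish_conn(tsk2, tsk1->portid, onode);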

> This patch is not correct as it stands; I need to pass the onode as the
> peer_node in that API instead of 0, and that should do all of the above.
> Or were you thinking of something else?


> /Partha
> >
> > ///jon
> >
> >
> >> -----Original Message-----
> >> From: Parthasarathy Bhuvaragan
> >> Sent: Thursday, 26 January, 2017 02:47
> >> To: tipc-discussion@lists.sourceforge.net; Jon Maloy
> >> <jon.ma...@ericsson.com>; Ying Xue <ying....@windriver.com>
> >> Cc: erik.hu...@gmail.com
> >> Subject: [PATCH net-next v2 2/2] tipc: allow rdm/dgram socketpairs
> >>
> >> From: Erik Hugne <erik.hu...@gmail.com>
> >>
> >> For socketpairs using connectionless transport, we cache the
> >> respective node-local TIPC portid in the socket's private data, for
> >> use in subsequent calls to send(). (A userspace usage sketch follows
> >> the quoted patch below.)
> >>
> >> Signed-off-by: Erik Hugne <erik.hu...@gmail.com>
> >> Signed-off-by: Parthasarathy Bhuvaragan
> >> <parthasarathy.bhuvara...@ericsson.com>
> >>
> >> ---
> >> v2: node is set to own_addr() instead of 0.
> >> ---
> >>  net/tipc/socket.c | 16 ++++++++++++++--
> >>  1 file changed, 14 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/net/tipc/socket.c b/net/tipc/socket.c
> >> index eafc9569e679..199e82307491 100644
> >> --- a/net/tipc/socket.c
> >> +++ b/net/tipc/socket.c
> >> @@ -2503,6 +2503,18 @@ static int tipc_socketpair(struct socket *sock1, struct socket *sock2)
> >>  {
> >>       struct tipc_sock *tsk2 = tipc_sk(sock2->sk);
> >>       struct tipc_sock *tsk1 = tipc_sk(sock1->sk);
> >> +     u32 onode = tipc_own_addr(sock_net(sock1->sk));
> >> +
> >> +     tsk1->peer.family = AF_TIPC;
> >> +     tsk1->peer.addrtype = TIPC_ADDR_ID;
> >> +     tsk1->peer.scope = TIPC_NODE_SCOPE;
> >> +     tsk1->peer.addr.id.ref = tsk2->portid;
> >> +     tsk1->peer.addr.id.node = onode;
> >> +     tsk2->peer.family = AF_TIPC;
> >> +     tsk2->peer.addrtype = TIPC_ADDR_ID;
> >> +     tsk2->peer.scope = TIPC_NODE_SCOPE;
> >> +     tsk2->peer.addr.id.ref = tsk1->portid;
> >> +     tsk2->peer.addr.id.node = onode;
> >>
> >>       tipc_sk_finish_conn(tsk1, tsk2->portid, 0);
> >>       tipc_sk_finish_conn(tsk2, tsk1->portid, 0);
> >> @@ -2517,7 +2529,7 @@ static const struct proto_ops msg_ops = {
> >>       .release        = tipc_release,
> >>       .bind           = tipc_bind,
> >>       .connect        = tipc_connect,
> >> -     .socketpair     = sock_no_socketpair,
> >> +     .socketpair     = tipc_socketpair,
> >>       .accept         = sock_no_accept,
> >>       .getname        = tipc_getname,
> >>       .poll           = tipc_poll,
> >> @@ -2559,7 +2571,7 @@ static const struct proto_ops stream_ops = {
> >>       .release        = tipc_release,
> >>       .bind           = tipc_bind,
> >>       .connect        = tipc_connect,
> >> -     .socketpair     = sock_no_socketpair,
> >> +     .socketpair     = tipc_socketpair,
> >>       .accept         = tipc_accept,
> >>       .getname        = tipc_getname,
> >>       .poll           = tipc_poll,
> >> --
> >> 2.1.4
> >
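
The commit message above notes that the peer's portid is cached so that later
send() calls need no destination address. A minimal userspace sketch of how
such a socketpair might be exercised, assuming a kernel with this patch
applied (illustration only, not part of the patch):

    /* Illustration only: exercise an AF_TIPC SOCK_RDM socketpair. */
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
            int sv[2];
            char buf[16] = {0};

            if (socketpair(AF_TIPC, SOCK_RDM, 0, sv)) {
                    perror("socketpair");
                    return 1;
            }

            /* No destination needed: the peer portid was cached. */
            if (send(sv[0], "ping", 4, 0) < 0)
                    perror("send");
            if (recv(sv[1], buf, sizeof(buf), 0) < 0)
                    perror("recv");
            else
                    printf("received: %s\n", buf);

            return 0;
    }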
