> 1. We are currently using the sockfs interface in the kernel to perform
> our various network tasks, similar to the approach used by the Solaris
> iSCSI initiator and CIFS server.  New connections are handled by
> soaccept and then we use sosendmsg/sorecvmsg to transfer data on the new
> iSCSI connection.  It's been suggested to me that it would be better to
> accept and manage connections in user-space.  Realizing that ultimately
> we need to be able to use the resulting connection in the kernel, I'd
> like to understand more about this.  Is it possible to accept a
> connection in user-space and then operate on that connection in a kernel
> driver?

This has traditionally been done by interposing a STREAMS module between
the transport and the socket head and speaking TPI (the Transport
Provider Interface) -- e.g., you can look at what in.telnetd and
in.rlogind do with telmod/rlmod/logindmux.  There are similar dances with
NFS and rpcmod.  That said, we've been moving away from that model, and
projects like Volo are designed assuming that interposers are unusual
(though there has been talk of providing a hook API to allow similar
functionality to be implemented).
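
For concreteness, here's a minimal sketch of that dance from the daemon's
side, modeled on what in.telnetd does (error handling trimmed; an iSCSI
daemon would push its own module rather than telmod):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <stropts.h>
    #include <stdio.h>

    void
    interpose(int lfd)
    {
            int net;

            if ((net = accept(lfd, NULL, NULL)) < 0) {
                    perror("accept");
                    return;
            }

            /*
             * Interpose the STREAMS module on the new connection;
             * from here on the module sits in the data path while
             * the daemon keeps the socket for control traffic.
             */
            if (ioctl(net, I_PUSH, "telmod") < 0)
                    perror("I_PUSH");
    }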

> This would require us to add an associated user-space daemon that we
> don't currently have, but I'm not opposed to that if it's the correct
> way to handle things.  If this is a viable approach, what are the
> advantages?

An interesting set of questions.  Most of the existing cases have come at
this from the other direction: they've had a userland daemon and wanted
to speed it up, so the performance-critical paths got moved into the
kernel (though NFS client support is all in the kernel).  The main
advantages I see with userland are (a) visibility with traditional tools
(e.g., netstat) and (b) reducing the odds that a bug will lead to a
significant security breach.
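
As to whether a connection accepted in userland can be handed to a kernel
driver at all: a common pattern is for the daemon to accept() as usual
and pass the file descriptor down with an ioctl; the driver translates
the descriptor and takes its own hold on the underlying socket.  A rough
sketch of the kernel half -- the driver and ioctl names are invented for
illustration, but getf()/releasef() and VN_HOLD() are the real
interfaces:

    #include <sys/types.h>
    #include <sys/file.h>
    #include <sys/vnode.h>
    #include <sys/cred.h>
    #include <sys/errno.h>

    #define MYDRV_ADOPT     1       /* illustrative ioctl command */

    static int
    mydrv_ioctl(dev_t dev, int cmd, intptr_t arg, int mode,
        cred_t *crp, int *rvp)
    {
            file_t  *fp;
            vnode_t *vp;

            if (cmd != MYDRV_ADOPT)
                    return (EINVAL);

            /* Translate the daemon's descriptor into a file_t. */
            if ((fp = getf((int)arg)) == NULL)
                    return (EBADF);

            /*
             * Take our own hold on the sockfs vnode so the socket
             * survives releasef() and a later close() in the daemon;
             * stash vp in per-connection state, recover the sonode,
             * and drive it with sosendmsg()/sorecvmsg() as before.
             */
            vp = fp->f_vnode;
            VN_HOLD(vp);
            releasef((int)arg);

            return (0);
    }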

--
meem