Spencer Shepler wrote:
<SNIP>
> Today, the NFS client and server, in support of RDMA transports
> like Infiniband, will start with TCP connections and then
> determine if RDMA is available on the interface used for
> the connection. Most of this work is done at user-level
> with a smaller set of code in the kernel for the final
> setup. It would be helpful to enable the NFS client and
> server to do this transition completely within the kernel.
> This is a nice to have; not a requirement.
It might be possible to do; I will look into it. :)
> The kernel RPC interfaces use the streams timer mechanism to
> timeout and close idle connections; again, a nice to have
> but not a hard requirement.
So far there are no plans for adding any timer mechanism to the
interface itself; the consumer would have to rely on timeout(9F) for
that functionality.
> I should also mention that the
> NFS server changes the receive buffer size/window size to
> stop-down the client when it is not receiving data as quickly
> as it is sending requests. Seems like that will be covered
> with what you propose.
Yes, you will be able to modify the buffer size via ksock_setsockopt().
> Finally, what additional thoughts do you have about the
> event notification mechanism? Will it deliver multiple
> events simultaneously for a particular socket?
That is the current design; there is no mechanism in the interface that
synchronizes the events. However, at least for TCP, I suppose those
scenarios would be uncommon, since squeues should ensure synchronization.
> Or will
> it wait for one event delivery to be complete before
> delivering the next? Would it be possible to chain or provide
> a list of events?
It would be possible, but a consumer can get that behavior by adding
some additional logic to its callback functions, so I do not think the
added complexity is warranted.
Thank you very much,
Anders
_______________________________________________
networking-discuss mailing list
[email protected]