Michael S. Tsirkin wrote:
- Do we expect ULPs to let the CMA manage the QPs, or do it themselves?

The CMA manages the QP states only if the user calls rdma_create_qp(). If this call is not made, the user must perform QP state transitions themselves. In general, I would expect most users to call rdma_create_qp(). The option is there mainly to support userspace QPs.
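
For example, here is a minimal userspace sketch using the
librdmacm/libibverbs calls (the 'id', 'pd', and 'cq' setup and all
error handling are assumed):

    /* Let the CMA manage the QP: create it through rdma_create_qp().
     * 'id' is an rdma_cm_id whose route has already been resolved. */
    struct ibv_qp_init_attr attr = {
        .send_cq = cq,
        .recv_cq = cq,
        .qp_type = IBV_QPT_RC,
        .cap     = { .max_send_wr = 16, .max_recv_wr = 16,
                     .max_send_sge = 1, .max_recv_sge = 1 },
    };

    if (rdma_create_qp(id, pd, &attr))
        perror("rdma_create_qp");

    /* From here the CMA moves id->qp through INIT/RTR/RTS as
     * connection events occur; no manual ibv_modify_qp() calls are
     * needed. */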

- When the CMA manages QP state, there doesn't seem to be an option
  to post receive WQEs while the QP is in the INIT state.
  This is required at least for SDP, if SDP is to use the CMA.

The QP is in the INIT state after rdma_create_qp() is called. Receives may be posted at that time.
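
Continuing the sketch above (and assuming a buffer 'buf' of BUF_SIZE
bytes was registered as 'mr' against the same PD), a receive can be
posted as soon as rdma_create_qp() returns, before calling
rdma_connect():

    struct ibv_sge sge = {
        .addr   = (uintptr_t) buf,
        .length = BUF_SIZE,
        .lkey   = mr->lkey,
    };
    struct ibv_recv_wr wr = { .wr_id = 1, .sg_list = &sge, .num_sge = 1 };
    struct ibv_recv_wr *bad_wr;

    /* The QP is in the INIT state here; posting receives is legal. */
    if (ibv_post_recv(id->qp, &wr, &bad_wr))
        fprintf(stderr, "ibv_post_recv failed\n");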

- The CMA does not seem to do anything with the path static rate it gets
  from the SA. Am I missing the place where this is done?

What would you want it to do with it? It's possible that what you're looking for is handled by the IB CM.

- Any chance IPv6 will get supported soon?

Only if someone submits a patch for it. I don't see myself having time to do this for at least a few months.

- backlog parameter
        - Most of the code handling backlog seems to be in the ucma -
          shouldn't this be generic to the CMA?

Backlog doesn't make much sense when the CMA is used over the IB CM, since it's a direct callback model. I looked at adding backlog as a parameter to the IB CM, but couldn't come up with a decent implementation for what to do with it. The uCMA is the only location where any queuing of requests is actually done.

        - It seems that in the ucma, backlog is checked when a connection
          request arrives. However, this is not how TCP handles backlog,
          so socket apps being ported to the CMA might hit a problem.

Yes - there are differences between what backlog can mean on sockets versus an RDMA interface. We can't create the connection for the user on an RDMA interface, so the best that I could come up with is to queue requests.
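
Purely to illustrate what "queue requests" means here - this is a
hypothetical sketch with made-up names, not the actual ucma code:

    #include <linux/list.h>
    #include <linux/errno.h>

    /* Passive side: queue incoming connect requests, rejecting new
     * ones once the listener's backlog is exceeded. */
    struct conn_req {
        struct list_head entry;
        /* ... request data ... */
    };

    struct listen_ctx {
        int backlog;                    /* from the listen call */
        int pending;                    /* queued, not yet accepted */
        struct list_head queue;
    };

    static int queue_conn_req(struct listen_ctx *ctx, struct conn_req *req)
    {
        if (ctx->pending >= ctx->backlog)
            return -ECONNREFUSED;       /* backlog full - reject */
        list_add_tail(&req->entry, &ctx->queue);
        ctx->pending++;
        return 0;
    }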
        
        Basically TCP uses a two-stage backlog to defend against SYN attacks.
        When a SYN is received, a small amount of state is kept until the full
        handshake is completed, at which point a full socket is created and
        queued onto the listen socket's accept queue. The second stage uses the
        listen() backlog parameter to manage the accept queue. The first-stage
        queue size is managed using a sysctl (net.ipv4.tcp_max_syn_backlog),
        which on a lot of systems defaults to 1024.

        So I think ideally the CMA would do the same.
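
To make sure we're describing the same thing - the part an application
controls is only the second-stage (accept queue) backlog; a minimal
plain-sockets sketch:

    #include <sys/socket.h>
    #include <netinet/in.h>

    /* Stage two: listen()'s backlog bounds the accept queue.  Stage
     * one (the SYN queue) is sized by net.ipv4.tcp_max_syn_backlog,
     * not by anything the application passes in. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(5000),      /* .sin_addr defaults to ANY */
    };

    bind(fd, (struct sockaddr *) &addr, sizeof addr);
    listen(fd, 128);    /* completed connections awaiting accept() */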

I think that I need a specific example (i.e. a patch) to see how you would treat backlog differently.

- Sean