This is a quick outline of the changes required to
make ib_verbs.h transport neutral. Some of the changes
were actually required anyway to fully support IB 1.2.

Most verbs are nominally unchanged. The differences are
in the exact data formats and in some semantic details.

These changes will require widespread, but minor, updates to a lot
of existing code. No structural or design changes are required.


structs:
typically "struct ib_xyz" is transformed as follows:

        struct ib_xyz {
                /* Only IB specific fields remain */
                /* In some cases fields have been split, because
             * iWARP allows two things to vary that IB had
             * locked together. SGE limits are the primary
             * example of this. iWARP can have different
                 * limits on SGE size for each type of message
                 */
        };

        struct iwarp_xyz {
                /* equivalent iWARP specific fields */
        };

        struct rdma_xyz {
                /* Transport neutral fields. Typically
                 * a subset of what was in struct ib_xyz before
                 */
                union {
                        struct ib_xyz ib;
                        struct iwarp_xyz iwarp;
                } xpt;
        };
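
As a usage sketch, a consumer might dispatch on the transport type
roughly as follows (names are taken from the rdma_device_attr and
rdma_query_device definitions proposed in the diff below; this is
illustrative, not a definitive consumer API):

        struct rdma_device_attr attr;

        if (!rdma_query_device(device, &attr)) {
                /* Transport neutral fields are always valid */
                printk("max_qp %u\n", attr.max_qp);

                /* Transport specific fields live in the xpt union */
                if (attr.rdma_type == RDMA_IB)
                        printk("ack delay %u\n",
                               attr.xpt.ib.local_ca_ack_delay);
        }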

enums:
Transport neutral values come first and start with RDMA_. They are
followed by IB specific values starting with IB_, and finally by
iWARP specific values starting with IWARP_.

Fields are considered 'transport neutral' only if there is a
near total match on their semantics. This results in some
cases of seemingly redundant error codes or event statuses,
but the intent is to match what is documented in existing
transport specific verb specifications, and not to create
questions about border cases. Creating transport neutral
errors and states is left to the DAPL/IT-API layer.

Some specific issues that need discussion, for which I have not yet
proposed a solution:

        a) What is an iWARP "port"? It probably matches an IT-API
           spigot, which probably matches an Ethernet device.
           But what is the preferred representation? The primary
           IP address? The device name? A pointer to a kernel
           structure?
        b) Where should page vs. block mode be set?
        c) Support for shared memory regions. Should we simply
           match RNIC-PI here and make the consumer and verbs
           provider responsible for this? Or should there be
           device independent support?
        d) Connection management, and any polymorphism it adds to
           the interface, will be discussed later.
        e) iWARP does not have address handles, and "UD" service
           is provided via UDP, which does not typically use QPs
           and in particular does not use the same CQs as RC.
           This must be resolved.

I am assuming that returning a "not supported" error for irrelevant
verbs is an acceptable burden for all providers.
iWARP providers do not support "query_gid", for example.
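
As a minimal sketch, an iWARP provider might stub out an IB-only
verb like this (the provider name and errno choice here are
illustrative):

        static int example_iwarp_query_gid(struct rdma_device *device,
                                           u8 port_num, int index,
                                           union ib_gid *gid)
        {
                return -ENOSYS; /* not supported on iWARP */
        }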

Work request formats to support Bind, Local Invalidate and Fast
Memory Register have been added. IB vendors would have to reject
them if they do not support them as normal work requests.
An mr_allocate verb is added to create an empty memory region that
can be fast-registered.
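
For illustration, posting a fast-register work request against such a
region might look like the following sketch (opcode and field names are
taken from the proposed rdma_send_wr below; the cookie, page list, and
error handling are placeholders):

        struct rdma_send_wr wr = { 0 }, *bad_wr;

        wr.wr_id = my_cookie;           /* hypothetical consumer cookie */
        wr.opcode = RDMA_WR_FMR;
        wr.send_flags = RDMA_SEND_SIGNALED;
        wr.wr.fmr.mr = mr;              /* empty MR from mr_allocate */
        wr.wr.fmr.lkey = mr->lkey;
        wr.wr.fmr.rkey = mr->rkey;
        wr.wr.fmr.phys_buf_list = pbl;  /* caller-built physical page list */
        wr.wr.fmr.pble_size = PAGE_SIZE;
        wr.wr.fmr.nbufs = npages;
        wr.wr.fmr.addr_offset = 0;

        if (rdma_post_send(qp, &wr, &bad_wr))
                /* handle the immediate failure */;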

A new verb is proposed to create a Type 2 (narrow) Memory Window.
This could also be done by adding a type argument to the existing
create memory window verb.
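
A corresponding consumer sketch, using the alloc_mw2 provider method
added in the diff below (called through the device ops here only
because no consumer wrapper is proposed yet):

        struct rdma_mw *mw = pd->device->alloc_mw2(pd);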

New verbs are added for physical registration with fixed page sizes
to match the RDMAC verbs. Cross support by interpretation of the other
format would have to be discussed; it probably should be encouraged
except when it requires creation of intermediate data structures.
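
A registration sketch using the fixed-page-size form (NPAGES and the
page list contents are placeholders; the declaration appears near the
end of the diff below):

        u64 pbl[NPAGES];        /* filled with physical page addresses */
        u64 iova = 0;
        struct rdma_mr *mr;

        mr = rdma_reg_phys_mr_fixed(pd, pbl, NPAGES, PAGE_SIZE,
                                    RDMA_ACCESS_LOCAL_WRITE |
                                    RDMA_ACCESS_REMOTE_READ,
                                    &iova);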

More text is still needed to deal with semantics associated with
the interfaces:
        Different meaning of RDMA Write completion for iWARP.

        IN/OUT conventions on LKEY/RKEY to deal with "consumer
        owned portion"

        Error conditions that may be reported differently, 
        such as when you supply a bad remote address.

These are mostly issues for the applications themselves; they don't
really impact the verbs and DAPL/IT-API code much.

Further postings will deal with remaining structural differences
between the gen2 verbs and RNIC-PI and RDMAC verbs.

There will also be follow up postings for each resource class
(rdma device, PD, MR, MW, QP, CQ, etc.) discussing what was
classified as transport neutral versus transport dependent
and why.

And now the diff file in-line
--------------------------
Index: ib_verbs.h
===================================================================
--- ib_verbs.h  (revision 2744)
+++ ib_verbs.h  (working copy)
@@ -61,24 +61,44 @@
        IB_NODE_ROUTER
 };
 
-enum ib_device_cap_flags {
-       IB_DEVICE_RESIZE_MAX_WR         = 1,
-       IB_DEVICE_BAD_PKEY_CNTR         = (1<<1),
-       IB_DEVICE_BAD_QKEY_CNTR         = (1<<2),
-       IB_DEVICE_RAW_MULTI             = (1<<3),
-       IB_DEVICE_AUTO_PATH_MIG         = (1<<4),
-       IB_DEVICE_CHANGE_PHY_PORT       = (1<<5),
-       IB_DEVICE_UD_AV_PORT_ENFORCE    = (1<<6),
-       IB_DEVICE_CURR_QP_STATE_MOD     = (1<<7),
-       IB_DEVICE_SHUTDOWN_PORT         = (1<<8),
-       IB_DEVICE_INIT_TYPE             = (1<<9),
-       IB_DEVICE_PORT_ACTIVE_EVENT     = (1<<10),
-       IB_DEVICE_SYS_IMAGE_GUID        = (1<<11),
-       IB_DEVICE_RC_RNR_NAK_GEN        = (1<<12),
-       IB_DEVICE_SRQ_RESIZE            = (1<<13),
-       IB_DEVICE_N_NOTIFY_CQ           = (1<<14),
+enum rdma_device_cap_flags {
+       RDMA_DEVICE_RESIZE_MAX_WR       = 1,
+       RDMA_DEVICE_SRQ_RESIZE          = (1<<1),
+       RDMA_DEVICE_QP_RESIZE           = (1<<2),
+       RDMA_DEVICE_EXT_SRQ             = (1<<3),
+       RDMA_DEVICE_EXT_BMM             = (1<<4),
+       RDMA_DEVICE_EXT_ZBVA            = (1<<5),
+       RDMA_DEVICE_EXT_LIF             = (1<<6),
+
+       /* IB specific capabilities */
+       IB_DEVICE_BAD_PKEY_CNTR         = (1<<7),
+       IB_DEVICE_BAD_QKEY_CNTR         = (1<<8),
+       IB_DEVICE_RAW_MULTI             = (1<<9),
+       IB_DEVICE_AUTO_PATH_MIG         = (1<<10),
+       IB_DEVICE_CHANGE_PHY_PORT       = (1<<11),
+       IB_DEVICE_UD_AV_PORT_ENFORCE    = (1<<12),
+       IB_DEVICE_CURR_QP_STATE_MOD     = (1<<13),
+       IB_DEVICE_SHUTDOWN_PORT         = (1<<14),
+       IB_DEVICE_INIT_TYPE             = (1<<15),
+       IB_DEVICE_PORT_ACTIVE_EVENT     = (1<<16),
+       IB_DEVICE_SYS_IMAGE_GUID        = (1<<17),
+       IB_DEVICE_RC_RNR_NAK_GEN        = (1<<18),
+       IB_DEVICE_N_NOTIFY_CQ           = (1<<19),
+
+       /* iWARP specific capabilities */
+       IWARP_DEVICE_CQ_OVERFLOW_DETECTED = (1<<20),
+       IWARP_DEVICE_MOD_IRD = (1<<21),
+       IWARP_DEVICE_INC_ORD = (1<<22),
+       IWARP_DEVICE_SRQ_DEQUEUE_ARRIVAL = (1<<23),
+       IWARP_DEVICE_TCP_SUPPORTED = (1<<24),
+       IWARP_DEVICE_SCTP_SUPPORTED = (1<<25),
+       IWARP_DEVICE_IETF_PERMISSIVE = (1<<26),
+       IWARP_DEVICE_PREFER_MARKERS = (1<<27),
+       IWARP_DEVICE_PREFER_CRC = (1<<28),
+       IWARP_DEVICE_IS_IETF = (1<<29)
 };
 
+
 enum ib_atomic_cap {
        IB_ATOMIC_NONE,
        IB_ATOMIC_HCA,
@@ -86,23 +106,9 @@
 };
 
 struct ib_device_attr {
-       u64                     fw_ver;
        u64                     node_guid;
        u64                     sys_image_guid;
-       u64                     max_mr_size;
-       u64                     page_size_cap;
-       u32                     vendor_id;
-       u32                     vendor_part_id;
-       u32                     hw_ver;
-       int                     max_qp;
-       int                     max_qp_wr;
-       int                     device_cap_flags;
-       int                     max_sge;
        int                     max_sge_rd;
-       int                     max_cq;
-       int                     max_cqe;
-       int                     max_mr;
-       int                     max_pd;
        int                     max_qp_rd_atom;
        int                     max_ee_rd_atom;
        int                     max_res_rd_atom;
@@ -120,13 +126,53 @@
        int                     max_ah;
        int                     max_fmr;
        int                     max_map_per_fmr;
-       int                     max_srq;
-       int                     max_srq_wr;
-       int                     max_srq_sge;
        u16                     max_pkeys;
        u8                      local_ca_ack_delay;
 };
 
+struct iwarp_device_attr {
+       int     empty;
+};
+
+enum rdma_type {
+       RDMA_IB,
+       RDMA_IWARP
+};
+
+struct rdma_device_attr {
+       enum rdma_type          rdma_type;
+       u64                     fw_ver;
+       u64                     max_mr_size;
+       u64                     page_size_cap;
+       u32                     vendor_id;
+       u32                     vendor_part_id;
+       u32                     hw_ver;
+       unsigned                max_qp;
+       unsigned                max_qp_wr;
+       enum rdma_device_cap_flags device_cap_flags;
+       unsigned                max_sge;
+       unsigned                max_sge_write;
+       unsigned                max_sge_read;
+       unsigned                max_sge_recv;
+       unsigned                max_cq;
+       unsigned                max_cqe;
+       unsigned                max_mr;
+       unsigned                max_pd;
+       unsigned                max_srq;
+       unsigned                max_srq_wr;
+       unsigned                max_srq_sge;
+       u32                     max_phys_buf_entries;
+       u32                     max_ird;
+       u32                     max_ird_per_qp;
+       u32                     max_ord;
+       u32                     max_ord_per_qp;
+       union {
+               struct ib_device_attr ib;
+               struct iwarp_device_attr iwarp;
+       } xpt;
+};
+
+
 enum ib_mtu {
        IB_MTU_256  = 1,
        IB_MTU_512  = 2,
@@ -221,11 +267,21 @@
        u8                      phys_state;
 };
 
-enum ib_device_modify_flags {
+struct iwarp_port_attr {
+       int     tbd;
+};
+
+union rdma_port_attr {
+       struct ib_port_attr ib;
+       struct iwarp_port_attr iwarp;
+};
+
+
+enum rdma_device_modify_flags {
        IB_DEVICE_MODIFY_SYS_IMAGE_GUID = 1
 };
 
-struct ib_device_modify {
+struct rdma_device_modify {
        u64     sys_image_guid;
 };
 
@@ -241,7 +297,14 @@
        u8      init_type;
 };
 
-enum ib_event_type {
+enum rdma_event_type {
+       RDMA_EVENT_QP_ERR_FROM_SRQ,
+       RDMA_EVENT_SRQ_LIMIT_REACHED,
+       RDMA_EVENT_SRQ_CATASTROPHIC,
+       RDMA_EVENT_QP_RQ_LIMIT_REACHED,
+       RDMA_EVENT_LAST_WQE_REACHED,
+
+       /* IB Specific */
        IB_EVENT_CQ_ERR,
        IB_EVENT_QP_FATAL,
        IB_EVENT_QP_REQ_ERR,
@@ -255,22 +318,60 @@
        IB_EVENT_PORT_ERR,
        IB_EVENT_LID_CHANGE,
        IB_EVENT_PKEY_CHANGE,
-       IB_EVENT_SM_CHANGE
+       IB_EVENT_SM_CHANGE,
+
+       /* iWARP Specific */
+       IWARP_EVENT_LLP_CLOSE_COMPLETE,
+       IWARP_TERMINATE_MESSAGE_RECEIVED,
+       IWARP_LLP_CONNECTION_RESET,
+       IWARP_LLP_CONNECTION_LOST,
+       IWARP_LLP_INTEGRITY_ERROR_SIZE,
+       IWARP_LLP_INTEGRITY_ERROR_CRC,
+       IWARP_LLP_INTEGRITY_ERROR_BAD_FPDU,
+       IWARP_EVENT_ROE_DDP_VERSION,
+       IWARP_EVENT_ROE_RDMA_VERSION,
+       IWARP_EVENT_ROE_OPCODE,
+       IWARP_EVENT_ROE_DDP_QN,
+       IWARP_EVENT_ROE_RDMA_READ_DISABLED,
+       IWARP_EVENT_ROE_RDMA_WRITE_DISABLED,
+       IWARP_EVENT_ROE_RDMA_READ_INVALID,
+       IWARP_EVENT_ROE_NO_L_BIT,
+       IWARP_EVENT_PE_STAG_QP_MISMATCH,
+       IWARP_EVENT_PE_BOUNDS_VIOLATION,
+       IWARP_EVENT_PE_ACCESS_VIOLATION,
+       IWARP_EVENT_PE_INVALID_PD,
+       IWARP_EVENT_PE_WRAP_ERROR,
+       IWARP_EVENT_BAD_CLOSE,
+       IWARP_EVENT_BAD_LLP_CLOSE,
+       IWARP_EVENT_RQ_PE_INVALID_MSN,
+       IWARP_EVENT_RQ_PE_MSN_GAP,
+       IWARP_EVENT_IRQ_PE_TOOMANY_RDMA_READS,
+       IWARP_EVENT_IRQ_PE_MSN_GAP,
+       IWARP_EVENT_IRQ_PE_INVALID_MSN,
+       IWARP_EVENT_IRQ_PE_INVALID_STAG,
+       IWARP_EVENT_IRQ_PE_BOUNDS_VIOLATION,
+       IWARP_EVENT_IRQ_PE_ACCESS_VIOLATION,
+       IWARP_EVENT_IRQ_PE_INVALID_PD,
+       IWARP_EVENT_IRQ_PE_WRAP_ERROR,
+       IWARP_EVENT_CQ_SQ_ERR,
+       IWARP_EVENT_CQ_RQ_ERR,
+       IWARP_EVENT_CQ_OVERFLOW,
+       IWARP_EVENT_CQ_OP_ERR
 };
 
-struct ib_event {
-       struct ib_device        *device;
+struct rdma_event {
+       struct rdma_device      *device;
        union {
-               struct ib_cq    *cq;
-               struct ib_qp    *qp;
+               struct rdma_cq  *cq;
+               struct rdma_qp  *qp;
                u8              port_num;
        } element;
-       enum ib_event_type      event;
+       enum rdma_event_type    event;
 };
 
-struct ib_event_handler {
-       struct ib_device *device;
-       void            (*handler)(struct ib_event_handler *, struct
ib_event *);
+struct rdma_event_handler {
+       struct rdma_device *device;
+       void            (*handler)(struct rdma_event_handler *, struct
rdma_event *);
        struct list_head  list;
 };
 
@@ -316,13 +417,15 @@
        u8                      port_num;
 };
 
-enum ib_wc_status {
-       IB_WC_SUCCESS,
-       IB_WC_LOC_LEN_ERR,
-       IB_WC_LOC_QP_OP_ERR,
+enum rdma_wc_status {
+       RDMA_WC_SUCCESS,
+       RDMA_WC_LOC_LEN_ERR,
+       RDMA_WC_LOC_QP_OP_ERR,
+       RDMA_WC_LOC_PROT_ERR,
+       RDMA_WC_WR_FLUSH_ERR,
+       
+       /* IB Specific */
        IB_WC_LOC_EEC_OP_ERR,
-       IB_WC_LOC_PROT_ERR,
-       IB_WC_WR_FLUSH_ERR,
        IB_WC_MW_BIND_ERR,
        IB_WC_BAD_RESP_ERR,
        IB_WC_LOC_ACCESS_ERR,
@@ -338,21 +441,49 @@
        IB_WC_INV_EEC_STATE_ERR,
        IB_WC_FATAL_ERR,
        IB_WC_RESP_TIMEOUT_ERR,
-       IB_WC_GENERAL_ERR
+       IB_WC_GENERAL_ERR,
+
+       /* iWARP Specific */
+       IWARP_WC_INVALID_STAG_ERR,
+       IWARP_WC_WQE_FORMAT_ERR,
+       IWARP_WC_REMOTE_TERM_ERR,
+       IWARP_WC_INVALID_PD_ERR,
+       IWARP_WC_ADDR_WRAP_ERR,
+       IWARP_WC_ZERO_ORD_ERR,
+       IWARP_WC_QP_NOT_PRIV_ERR,
+       IWARP_WC_STAG_NOT_INVALID_ERR,
+       IWARP_WC_PAGE_SIZE_ERR,
+       IWARP_WC_BLOCK_SIZE_ERR,
+       IWARP_WC_PBL_ENTRY_ERR,
+       IWARP_WC_FBO_ERR,
+       IWARP_WC_FMR_LEN_ERR,
+       IWARP_WC_INV_BIND_MR_ERR,
+       IWARP_WC_INV_BIND_MW_ERR,
+       IWARP_WC_HUGE_PAYLOAD_ERR
 };
 
-enum ib_wc_opcode {
-       IB_WC_SEND,
-       IB_WC_RDMA_WRITE,
-       IB_WC_RDMA_READ,
+enum rdma_wc_opcode {
+       RDMA_WC_SEND,
+       RDMA_WC_RDMA_WRITE,
+       RDMA_WC_RDMA_READ,
+       RDMA_WC_BIND1,
+       RDMA_WC_BIND2,
+       RDMA_WC_FMR,
+       RDMA_WC_LOC_INV,
+       
+       /* IB Specific */
        IB_WC_COMP_SWAP,
        IB_WC_FETCH_ADD,
-       IB_WC_BIND_MW,
-/*
+
+       /* iWARP Specific */
+       IWARP_WC_RDMA_READ_LOC_INV,
+/* Transport neutral again:
+ *
  * Set value of IB_WC_RECV so consumers can test if a completion is a
  * receive by testing (opcode & IB_WC_RECV).
  */
        IB_WC_RECV                      = 1 << 7,
+       /* IB Specific */
        IB_WC_RECV_RDMA_WITH_IMM
 };
 
@@ -361,12 +492,20 @@
        IB_WC_WITH_IMM          = (1<<1)
 };
 
-struct ib_wc {
+struct rdma_wc_common {
        u64                     wr_id;
-       enum ib_wc_status       status;
-       enum ib_wc_opcode       opcode;
+       enum rdma_wc_status     status;
+       enum rdma_wc_opcode     opcode;
        u32                     vendor_err;
        u32                     byte_len;
+};
+
+struct iwarp_wc {
+       struct rdma_wc_common   common;
+};
+
+struct ib_wc { 
+       struct rdma_wc_common   common;
        __be32                  imm_data;
        u32                     qp_num;
        u32                     src_qp;
@@ -378,15 +517,24 @@
        u8                      port_num;       /* valid only for DR
SMPs on switches */
 };
 
-enum ib_cq_notify {
-       IB_CQ_SOLICITED,
-       IB_CQ_NEXT_COMP
+union rdma_wc {
+       struct ib_wc    ib;
+       struct iwarp_wc iwarp;
+       struct rdma_wc_common common;
 };
 
-struct ib_qp_cap {
+
+enum rdma_cq_notify {
+       RDMA_CQ_SOLICITED,
+       RDMA_CQ_NEXT_COMP
+};
+
+struct rdma_qp_cap {
        u32     max_send_wr;
        u32     max_recv_wr;
        u32     max_send_sge;
+       u32     max_write_sge;
+       u32     max_read_sge;
        u32     max_recv_sge;
        u32     max_inline_data;
 };
@@ -396,7 +544,11 @@
        IB_SIGNAL_REQ_WR
 };
 
-enum ib_qp_type {
+enum rdma_qp_type {
+       /* There is no such thing as a 'transport neutral' QP.
+        * Any instantiated QP belongs to an actual concrete transport.
+        */
+       /* IB Specific */
        /*
         * IB_QPT_SMI and IB_QPT_GSI have to be the first two entries
         * here (and in that order) since the MAD layer uses them as
@@ -409,18 +561,21 @@
        IB_QPT_UC,
        IB_QPT_UD,
        IB_QPT_RAW_IPV6,
-       IB_QPT_RAW_ETY
+       IB_QPT_RAW_ETY,
+
+       /* iWARP Specific */
+       IWARP_QPT_RC
 };
 
-struct ib_qp_init_attr {
-       void                  (*event_handler)(struct ib_event *, void
*);
+struct rdma_qp_init_attr {
+       void                  (*event_handler)(struct rdma_event *, void
*);
        void                   *qp_context;
-       struct ib_cq           *send_cq;
-       struct ib_cq           *recv_cq;
-       struct ib_srq          *srq;
-       struct ib_qp_cap        cap;
-       enum ib_sig_type        sq_sig_type;
-       enum ib_qp_type         qp_type;
+       struct rdma_cq         *send_cq;
+       struct rdma_cq         *recv_cq;
+       struct rdma_srq        *srq;
+       struct rdma_qp_cap      cap;
+       enum rdma_sig_type      sq_sig_type;
+       enum rdma_qp_type       qp_type;
        u8                      port_num; /* special QP types only */
 };
 
@@ -483,14 +638,20 @@
        IB_QP_DEST_QPN                  = (1<<20)
 };
 
-enum ib_qp_state {
-       IB_QPS_RESET,
+enum rdma_qp_state {
+       RDMA_QPS_RESET,
+       RDMA_QPS_RTS,
+       RDMA_QPS_ERR,
+
+       /* IB Specific */
        IB_QPS_INIT,
        IB_QPS_RTR,
-       IB_QPS_RTS,
        IB_QPS_SQD,
        IB_QPS_SQE,
-       IB_QPS_ERR
+
+       /* iWARP Specific */
+       IWARP_QPS_TERMINATE,
+       IWARP_QPS_CLOSING
 };
 
 enum ib_mig_state {
@@ -500,16 +661,13 @@
 };
 
 struct ib_qp_attr {
-       enum ib_qp_state        qp_state;
-       enum ib_qp_state        cur_qp_state;
+       enum rdma_qp_state      cur_qp_state;
        enum ib_mtu             path_mtu;
        enum ib_mig_state       path_mig_state;
        u32                     qkey;
        u32                     rq_psn;
        u32                     sq_psn;
        u32                     dest_qp_num;
-       int                     qp_access_flags;
-       struct ib_qp_cap        cap;
        struct ib_ah_attr       ah_attr;
        struct ib_ah_attr       alt_ah_attr;
        u16                     pkey_index;
@@ -527,35 +685,63 @@
        u8                      alt_timeout;
 };
 
-enum ib_wr_opcode {
-       IB_WR_RDMA_WRITE,
+struct iwarp_qp_attr {
+       int empty;
+};
+
+struct rdma_qp_attr {
+       enum rdma_qp_state      qp_state;
+       int                     qp_access_flags;
+       struct rdma_qp_cap      cap;
+       union {
+               struct ib_qp_attr ib;
+               struct iwarp_qp_attr iwarp;
+       } xpt;
+};
+
+enum rdma_wr_opcode {
+       RDMA_WR_RDMA_WRITE,
+       RDMA_WR_SEND,
+       RDMA_WR_SEND_WITH_INV,
+       RDMA_WR_RDMA_READ,
+       RDMA_WR_BIND1,
+       RDMA_WR_BIND2,
+       RDMA_WR_FMR,
+       RDMA_WR_LOC_INV_MR,
+       RDMA_WR_LOC_INV_MW,
+
+       /* IB Specific */
+       IB_WR_SEND_WITH_IMM,
        IB_WR_RDMA_WRITE_WITH_IMM,
-       IB_WR_SEND,
-       IB_WR_SEND_WITH_IMM,
-       IB_WR_RDMA_READ,
        IB_WR_ATOMIC_CMP_AND_SWP,
-       IB_WR_ATOMIC_FETCH_AND_ADD
+       IB_WR_ATOMIC_FETCH_AND_ADD,
+
+       /* iWARP specific */
+       IWARP_WR_RDMA_READ_INV
 };
 
-enum ib_send_flags {
-       IB_SEND_FENCE           = 1,
-       IB_SEND_SIGNALED        = (1<<1),
-       IB_SEND_SOLICITED       = (1<<2),
-       IB_SEND_INLINE          = (1<<3)
+enum rdma_send_flags {
+       RDMA_SEND_FENCE         = 1,
+       RDMA_SEND_SIGNALED      = (1<<1),
+       RDMA_SEND_SOLICITED     = (1<<2),
+       RDMA_SEND_INLINE        = (1<<3),
+
+       /* iWARP Only */
+       IWARP_SEND_READ_FENCE   = (1<<4)
 };
 
-struct ib_sge {
+struct rdma_sge {
        u64     addr;
        u32     length;
-       u32     lkey;
+       u32     lkey; /* can be rkey for RDMA Read on iWARP */
 };
 
-struct ib_send_wr {
-       struct ib_send_wr      *next;
+struct rdma_send_wr {
+       struct rdma_send_wr     *next;
        u64                     wr_id;
-       struct ib_sge          *sg_list;
+       struct rdma_sge         *sg_list;
        int                     num_sge;
-       enum ib_wr_opcode       opcode;
+       enum rdma_wr_opcode     opcode;
        int                     send_flags;
        __be32                  imm_data;
        union {
@@ -579,22 +765,53 @@
                        u16     pkey_index; /* valid for GSI only */
                        u8      port_num;   /* valid for DR SMPs on
switch only */
                } ud;
+               struct {
+                       struct rdma_mr *mr;
+                       struct rdma_mw *mw;
+                       u32     rkey;
+                       u64     vaddr;
+                       u32     len;
+                       enum rdma_mem_attr      mem_attr;
+               } bind;
+               struct {
+                       struct rdma_mr *mr;
+                       u32     lkey;
+                       u32     rkey;
+               } loc_inv_mr;
+               struct {
+                       struct rdma_mw *mw;
+                       u32     rkey;
+               } loc_inv_mw;
+               struct {
+                       struct rdma_mr *mr;
+                       u32     lkey;
+                       u32     rkey;
+                       u64     *phys_buf_list;
+                       u32     pble_size;
+                       u32     nbufs;
+                       u32     addr_offset;
+               } fmr;
        } wr;
 };
 
-struct ib_recv_wr {
-       struct ib_recv_wr      *next;
+struct rdma_recv_wr {
+       struct rdma_recv_wr     *next;
        u64                     wr_id;
-       struct ib_sge          *sg_list;
-       int                     num_sge;
+       struct rdma_sge         *sg_list;
+       unsigned                num_sge;
 };
 
-enum ib_access_flags {
-       IB_ACCESS_LOCAL_WRITE   = 1,
-       IB_ACCESS_REMOTE_WRITE  = (1<<1),
-       IB_ACCESS_REMOTE_READ   = (1<<2),
-       IB_ACCESS_REMOTE_ATOMIC = (1<<3),
-       IB_ACCESS_MW_BIND       = (1<<4)
+enum rdma_access_flags {
+       RDMA_ACCESS_LOCAL_WRITE = 1,
+       RDMA_ACCESS_REMOTE_WRITE = (1<<1),
+       RDMA_ACCESS_REMOTE_READ = (1<<2),
+       RDMA_ACCESS_MW_BIND     = (1<<3),
+
+       /* IB specific */
+       IB_ACCESS_REMOTE_ATOMIC = (1 << 4),
+
+       /* iWARP Specific */
+       IWARP_ACCESS_LOCAL_READ = (1 << 5)
 };
 
 struct ib_phys_buf {
@@ -602,28 +819,28 @@
        u64      size;
 };
 
-struct ib_mr_attr {
-       struct ib_pd    *pd;
+struct rdma_mr_attr {
+       struct rdma_pd  *pd;
        u64             device_virt_addr;
        u64             size;
-       int             mr_access_flags;
+       enum rdma_access_flags mr_access_flags;
        u32             lkey;
        u32             rkey;
 };
 
-enum ib_mr_rereg_flags {
-       IB_MR_REREG_TRANS       = 1,
-       IB_MR_REREG_PD          = (1<<1),
-       IB_MR_REREG_ACCESS      = (1<<2)
+enum rdma_mr_rereg_flags {
+       RDMA_MR_REREG_TRANS     = 1,
+       RDMA_MR_REREG_PD        = (1<<1),
+       RDMA_MR_REREG_ACCESS    = (1<<2)
 };
 
-struct ib_mw_bind {
-       struct ib_mr   *mr;
+struct rdma_mw_bind {
+       struct rdma_mr  *mr;
        u64             wr_id;
        u64             addr;
        u32             length;
-       int             send_flags;
-       int             mw_access_flags;
+       enum rdma_send_flags send_flags;
+       enum rdma_access_flags access_flags;
 };
 
 struct ib_fmr_attr {
@@ -632,8 +849,8 @@
        u8      page_size;
 };
 
-struct ib_ucontext {
-       struct ib_device       *device;
+struct rdma_ucontext {
+       struct rdma_device      *device;
        struct list_head        pd_list;
        struct list_head        mr_list;
        struct list_head        mw_list;
@@ -644,14 +861,14 @@
        spinlock_t              lock;
 };
 
-struct ib_uobject {
+struct rdma_uobject {
        u64                     user_handle;    /* handle given to us by
userspace */
-       struct ib_ucontext     *context;        /* associated user
context */
+       struct rdma_ucontext    *context;       /* associated user
context */
        struct list_head        list;           /* link to context's
list */
        u32                     id;             /* index into kernel idr
*/
 };
 
-struct ib_umem {
+struct rdma_umem {
        unsigned long           user_base;
        unsigned long           virt_base;
        size_t                  length;
@@ -661,14 +878,14 @@
        struct list_head        chunk_list;
 };
 
-struct ib_umem_chunk {
+struct rdma_umem_chunk {
        struct list_head        list;
        int                     nents;
        int                     nmap;
        struct scatterlist      page_list[0];
 };
 
-struct ib_udata {
+struct rdma_udata {
        void __user *inbuf;
        void __user *outbuf;
        size_t       inlen;
@@ -676,79 +893,79 @@
 };
 
 #define IB_UMEM_MAX_PAGE_CHUNK
\
-       ((PAGE_SIZE - offsetof(struct ib_umem_chunk, page_list)) /
\
-        ((void *) &((struct ib_umem_chunk *) 0)->page_list[1] -
\
-         (void *) &((struct ib_umem_chunk *) 0)->page_list[0]))
+       ((PAGE_SIZE - offsetof(struct rdma_umem_chunk, page_list)) /
\
+        ((void *) &((struct rdma_umem_chunk *) 0)->page_list[1] -
\
+         (void *) &((struct rdma_umem_chunk *) 0)->page_list[0]))
 
-struct ib_umem_object {
-       struct ib_uobject       uobject;
-       struct ib_umem          umem;
+struct rdma_umem_object {
+       struct rdma_uobject     uobject;
+       struct rdma_umem        umem;
 };
 
-struct ib_pd {
-       struct ib_device       *device;
-       struct ib_uobject      *uobject;
+struct rdma_pd {
+       struct rdma_device      *device;
+       struct rdma_uobject     *uobject;
        atomic_t                usecnt; /* count all resources */
 };
 
 struct ib_ah {
-       struct ib_device        *device;
-       struct ib_pd            *pd;
-       struct ib_uobject      *uobject;
+       struct rdma_device      *device;
+       struct rdma_pd          *pd;
+       struct rdma_uobject     *uobject;
 };
 
-typedef void (*ib_comp_handler)(struct ib_cq *cq, void *cq_context);
+typedef void (*rdma_comp_handler)(struct rdma_cq *cq, void
*cq_context);
 
-struct ib_cq {
-       struct ib_device       *device;
-       struct ib_uobject      *uobject;
-       ib_comp_handler         comp_handler;
-       void                  (*event_handler)(struct ib_event *, void
*);
+struct rdma_cq {
+       struct rdma_device      *device;
+       struct rdma_uobject     *uobject;
+       rdma_comp_handler       comp_handler;
+       void                  (*event_handler)(struct rdma_event *, void
*);
        void *                  cq_context;
        int                     cqe;
        atomic_t                usecnt; /* count number of work queues
*/
 };
 
-struct ib_srq {
-       struct ib_device        *device;
-       struct ib_uobject       *uobject;
-       struct ib_pd            *pd;
+struct rdma_srq {
+       struct rdma_device      *device;
+       struct rdma_uobject     *uobject;
+       struct rdma_pd          *pd;
        void                    *srq_context;
        atomic_t                usecnt;
 };
 
-struct ib_qp {
-       struct ib_device       *device;
-       struct ib_pd           *pd;
-       struct ib_cq           *send_cq;
-       struct ib_cq           *recv_cq;
-       struct ib_srq          *srq;
-       struct ib_uobject      *uobject;
-       void                  (*event_handler)(struct ib_event *, void
*);
+struct rdma_qp {
+       struct rdma_device     *device;
+       struct rdma_pd         *pd;
+       struct rdma_cq         *send_cq;
+       struct rdma_cq         *recv_cq;
+       struct rdma_srq        *srq;
+       struct rdma_uobject    *uobject;
+       void                   (*event_handler)(struct rdma_event *,
void *);
        void                   *qp_context;
-       u32                     qp_num;
-       enum ib_qp_type         qp_type;
+       u32                    qp_num;
+       enum rdma_qp_type      qp_type;
 };
 
-struct ib_mr {
-       struct ib_device  *device;
-       struct ib_pd      *pd;
-       struct ib_uobject *uobject;
+struct rdma_mr {
+       struct rdma_device      *device;
+       struct rdma_pd          *pd;
+       struct rdma_uobject     *uobject;
        u32                lkey;
        u32                rkey;
        atomic_t           usecnt; /* count number of MWs */
 };
 
-struct ib_mw {
-       struct ib_device        *device;
-       struct ib_pd            *pd;
-       struct ib_uobject       *uobject;
+struct rdma_mw {
+       struct rdma_device      *device;
+       struct rdma_pd          *pd;
+       struct rdma_uobject     *uobject;
        u32                     rkey;
 };
 
 struct ib_fmr {
-       struct ib_device        *device;
-       struct ib_pd            *pd;
+       struct rdma_device      *device;
+       struct rdma_pd          *pd;
        struct list_head        list;
        u32                     lkey;
        u32                     rkey;
@@ -770,19 +987,19 @@
        IB_MAD_RESULT_CONSUMED = 1 << 2  /* Packet consumed: stop
processing */
 };
 
-#define IB_DEVICE_NAME_MAX 64
+#define RDMA_DEVICE_NAME_MAX 64
 
 struct ib_cache {
        rwlock_t                lock;
-       struct ib_event_handler event_handler;
+       struct rdma_event_handler event_handler;
        struct ib_pkey_cache  **pkey_cache;
        struct ib_gid_cache   **gid_cache;
 };
 
-struct ib_device {
+struct rdma_device {
        struct device                *dma_device;
 
-       char                          name[IB_DEVICE_NAME_MAX];
+       char                          name[RDMA_DEVICE_NAME_MAX];
 
        struct list_head              event_handler_list;
        spinlock_t                    event_handler_lock;
@@ -795,94 +1012,116 @@
 
        u32                           flags;
 
-       int                        (*query_device)(struct ib_device
*device,
-                                                  struct ib_device_attr
*device_attr);
-       int                        (*query_port)(struct ib_device
*device,
+       int                        (*query_device)(struct rdma_device
*device,
+                                                  struct
rdma_device_attr *device_attr);
+       int                        (*query_port)(struct rdma_device
*device,
                                                 u8 port_num,
-                                                struct ib_port_attr
*port_attr);
-       int                        (*query_gid)(struct ib_device
*device,
+                                                union rdma_port_attr
*port_attr);
+       int                        (*query_gid)(struct rdma_device
*device,
                                                u8 port_num, int index,
                                                union ib_gid *gid);
-       int                        (*query_pkey)(struct ib_device
*device,
+       int                        (*query_pkey)(struct rdma_device
*device,
                                                 u8 port_num, u16 index,
u16 *pkey);
-       int                        (*modify_device)(struct ib_device
*device,
+       int                        (*modify_device)(struct rdma_device
*device,
                                                    int
device_modify_mask,
-                                                   struct
ib_device_modify *device_modify);
-       int                        (*modify_port)(struct ib_device
*device,
+                                                   struct
rdma_device_modify *device_modify);
+       int                        (*modify_port)(struct rdma_device
*device,
                                                  u8 port_num, int
port_modify_mask,
                                                  struct ib_port_modify
*port_modify);
-       struct ib_ucontext *       (*alloc_ucontext)(struct ib_device
*device,
-                                                    struct ib_udata
*udata);
-       int                        (*dealloc_ucontext)(struct
ib_ucontext *context);
-       int                        (*mmap)(struct ib_ucontext *context,
+       struct rdma_ucontext *       (*alloc_ucontext)(struct
rdma_device *device,
+                                                    struct rdma_udata
*udata);
+       int                        (*dealloc_ucontext)(struct
rdma_ucontext *context);
+       int                        (*mmap)(struct rdma_ucontext
*context,
                                           struct vm_area_struct *vma);
-       struct ib_pd *             (*alloc_pd)(struct ib_device *device,
-                                              struct ib_ucontext
*context,
-                                              struct ib_udata *udata);
-       int                        (*dealloc_pd)(struct ib_pd *pd);
-       struct ib_ah *             (*create_ah)(struct ib_pd *pd,
+       struct rdma_pd *             (*alloc_pd)(struct rdma_device
*device,
+                                              struct rdma_ucontext
*context,
+                                              struct rdma_udata
*udata);
+       int                        (*dealloc_pd)(struct rdma_pd *pd);
+       struct ib_ah *             (*create_ah)(struct rdma_pd *pd,
                                                struct ib_ah_attr
*ah_attr);
        int                        (*modify_ah)(struct ib_ah *ah,
                                                struct ib_ah_attr
*ah_attr);
        int                        (*query_ah)(struct ib_ah *ah,
                                               struct ib_ah_attr
*ah_attr);
        int                        (*destroy_ah)(struct ib_ah *ah);
-       struct ib_qp *             (*create_qp)(struct ib_pd *pd,
-                                               struct ib_qp_init_attr
*qp_init_attr,
-                                               struct ib_udata *udata);
-       int                        (*modify_qp)(struct ib_qp *qp,
-                                               struct ib_qp_attr
*qp_attr,
+       struct rdma_qp *           (*create_qp)(struct rdma_pd *pd,
+                                               struct rdma_qp_init_attr
*qp_init_attr,
+                                               struct rdma_udata
*udata);
+       int                        (*modify_qp)(struct rdma_qp *qp,
+                                               struct rdma_qp_attr
*qp_attr,
                                                int qp_attr_mask);
-       int                        (*query_qp)(struct ib_qp *qp,
-                                              struct ib_qp_attr
*qp_attr,
+       int                        (*query_qp)(struct rdma_qp *qp,
+                                              struct rdma_qp_attr
*qp_attr,
                                               int qp_attr_mask,
-                                              struct ib_qp_init_attr
*qp_init_attr);
-       int                        (*destroy_qp)(struct ib_qp *qp);
-       int                        (*post_send)(struct ib_qp *qp,
-                                               struct ib_send_wr
*send_wr,
-                                               struct ib_send_wr
**bad_send_wr);
-       int                        (*post_recv)(struct ib_qp *qp,
-                                               struct ib_recv_wr
*recv_wr,
-                                               struct ib_recv_wr
**bad_recv_wr);
-       struct ib_cq *             (*create_cq)(struct ib_device
*device, int cqe,
-                                               struct ib_ucontext
*context,
-                                               struct ib_udata *udata);
-       int                        (*destroy_cq)(struct ib_cq *cq);
-       int                        (*resize_cq)(struct ib_cq *cq, int
*cqe);
-       int                        (*poll_cq)(struct ib_cq *cq, int
num_entries,
-                                             struct ib_wc *wc);
-       int                        (*peek_cq)(struct ib_cq *cq, int
wc_cnt);
-       int                        (*req_notify_cq)(struct ib_cq *cq,
+                                              struct rdma_qp_init_attr
*qp_init_attr);
+       int                        (*destroy_qp)(struct rdma_qp *qp);
+       int                        (*post_send)(struct rdma_qp *qp,
+                                               struct rdma_send_wr
*send_wr,
+                                               struct rdma_send_wr
**bad_send_wr);
+       int                        (*post_recv)(struct rdma_qp *qp,
+                                               struct rdma_recv_wr
*recv_wr,
+                                               struct rdma_recv_wr
**bad_recv_wr);
+       struct rdma_cq *           (*create_cq)(struct rdma_device
*device, int cqe,
+                                               struct rdma_ucontext
*context,
+                                               struct rdma_udata
*udata);
+       int                        (*destroy_cq)(struct rdma_cq *cq);
+       int                        (*resize_cq)(struct rdma_cq *cq, int
*cqe);
+       int                        (*poll_cq)(struct rdma_cq *cq, int
num_entries,
+                                             struct rdma_wc *wc);
+       int                        (*peek_cq)(struct rdma_cq *cq, int
wc_cnt);
+       int                        (*req_notify_cq)(struct rdma_cq *cq,
                                                    enum ib_cq_notify
cq_notify);
-       int                        (*req_ncomp_notif)(struct ib_cq *cq,
+       int                        (*req_ncomp_notif)(struct rdma_cq
*cq,
                                                      int wc_cnt);
-       struct ib_mr *             (*get_dma_mr)(struct ib_pd *pd,
+       struct rdma_mr *           (*alloc_mr)(struct rdma_pd *pd,
+                                               enum rdma_access_flags
mr_access_flags,
+                                               u32 *addr_list_len);
+       struct rdma_mr *           (*get_dma_mr)(struct rdma_pd *pd,
                                                 int mr_access_flags);
-       struct ib_mr *             (*reg_phys_mr)(struct ib_pd *pd,
+       struct rdma_mr *           (*reg_phys_mr)(struct rdma_pd *pd,
                                                  struct ib_phys_buf
*phys_buf_array,
                                                  int num_phys_buf,
                                                  int mr_access_flags,
                                                  u64 *iova_start);
-       struct ib_mr *             (*reg_user_mr)(struct ib_pd *pd,
-                                                 struct ib_umem
*region,
+       struct rdma_mr *           (*reg_phys_mr_fixed)(struct rdma_pd
*pd,
+                                                 u64 *phys_buf_array,
+                                                 unsigned num_phys_buf,
+                                                 unsigned pble_size,
+                                                 enum rdma_access_flags
mr_access_flags,
+                                                 u64 *iova_start);
+       struct rdma_mr *           (*reg_user_mr)(struct rdma_pd *pd,
+                                                 struct rdma_umem
*region,
                                                  int mr_access_flags,
-                                                 struct ib_udata
*udata);
-       int                        (*query_mr)(struct ib_mr *mr,
-                                              struct ib_mr_attr
*mr_attr);
-       int                        (*dereg_mr)(struct ib_mr *mr);
-       int                        (*rereg_phys_mr)(struct ib_mr *mr,
+                                                 struct rdma_udata
*udata);
+       struct rdma_mr *           (*reg_shared_mr)(struct rdma_pd *pd,
+                                                   struct rdma_mr
*existing_mr,
+                                                   int mr_access_flags,
+                                                   u64 *iova_start);
+       int                        (*query_mr)(struct rdma_mr *mr,
+                                              struct rdma_mr_attr
*mr_attr);
+       int                        (*dereg_mr)(struct rdma_mr *mr);
+       int                        (*rereg_phys_mr)(struct rdma_mr *mr,
                                                    int mr_rereg_mask,
-                                                   struct ib_pd *pd,
+                                                   struct rdma_pd *pd,
                                                    struct ib_phys_buf
*phys_buf_array,
                                                    int num_phys_buf,
                                                    int mr_access_flags,
                                                    u64 *iova_start);
-       struct ib_mw *             (*alloc_mw)(struct ib_pd *pd);
-       int                        (*bind_mw)(struct ib_qp *qp,
-                                             struct ib_mw *mw,
-                                             struct ib_mw_bind
*mw_bind);
-       int                        (*dealloc_mw)(struct ib_mw *mw);
-       struct ib_fmr *            (*alloc_fmr)(struct ib_pd *pd,
+       int                        (*rereg_phys_mr_fixed)(struct rdma_mr
*mr,
+                                                   int mr_rereg_mask,
+                                                   struct rdma_pd *pd,
+                                                   u64 *phys_buf_array,
+                                                   int num_phys_buf,
+                                                   u32 pble_size,
+                                                   enum
rdma_access_flags mr_access_flags,
+                                                   u64 *iova_start);
+       struct rdma_mw *           (*alloc_mw)(struct rdma_pd *pd);
+       struct rdma_mw *           (*alloc_mw2)(struct rdma_pd *pd);
+       int                        (*bind_mw)(struct rdma_qp *qp,
+                                             struct rdma_mw *mw,
+                                             struct rdma_mw_bind
*mw_bind);
+       int                        (*dealloc_mw)(struct rdma_mw *mw);
+       struct ib_fmr *            (*alloc_fmr)(struct rdma_pd *pd,
                                                int mr_access_flags,
                                                struct ib_fmr_attr
*fmr_attr);
        int                        (*map_phys_fmr)(struct ib_fmr *fmr,
@@ -890,13 +1129,13 @@
                                                   u64 iova);
        int                        (*unmap_fmr)(struct list_head
*fmr_list);
        int                        (*dealloc_fmr)(struct ib_fmr *fmr);
-       int                        (*attach_mcast)(struct ib_qp *qp,
+       int                        (*attach_mcast)(struct rdma_qp *qp,
                                                   union ib_gid *gid,
                                                   u16 lid);
-       int                        (*detach_mcast)(struct ib_qp *qp,
+       int                        (*detach_mcast)(struct rdma_qp *qp,
                                                   union ib_gid *gid,
                                                   u16 lid);
-       int                        (*process_mad)(struct ib_device
*device,
+       int                        (*process_mad)(struct rdma_device
*device,
                                                  int process_mad_flags,
                                                  u8 port_num,
                                                  struct ib_wc *in_wc,
@@ -919,75 +1158,75 @@
        u8                           phys_port_cnt;
 };
 
-struct ib_client {
+struct rdma_client {
        char  *name;
-       void (*add)   (struct ib_device *);
-       void (*remove)(struct ib_device *);
+       void (*add)   (struct rdma_device *);
+       void (*remove)(struct rdma_device *);
 
        struct list_head list;
 };
 
-struct ib_device *ib_alloc_device(size_t size);
-void ib_dealloc_device(struct ib_device *device);
+struct rdma_device *rdma_alloc_device(size_t size);
+void rdma_dealloc_device(struct rdma_device *device);
 
-int ib_register_device   (struct ib_device *device);
-void ib_unregister_device(struct ib_device *device);
+int rdma_register_device   (struct rdma_device *device);
+void rdma_unregister_device(struct rdma_device *device);
 
-int ib_register_client   (struct ib_client *client);
-void ib_unregister_client(struct ib_client *client);
+int rdma_register_client   (struct rdma_client *client);
+void rdma_unregister_client(struct rdma_client *client);
 
-void *ib_get_client_data(struct ib_device *device, struct ib_client
*client);
-void  ib_set_client_data(struct ib_device *device, struct ib_client
*client,
+void *rdma_get_client_data(struct rdma_device *device, struct
rdma_client *client);
+void  rdma_set_client_data(struct rdma_device *device, struct
rdma_client *client,
                         void *data);
 
-static inline int ib_copy_from_udata(void *dest, struct ib_udata
*udata, size_t len)
+static inline int rdma_copy_from_udata(void *dest, struct rdma_udata
*udata, size_t len)
 {
        return copy_from_user(dest, udata->inbuf, len) ? -EFAULT : 0;
 }
 
-static inline int ib_copy_to_udata(struct ib_udata *udata, void *src,
size_t len)
+static inline int rdma_copy_to_udata(struct rdma_udata *udata, void
*src, size_t len)
 {
        return copy_to_user(udata->outbuf, src, len) ? -EFAULT : 0;
 }
 
-int ib_register_event_handler  (struct ib_event_handler
*event_handler);
-int ib_unregister_event_handler(struct ib_event_handler
*event_handler);
-void ib_dispatch_event(struct ib_event *event);
+int rdma_register_event_handler  (struct rdma_event_handler
*event_handler);
+int rdma_unregister_event_handler(struct rdma_event_handler
*event_handler);
+void rdma_dispatch_event(struct rdma_event *event);
 
-int ib_query_device(struct ib_device *device,
-                   struct ib_device_attr *device_attr);
+int rdma_query_device(struct rdma_device *device,
+                   struct rdma_device_attr *device_attr);
 
-int ib_query_port(struct ib_device *device,
-                 u8 port_num, struct ib_port_attr *port_attr);
+int rdma_query_port(struct rdma_device *device,
+                 u8 port_num, union rdma_port_attr *port_attr);
 
-int ib_query_gid(struct ib_device *device,
+int rdma_query_gid(struct rdma_device *device,
                 u8 port_num, int index, union ib_gid *gid);
 
-int ib_query_pkey(struct ib_device *device,
+int rdma_query_pkey(struct rdma_device *device,
                  u8 port_num, u16 index, u16 *pkey);
 
-int ib_modify_device(struct ib_device *device,
+int rdma_modify_device(struct rdma_device *device,
                     int device_modify_mask,
-                    struct ib_device_modify *device_modify);
+                    struct rdma_device_modify *device_modify);
 
-int ib_modify_port(struct ib_device *device,
+int rdma_modify_port(struct rdma_device *device,
                   u8 port_num, int port_modify_mask,
                   struct ib_port_modify *port_modify);
 
 /**
- * ib_alloc_pd - Allocates an unused protection domain.
+ * rdma_alloc_pd - Allocates an unused protection domain.
  * @device: The device on which to allocate the protection domain.
  *
  * A protection domain object provides an association between QPs,
shared
  * receive queues, address handles, memory regions, and memory windows.
  */
-struct ib_pd *ib_alloc_pd(struct ib_device *device);
+struct rdma_pd *rdma_alloc_pd(struct rdma_device *device);
 
 /**
- * ib_dealloc_pd - Deallocates a protection domain.
+ * rdma_dealloc_pd - Deallocates a protection domain.
  * @pd: The protection domain to deallocate.
  */
-int ib_dealloc_pd(struct ib_pd *pd);
+int rdma_dealloc_pd(struct rdma_pd *pd);
 
 /**
  * ib_create_ah - Creates an address handle for the given address
vector.
@@ -997,7 +1236,7 @@
  * The address handle is used to reference a local or global
destination
  * in all UD QP post sends.
  */
-struct ib_ah *ib_create_ah(struct ib_pd *pd, struct ib_ah_attr
*ah_attr);
+struct ib_ah *ib_create_ah(struct rdma_pd *pd, struct ib_ah_attr
*ah_attr);
 
 /**
  * ib_create_ah_from_wc - Creates an address handle associated with the
@@ -1011,7 +1250,7 @@
  * The address handle is used to reference a local or global
destination
  * in all UD QP post sends.
  */
-struct ib_ah *ib_create_ah_from_wc(struct ib_pd *pd, struct ib_wc *wc,
+struct ib_ah *ib_create_ah_from_wc(struct rdma_pd *pd, struct ib_wc
*wc,
                                   struct ib_grh *grh, u8 port_num);
 
 /**
@@ -1039,16 +1278,16 @@
 int ib_destroy_ah(struct ib_ah *ah);
 
 /**
- * ib_create_qp - Creates a QP associated with the specified protection
+ * rdma_create_qp - Creates a QP associated with the specified
protection
  *   domain.
  * @pd: The protection domain associated with the QP.
  * @qp_init_attr: A list of initial attributes required to create the
QP.
  */
-struct ib_qp *ib_create_qp(struct ib_pd *pd,
-                          struct ib_qp_init_attr *qp_init_attr);
+struct rdma_qp *rdma_create_qp(struct rdma_pd *pd,
+                          struct rdma_qp_init_attr *qp_init_attr);
 
 /**
- * ib_modify_qp - Modifies the attributes for the specified QP and then
+ * rdma_modify_qp - Modifies the attributes for the specified QP and
then
  *   transitions the QP to the given state.
  * @qp: The QP to modify.
  * @qp_attr: On input, specifies the QP attributes to modify.  On
output,
@@ -1056,12 +1295,12 @@
  * @qp_attr_mask: A bit-mask used to specify which attributes of the QP
  *   are being modified.
  */
-int ib_modify_qp(struct ib_qp *qp,
-                struct ib_qp_attr *qp_attr,
+int rdma_modify_qp(struct rdma_qp *qp,
+                struct rdma_qp_attr *qp_attr,
                 int qp_attr_mask);
 
 /**
- * ib_query_qp - Returns the attribute list and current values for the
+ * rdma_query_qp - Returns the attribute list and current values for
the
  *   specified QP.
  * @qp: The QP to query.
  * @qp_attr: The attributes of the specified QP.
@@ -1071,16 +1310,16 @@
  * The qp_attr_mask may be used to limit the query to gathering only
the
  * selected attributes.
  */
-int ib_query_qp(struct ib_qp *qp,
-               struct ib_qp_attr *qp_attr,
+int rdma_query_qp(struct rdma_qp *qp,
+               struct rdma_qp_attr *qp_attr,
                int qp_attr_mask,
-               struct ib_qp_init_attr *qp_init_attr);
+               struct rdma_qp_init_attr *qp_init_attr);
 
 /**
- * ib_destroy_qp - Destroys the specified QP.
+ * rdma_destroy_qp - Destroys the specified QP.
  * @qp: The QP to destroy.
  */
-int ib_destroy_qp(struct ib_qp *qp);
+int rdma_destroy_qp(struct rdma_qp *qp);
 
 /**
  * ib_post_send - Posts a list of work requests to the send queue of
@@ -1090,30 +1329,30 @@
  * @bad_send_wr: On an immediate failure, this parameter will reference
  *   the work request that failed to be posted on the QP.
  */
-static inline int ib_post_send(struct ib_qp *qp,
-                              struct ib_send_wr *send_wr,
-                              struct ib_send_wr **bad_send_wr)
+static inline int rdma_post_send(struct rdma_qp *qp,
+                              struct rdma_send_wr *send_wr,
+                              struct rdma_send_wr **bad_send_wr)
 {
        return qp->device->post_send(qp, send_wr, bad_send_wr);
 }
 
 /**
- * ib_post_recv - Posts a list of work requests to the receive queue of
+ * rdma_post_recv - Posts a list of work requests to the receive queue
of
  *   the specified QP.
  * @qp: The QP to post the work request on.
  * @recv_wr: A list of work requests to post on the receive queue.
  * @bad_recv_wr: On an immediate failure, this parameter will reference
  *   the work request that failed to be posted on the QP.
  */
-static inline int ib_post_recv(struct ib_qp *qp,
-                              struct ib_recv_wr *recv_wr,
-                              struct ib_recv_wr **bad_recv_wr)
+static inline int rdma_post_recv(struct rdma_qp *qp,
+                              struct rdma_recv_wr *recv_wr,
+                              struct rdma_recv_wr **bad_recv_wr)
 {
        return qp->device->post_recv(qp, recv_wr, bad_recv_wr);
 }
 
 /**
- * ib_create_cq - Creates a CQ on the specified device.
+ * rdma_create_cq - Creates a CQ on the specified device.
  * @device: The device on which to create the CQ.
  * @comp_handler: A user-specified callback that is invoked when a
  *   completion event occurs on the CQ.
@@ -1125,31 +1364,31 @@
  *
  * Users can examine the cq structure to determine the actual CQ size.
  */
-struct ib_cq *ib_create_cq(struct ib_device *device,
-                          ib_comp_handler comp_handler,
-                          void (*event_handler)(struct ib_event *, void
*),
+struct rdma_cq *rdma_create_cq(struct rdma_device *device,
+                          rdma_comp_handler comp_handler,
+                          void (*event_handler)(struct rdma_event *,
void *),
                           void *cq_context, int cqe);
 
 /**
- * ib_resize_cq - Modifies the capacity of the CQ.
+ * rdma_resize_cq - Modifies the capacity of the CQ.
  * @cq: The CQ to resize.
  * @cqe: The minimum size of the CQ.
  *
  * Users can examine the cq structure to determine the actual CQ size.
  */
-int ib_resize_cq(struct ib_cq *cq, int cqe);
+int rdma_resize_cq(struct rdma_cq *cq, int cqe);
 
 /**
- * ib_destroy_cq - Destroys the specified CQ.
+ * rdma_destroy_cq - Destroys the specified CQ.
  * @cq: The CQ to destroy.
  */
-int ib_destroy_cq(struct ib_cq *cq);
+int rdma_destroy_cq(struct rdma_cq *cq);
 
 /**
- * ib_poll_cq - poll a CQ for completion(s)
+ * rdma_poll_cq - poll a CQ for completion(s)
  * @cq:the CQ being polled
  * @num_entries:maximum number of completions to return
- * @wc:array of at least @num_entries &struct ib_wc where completions
+ * @wc:array of at least @num_entries &struct rdma_wc where completions
  *   will be returned
  *
  * Poll a CQ for (possibly multiple) completions.  If the return value
@@ -1157,14 +1396,14 @@
  * number of completions returned.  If the return value is
  * non-negative and < num_entries, then the CQ was emptied.
  */
-static inline int ib_poll_cq(struct ib_cq *cq, int num_entries,
-                            struct ib_wc *wc)
+static inline int rdma_poll_cq(struct rdma_cq *cq, int num_entries,
+                            struct rdma_wc *wc)
 {
        return cq->device->poll_cq(cq, num_entries, wc);
 }
 
 /**
- * ib_peek_cq - Returns the number of unreaped completions currently
+ * rdma_peek_cq - Returns the number of unreaped completions currently
  *   on the specified CQ.
  * @cq: The CQ to peek.
  * @wc_cnt: A minimum number of unreaped completions to check for.
@@ -1173,29 +1412,29 @@
  * this function returns wc_cnt, otherwise, it returns the actual
number of
  * unreaped completions.
  */
-int ib_peek_cq(struct ib_cq *cq, int wc_cnt);
+int rdma_peek_cq(struct rdma_cq *cq, int wc_cnt);
 
 /**
- * ib_req_notify_cq - Request completion notification on a CQ.
+ * rdma_req_notify_cq - Request completion notification on a CQ.
  * @cq: The CQ to generate an event for.
  * @cq_notify: If set to %IB_CQ_SOLICITED, completion notification will
  *   occur on the next solicited event. If set to %IB_CQ_NEXT_COMP,
  *   notification will occur on the next completion.
  */
-static inline int ib_req_notify_cq(struct ib_cq *cq,
+static inline int rdma_req_notify_cq(struct rdma_cq *cq,
                                   enum ib_cq_notify cq_notify)
 {
        return cq->device->req_notify_cq(cq, cq_notify);
 }
 
 /**
- * ib_req_ncomp_notif - Request completion notification when there are
+ * rdma_req_ncomp_notif - Request completion notification when there
are
  *   at least the specified number of unreaped completions on the CQ.
  * @cq: The CQ to generate an event for.
  * @wc_cnt: The number of unreaped completions that should be on the
  *   CQ before an event is generated.
  */
-static inline int ib_req_ncomp_notif(struct ib_cq *cq, int wc_cnt)
+static inline int rdma_req_ncomp_notif(struct rdma_cq *cq, int wc_cnt)
 {
        return cq->device->req_ncomp_notif ?
                cq->device->req_ncomp_notif(cq, wc_cnt) :
@@ -1203,15 +1442,15 @@
 }
 
 /**
- * ib_get_dma_mr - Returns a memory region for system memory that is
+ * rdma_get_dma_mr - Returns a memory region for system memory that is
  *   usable for DMA.
  * @pd: The protection domain associated with the memory region.
  * @mr_access_flags: Specifies the memory access rights.
  */
-struct ib_mr *ib_get_dma_mr(struct ib_pd *pd, int mr_access_flags);
+struct rdma_mr *rdma_get_dma_mr(struct rdma_pd *pd, int mr_access_flags);
 
 /**
- * ib_reg_phys_mr - Prepares a virtually addressed memory region for use
+ * rdma_reg_phys_mr - Prepares a virtually addressed memory region for use
  *   by an HCA.
  * @pd: The protection domain associated assigned to the registered region.
  * @phys_buf_array: Specifies a list of physical buffers to use in the
@@ -1220,64 +1459,120 @@
  * @mr_access_flags: Specifies the memory access rights.
  * @iova_start: The offset of the region's starting I/O virtual address.
  */
-struct ib_mr *ib_reg_phys_mr(struct ib_pd *pd,
+struct rdma_mr *rdma_reg_phys_mr(struct rdma_pd *pd,
                             struct ib_phys_buf *phys_buf_array,
                             int num_phys_buf,
                             int mr_access_flags,
                             u64 *iova_start);
 
 /**
- * ib_rereg_phys_mr - Modifies the attributes of an existing memory region.
+ * rdma_reg_phys_mr_fixed - Prepares a virtually addressed memory region
+ *   for use by an RNIC with fixed page/block sizes.
+ * @pd: The protection domain assigned to the registered region.
+ * @phys_buf_array: Specifies a list of physical buffers to use in the
+ *   memory region.
+ * @num_phys_buf: Specifies the size of the phys_buf_array.
+ * @pble_size: Size of each page/block in phys_buf_array.
+ * @mr_access_flags: Specifies the memory access rights.
+ * @iova_start: The offset of the region's starting I/O virtual address.
+ */
+struct rdma_mr *rdma_reg_phys_mr_fixed(struct rdma_pd *pd,
+                            u64 *phys_buf_array,
+                            int num_phys_buf,
+                            u32 pble_size,
+                            int mr_access_flags,
+                            u64 *iova_start);
+
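For illustration, registering a region backed by uniform 4096-byte pages
through the fixed variant might look like this (a sketch; the access-flag
names are assumed unchanged from ib_verbs.h, NPAGES is a caller-defined
constant, and phys_buf_array is taken to be a plain array of bus addresses
per the u64 * signature):

        #define NPAGES 16

        u64 pages[NPAGES];      /* bus addresses, filled in by the caller */
        u64 iova = 0;
        struct rdma_mr *mr;

        mr = rdma_reg_phys_mr_fixed(pd, pages, NPAGES, 4096,
                                    IB_ACCESS_LOCAL_WRITE |
                                    IB_ACCESS_REMOTE_WRITE,
                                    &iova);
        if (IS_ERR(mr))
                return PTR_ERR(mr);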
+/**
+ * rdma_rereg_phys_mr - Modifies the attributes of an existing memory region.
 *   Conceptually, this call performs the functions deregister memory region
  *   followed by register physical memory region.  Where possible,
  *   resources are reused instead of deallocated and reallocated.
  * @mr: The memory region to modify.
  * @mr_rereg_mask: A bit-mask used to indicate which of the following
  *   properties of the memory region are being modified.
- * @pd: If %IB_MR_REREG_PD is set in mr_rereg_mask, this field specifies
+ * @pd: If %RDMA_MR_REREG_PD is set in mr_rereg_mask, this field specifies
  *   the new protection domain to associated with the memory region,
  *   otherwise, this parameter is ignored.
- * @phys_buf_array: If %IB_MR_REREG_TRANS is set in mr_rereg_mask, this
+ * @phys_buf_array: If %RDMA_MR_REREG_TRANS is set in mr_rereg_mask, this
  *   field specifies a list of physical buffers to use in the new
  *   translation, otherwise, this parameter is ignored.
- * @num_phys_buf: If %IB_MR_REREG_TRANS is set in mr_rereg_mask, this
+ * @num_phys_buf: If %RDMA_MR_REREG_TRANS is set in mr_rereg_mask, this
  *   field specifies the size of the phys_buf_array, otherwise, this
  *   parameter is ignored.
- * @mr_access_flags: If %IB_MR_REREG_ACCESS is set in mr_rereg_mask, this
+ * @mr_access_flags: If %RDMA_MR_REREG_ACCESS is set in mr_rereg_mask, this
  *   field specifies the new memory access rights, otherwise, this
  *   parameter is ignored.
  * @iova_start: The offset of the region's starting I/O virtual address.
  */
-int ib_rereg_phys_mr(struct ib_mr *mr,
+int rdma_rereg_phys_mr(struct rdma_mr *mr,
                     int mr_rereg_mask,
-                    struct ib_pd *pd,
+                    struct rdma_pd *pd,
                     struct ib_phys_buf *phys_buf_array,
                     int num_phys_buf,
                     int mr_access_flags,
                     u64 *iova_start);
 
 /**
- * ib_query_mr - Retrieves information about a specific memory region.
+ * rdma_rereg_phys_mr_fixed - Modifies the attributes of an existing memory
+ *   region. Conceptually, this call performs the functions deregister
+ *   memory region followed by register physical memory region.  Where
+ *   possible, resources are reused instead of deallocated and reallocated.
+ *   A fixed size for each page/block is supplied as a distinct parameter.
+ * @mr: The memory region to modify.
+ * @mr_rereg_mask: A bit-mask used to indicate which of the following
+ *   properties of the memory region are being modified.
+ * @pd: If %RDMA_MR_REREG_PD is set in mr_rereg_mask, this field specifies
+ *   the new protection domain to associate with the memory region,
+ *   otherwise, this parameter is ignored.
+ * @phys_buf_array: If %RDMA_MR_REREG_TRANS is set in mr_rereg_mask, this
+ *   field specifies a list of physical buffers to use in the new
+ *   translation, otherwise, this parameter is ignored.
+ * @num_phys_buf: If %RDMA_MR_REREG_TRANS is set in mr_rereg_mask, this
+ *   field specifies the size of the phys_buf_array, otherwise, this
+ *   parameter is ignored.
+ * @pble_size: Size of each page/block in phys_buf_array.
+ * @mr_access_flags: If %RDMA_MR_REREG_ACCESS is set in mr_rereg_mask, this
+ *   field specifies the new memory access rights, otherwise, this
+ *   parameter is ignored.
+ * @iova_start: The offset of the region's starting I/O virtual address.
+ */
+int rdma_rereg_phys_mr_fixed(struct rdma_mr *mr,
+                    int mr_rereg_mask,
+                    struct rdma_pd *pd,
+                    u64 *phys_buf_array,
+                    int num_phys_buf,
+                    u32 pble_size,
+                    int mr_access_flags,
+                    u64 *iova_start);
+
+/**
+ * rdma_query_mr - Retrieves information about a specific memory region.
  * @mr: The memory region to retrieve information about.
  * @mr_attr: The attributes of the specified memory region.
  */
-int ib_query_mr(struct ib_mr *mr, struct ib_mr_attr *mr_attr);
+int rdma_query_mr(struct rdma_mr *mr, struct rdma_mr_attr *mr_attr);
 
 /**
- * ib_dereg_mr - Deregisters a memory region and removes it from the
+ * rdma_dereg_mr - Deregisters a memory region and removes it from the
  *   HCA translation table.
  * @mr: The memory region to deregister.
  */
-int ib_dereg_mr(struct ib_mr *mr);
+int rdma_dereg_mr(struct rdma_mr *mr);
 
 /**
- * ib_alloc_mw - Allocates a memory window.
+ * rdma_alloc_mw - Allocates a memory window.
  * @pd: The protection domain associated with the memory window.
  */
-struct ib_mw *ib_alloc_mw(struct ib_pd *pd);
+struct rdma_mw *rdma_alloc_mw(struct rdma_pd *pd);
 
 /**
- * ib_bind_mw - Posts a work request to the send queue of the specified
+ * rdma_alloc_mw_narrow - Allocates a narrow (type 2) memory window.
+ * @pd: The protection domain associated with the memory window.
+ */
+struct rdma_mw *rdma_alloc_mw_narrow(struct rdma_pd *pd);
+
+/**
+ * rdma_bind_mw - Posts a work request to the send queue of the specified
  *   QP, which binds the memory window to the given address range and
  *   remote access attributes.
  * @qp: QP to post the bind work request on.
@@ -1285,9 +1580,9 @@
  * @mw_bind: Specifies information about the memory window, including
  *   its address range, remote access rights, and associated memory region.
  */
-static inline int ib_bind_mw(struct ib_qp *qp,
-                            struct ib_mw *mw,
-                            struct ib_mw_bind *mw_bind)
+static inline int rdma_bind_mw(struct rdma_qp *qp,
+                            struct rdma_mw *mw,
+                            struct rdma_mw_bind *mw_bind)
 {
        /* XXX reference counting in corresponding MR? */
        return mw->device->bind_mw ?
@@ -1296,10 +1591,10 @@
 }
 
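A window still follows the familiar allocate/bind/deallocate pattern;
roughly as below (a sketch only; it assumes pd, mr, qp, iova and length
are already set up, and that the rdma_mw_bind field names carry over from
ib_mw_bind):

        struct rdma_mw *mw;
        struct rdma_mw_bind bind;
        int ret;

        mw = rdma_alloc_mw(pd);
        if (IS_ERR(mw))
                return PTR_ERR(mw);

        memset(&bind, 0, sizeof bind);
        bind.mr              = mr;      /* region the window exposes */
        bind.addr            = iova;    /* start of the advertised range */
        bind.length          = length;
        bind.mw_access_flags = IB_ACCESS_REMOTE_WRITE;

        ret = rdma_bind_mw(qp, mw, &bind);      /* posted on qp's send queue */
        if (ret)
                rdma_dealloc_mw(mw);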
 /**
- * ib_dealloc_mw - Deallocates a memory window.
+ * rdma_dealloc_mw - Deallocates a memory window.
  * @mw: The memory window to deallocate.
  */
-int ib_dealloc_mw(struct ib_mw *mw);
+int rdma_dealloc_mw(struct rdma_mw *mw);
 
 /**
  * ib_alloc_fmr - Allocates an unmapped fast memory region.
@@ -1310,7 +1605,7 @@
  * A fast memory region must be mapped before it can be used as part of
  * a work request.
  */
-struct ib_fmr *ib_alloc_fmr(struct ib_pd *pd,
+struct ib_fmr *ib_alloc_fmr(struct rdma_pd *pd,
                            int mr_access_flags,
                            struct ib_fmr_attr *fmr_attr);
 
@@ -1352,7 +1647,7 @@
  * the fabric appropriately.  The port associated with the specified
  * QP must also be a member of the multicast group.
  */
-int ib_attach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid);
+int ib_attach_mcast(struct rdma_qp *qp, union ib_gid *gid, u16 lid);
 
 /**
  * ib_detach_mcast - Detaches the specified QP from a multicast group.
@@ -1360,6 +1655,6 @@
  * @gid: Multicast group GID.
  * @lid: Multicast group LID in host byte order.
  */
-int ib_detach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid);
+int ib_detach_mcast(struct rdma_qp *qp, union ib_gid *gid, u16 lid);
 
 #endif /* IB_VERBS_H */

