Let's say my typical usage pattern is: two nodes, alice and bob, and two "tcp" connections via SDP with long lifetimes, called the "data" and the "meta" socket.
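To make the pattern concrete before going into the message sizes and the ack protocol below, here is a minimal sketch from alice's side. This is only an illustration, not what DRBD actually does; the AF_INET_SDP value of 27 (the usual OFED convention), the peer address, and the port numbers are all assumptions:

/* Minimal sketch of the "data"/"meta" socket pattern, seen from alice.
 * Assumptions for illustration only: AF_INET_SDP = 27, bob reachable at
 * 192.0.2.2, ports 7788 (data) and 7789 (meta). */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef AF_INET_SDP
#define AF_INET_SDP 27                  /* assumed OFED convention */
#endif

static int sdp_connect(const char *ip, uint16_t port)
{
        struct sockaddr_in sa;
        int fd = socket(AF_INET_SDP, SOCK_STREAM, 0);

        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;        /* the address itself is plain IPv4 */
        sa.sin_port = htons(port);
        inet_pton(AF_INET, ip, &sa.sin_addr);

        if (fd < 0 || connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
                perror("sdp connect");
                return -1;
        }
        return fd;
}

int main(void)
{
        int data = sdp_connect("192.0.2.2", 7788);  /* bulk writes           */
        int meta = sdp_connect("192.0.2.2", 7789);  /* acks and housekeeping */
        char buf[32 * 1024] = { 0 };                /* one "large" message   */
        char ack[32];                               /* one "small" ack       */

        if (data < 0 || meta < 0)
                return 1;

        /* stream a large message on the data socket ...                     */
        if (send(data, buf, sizeof(buf), 0) < 0)
                perror("send data");
        /* ... and wait for the corresponding short ack on the meta socket   */
        if (recv(meta, ack, sizeof(ack), MSG_WAITALL) < 0)
                perror("recv ack");

        close(data);
        close(meta);
        return 0;
}

bob's side would mirror this: recv() the large messages on the data socket, and send() the ~32 byte acks on the meta socket once the corresponding write completes.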
alice sends on the "data" socket, with typical message sizes of 512 bytes to 32 KiB. For each message received on the data socket, bob then sends an "ack" back on the "meta" socket, with message sizes of ~32 bytes. [*] There are a few other messages on both sockets.

I'd like to have maximum throughput when streaming large messages, but (of course) at the same time I'd like to have minimum latency for the short messages, and when sending only single requests. Obviously, if the CPU overhead can be minimized, that won't hurt either ;)

Now, which tunables in /sys/module/ib_sdp/parameters are the ones most likely to have an effect here? An "I don't know what I am tuning here, but I'll try anyway" approach gave me some benefit from using recv_poll=200 and sdp_zcopy_thresh=8192, with everything else left at the module defaults (a minimal sketch of what I actually do is at the end of this mail).

Pointers to an overview of what those tunables actually do, or any recommendations (potentially also for tunables in other modules, sysctls, TCP or other socket options, or whatnot) gladly accepted.

[*] This is, of course, in fact DRBD (see http://www.drbd.org): the large messages on the "data" socket are replicated block device writes; bob needs to submit that data to its local IO subsystem, and only sends out the "ack" messages on the "meta" socket once the corresponding write has been signalled as completed.

--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting  http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
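P.S.: for completeness, the "try anyway" tuning mentioned above amounts to nothing more than writing the two values into the module parameter files. A minimal sketch, assuming the parameters are runtime-writable on your OFED build and take plain decimal strings:

/* Minimal sketch of the tuning mentioned above: write the two values into
 * /sys/module/ib_sdp/parameters/.  Needs root, and assumes both parameters
 * are runtime-writable on your OFED build; the values are the ones I used. */
#include <stdio.h>

static int set_param(const char *name, const char *value)
{
        char path[128];
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/module/ib_sdp/parameters/%s", name);
        f = fopen(path, "w");
        if (!f) {
                perror(path);
                return -1;
        }
        fprintf(f, "%s\n", value);
        return fclose(f);
}

int main(void)
{
        int rc = 0;

        /* poll on recv for a while before sleeping (recv busy-poll)    */
        rc |= set_param("recv_poll", "200");
        /* threshold above which sends take the zero-copy path (8 KiB)  */
        rc |= set_param("sdp_zcopy_thresh", "8192");

        return rc ? 1 : 0;
}

Echoing the values into those files from a shell does the same thing, assuming they are writable.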
