On 11/10/10 10:30, Andrea Gozzelino wrote:
Hi Jonathan,

I wrote down a test (latency and transfer speed) with RDMA.
The server and the client run the same code and exchange fixed-size
buffers n times in a loop. In makefile.txt you can find help on how to
use the code.
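
Roughly, the measurement loop looks like this (a simplified sketch;
exchange_buffer() is a placeholder standing in for the actual RDMA
send/receive pair):

    #include <stdio.h>
    #include <time.h>

    /* Placeholder for the real RDMA buffer exchange. */
    void exchange_buffer(void *buf, size_t size);

    void run_test(void *buf, size_t size, int n)
    {
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < n; i++)
            exchange_buffer(buf, size);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double sec = (t1.tv_sec - t0.tv_sec)
                   + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("avg latency: %.2f us, throughput: %.2f Gbit/s\n",
               sec / n * 1e6, (double)size * n * 8 / sec / 1e9);
    }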

I tested Intel NetEffect NE020 E10G81GP cards with this code and found
a minimum latency of about 11 us, a maximum transfer speed of about
9.6 Gbit/s, and CPU usage of up to 90% on the client side.
The last value is not good for us.

Hi Andrea,

Thanks for the code. Following Jason's advice, I have changed my test program to get reliable communication using 1 MByte buffers. CPU usage is less than 2% on both the client and the server at 10 Gb throughput. I am using a Chelsio S310CR.

I find the poll() approach more natural, as I have prior experience with conventional sockets-based programming.
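
The core of it looks roughly like this (a simplified sketch of waiting
on the completion channel fd instead of busy-polling the CQ; error
paths and cleanup are trimmed):

    #include <poll.h>
    #include <infiniband/verbs.h>

    int wait_for_completion(struct ibv_comp_channel *channel,
                            struct ibv_cq *cq)
    {
        struct pollfd pfd = { .fd = channel->fd, .events = POLLIN };
        struct ibv_cq *ev_cq;
        void *ev_ctx;
        struct ibv_wc wc;

        if (ibv_req_notify_cq(cq, 0))   /* arm the CQ before sleeping */
            return -1;
        if (poll(&pfd, 1, -1) < 0)      /* block, just like a socket fd */
            return -1;
        if (ibv_get_cq_event(channel, &ev_cq, &ev_ctx))
            return -1;
        ibv_ack_cq_events(ev_cq, 1);
        if (ibv_req_notify_cq(ev_cq, 0))  /* re-arm for the next event */
            return -1;
        while (ibv_poll_cq(ev_cq, 1, &wc) > 0)  /* drain completions */
            if (wc.status != IBV_WC_SUCCESS)
                return -1;
        return 0;
    }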

The rdma_client/rdma_server example programs from librdmacm were the easiest to start from, and I have incrementally changed them from synchronous to asynchronous operation, moving the internals of the high-level functions in <rdma/rdma_verbs.h> into my own code piece by piece. The learning curve is very steep :)
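
For example, rdma_post_send() boils down to roughly a single-SGE
ibv_post_send() on id->qp, something like the following (simplified,
with my own naming; the real wrapper handles a few more details):

    #include <rdma/rdma_cma.h>

    int my_post_send(struct rdma_cm_id *id, void *ctx,
                     void *addr, size_t length, struct ibv_mr *mr)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)addr,
            .length = (uint32_t)length,
            .lkey   = mr ? mr->lkey : 0,
        };
        struct ibv_send_wr wr = {
            .wr_id      = (uintptr_t)ctx,
            .sg_list    = &sge,
            .num_sge    = 1,
            .opcode     = IBV_WR_SEND,
            .send_flags = IBV_SEND_SIGNALED,  /* request a completion */
        };
        struct ibv_send_wr *bad;

        return ibv_post_send(id->qp, &wr, &bad);
    }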

I found this paper quite interesting: www.systems.ethz.ch/research/awards/minimizingthehidden.pdf

Cheers,
Jonathan.

