We are having issues with LNET performance over InfiniBand. We have a
configuration with a single MDT and six (6) OSTs. The Lustre client I am using
to test is configured to use 6 stripes (lfs setstripe -c 6 /mnt/lustre). When
I perform a test using the following command:

                dd if=/dev/zero of=/mnt/lustre/test.dat bs=1M count=2000

I typically get a write rate of about 815 MB/s, and we never exceed 848 MB/s.
When I run obdfilter-survey, we easily get about 3-4 GB/s write speed, but when
I run a series of lnet_selftest runs, the read and write rates top out at
850-875 MB/s (sketches of these runs appear after the tuning commands below).
I have performed the following optimizations to increase the data rate:

On the Client:
lctl set_param osc.*.checksums=0
lctl set_param osc.*.max_dirty_mb=256

On the OSTs:
lctl set_param obdfilter.*.writethrough_cache_enable=0
lctl set_param obdfilter.*.read_cache_enable=0

echo 4096 > /sys/block/<device>/queue/nr_requests    (for each OST block device)
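
For reference, the obdfilter-survey and lnet_selftest numbers above come from
runs along these lines (a sketch of the standard lustre-iokit/lst usage; the
NID placeholders stand in for our actual addresses, and the survey parameters
are illustrative):

# obdfilter-survey, run on each OSS (parameters illustrative)
size=8192 nobjhi=2 thrhi=16 case=disk obdfilter-survey

# lnet_selftest session, run from the test client
modprobe lnet_selftest
export LST_SESSION=$$
lst new_session rw_test
lst add_group clients <client-nid>     # e.g. a.b.c.d@o2ib or a.b.c.d@tcp
lst add_group servers <server-nid>
lst add_batch bulk_rw
lst add_test --batch bulk_rw --from clients --to servers brw write size=1M
lst run bulk_rw
lst stat clients servers               # reports bandwidth while the batch runs
lst end_session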

I have also loaded the ib_sdp module, which brought a further increase in
speed. However, we need to be able to record at no less than 1 GB/s, which we
cannot achieve right now. Any thoughts on how I can optimize LNET, which
clearly seems to be the bottleneck?
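
In case it helps with the diagnosis, here is how I am confirming which LND
each node is actually using (a minimal check; <oss-nid> is a placeholder):

# Show the NIDs LNET is using on this node:
# a.b.c.d@o2ib means the native IB LND, a.b.c.d@tcp means socklnd (e.g. over IPoIB).
lctl list_nids

# Verify LNET connectivity from the client to an OSS:
lctl ping <oss-nid>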

Thank you for any help you can provide,
Carl Barberi