I have a QDR IB switch that should support up to 40 Gb/s. After installing the kernel-ib and Lustre client RPMs on my SuSE nodes I see the following:
hpc102:~ # ibstatus mlx4_0:1
Infiniband device 'mlx4_0' port 1 status:
	default gid:	 fe80:0000:0000:0000:0002:c903:0006:de19
	base lid:	 0x7
	sm lid:		 0x1
	state:		 4: ACTIVE
	phys state:	 5: LinkUp
	rate:		 20 Gb/sec (4X DDR)

Why is this port only negotiating 4X DDR at 20 Gb/sec? Do the Lustre RPMs not support QDR? Is there something I need to do on my side to force 40 Gb/sec on these ports?

Thanks in advance,
-J
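As an aside, a minimal sketch (not from the original post) for scanning `ibstatus` output and flagging any port that linked below QDR; the here-doc sample stands in for live output, and the 40 Gb/sec threshold is an assumption based on the QDR rate mentioned above:

```shell
#!/bin/sh
# Minimal sketch: flag IB ports whose negotiated rate is below QDR (40 Gb/sec).
# The here-doc below is a pasted sample; in practice you would pipe
# `ibstatus` output straight into check_rate on each node.
check_rate() {
  awk '/rate:/ {
        # a rate line looks like: "rate:   20 Gb/sec (4X DDR)"
        if ($2 + 0 < 40) print "below QDR -> " $0
      }'
}

check_rate <<'EOF'
	rate:		 20 Gb/sec (4X DDR)
EOF
```

Run per node as `ibstatus | check_rate`; a port that came up at the full 40 Gb/sec QDR rate prints nothing, so only underperforming links are reported.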
_______________________________________________
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss