Hi,


I was troubleshooting a similar problem about two years ago, and I think our 
problem was with the InfiniBand drivers or the InfiniBand switch firmware: 
some driver versions created unstable connections at full speed, and some 
gave very slow speeds, but all were tested against the same version of the 
switch firmware.

But I never retested after the IB switch firmware was updated.

What is the topology of your IB connection, and which firmware and driver 
versions are you using?
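
If it helps, here is a minimal Python sketch for collecting that information 
on each node; it assumes the standard OFED diagnostic tools (ibstat, 
ibv_devinfo, ofed_info) are installed:

import subprocess

# Dump HCA state/firmware, verbs-level device info, and the OFED stack
# version. Assumes the standard OFED tools are on PATH.
for cmd in (["ibstat"], ["ibv_devinfo"], ["ofed_info", "-s"]):
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        print("==", " ".join(cmd), "==")
        print(result.stdout)
    except (FileNotFoundError, subprocess.CalledProcessError) as exc:
        print(" ".join(cmd), "failed:", exc)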




V.

---------- Original message ----------
From: Nicholas Mills <[email protected]>
To: [email protected]
Date: 7. 10. 2016 22:17:07
Subject: [Pvfs2-users] Slow performance with InfiniBand

"

All,



I'm having an issue with OrangeFS (trunk and v2.9.5) performance on a 
cluster with QLogic QLE7340 HCAs. I set up a file system on 8 nodes after 
configuring with --with-openib and --without-bmi-tcp. I mounted this file 
system on a 9th node (the client) and wrote a 1 GB file, but the speed was 
only 13.028 MB/s. If I reinstall OrangeFS with TCP, I get 1175.388 MB/s when 
transferring an 863 GB file on the same nodes.
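
In case it is useful for reproducing this, here is a minimal Python sketch 
of a sequential 1 GB write test; the mount point /mnt/orangefs is a 
placeholder for wherever the client mounted the file system:

import os
import time

MOUNT = "/mnt/orangefs"            # placeholder client mount point
PATH = os.path.join(MOUNT, "testfile")
CHUNK = b"\0" * (4 * 1024 * 1024)  # write in 4 MiB chunks
TOTAL = 1024 ** 3                  # 1 GiB total

start = time.time()
with open(PATH, "wb") as f:
    written = 0
    while written < TOTAL:
        f.write(CHUNK)
        written += len(CHUNK)
    f.flush()
    os.fsync(f.fileno())           # ensure data actually reaches the servers
elapsed = time.time() - start
print("%.3f MB/s" % (TOTAL / elapsed / 1e6))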




I also have access to another cluster with Mellanox MX354A HCAs. On this 
cluster I get 492.629 MB/s when writing a 1 GB file with OrangeFS/InfiniBand. 
I'm wondering if there is an issue with BMI on QLogic HCAs.




Thanks,




Nick Mills

Graduate Research Assistant

Clemson University


_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
