I installed the default QLogic drivers in Ubuntu 14.04.1 LTS. The kernel
module is qla4xxx.ko version 5.04.00-k1. I wasn't able to find a firmware
version with ibstat or ibv_devinfo (pasted below). I'm not sure what you
mean by topology. All the nodes should be connected to the same switch at
QDR speeds.

I didn't mention this in my original email, but I installed a different
file system (BeeGFS) on the same InfiniBand hardware and was able to write
an 863 GB file at 1750.052 MB/s.
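For scale, a quick back-of-the-envelope on those three rates (assuming
decimal units, i.e. 1 GB = 1000 MB, since I'm not sure which convention the
benchmark reports):

```python
# Rough transfer-time comparison: how long the same 863 GB file would
# take at each of the measured rates (decimal units assumed).
size_mb = 863 * 1000  # 863 GB in MB

rates = [
    ("OrangeFS over IB (QLogic)", 13.028),
    ("OrangeFS over TCP",         1175.388),
    ("BeeGFS over IB",            1750.052),
]

for label, rate_mb_s in rates:
    minutes = size_mb / rate_mb_s / 60
    print(f"{label}: {minutes:.1f} min")
```

At 13 MB/s that file would take over 18 hours, versus about 8 minutes under
BeeGFS on the same hardware, which is why I suspect the transport layer
rather than the disks.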

$ ibv_devinfo
hca_id: qib0
        transport:                      InfiniBand (0)
        fw_ver:                         0.0.0
        node_guid:                      0011:7500:0070:5ed4
        sys_image_guid:                 0011:7500:0070:5ed4
        vendor_id:                      0x1175
        vendor_part_id:                 29474
        hw_ver:                         0x2
        board_id:                       InfiniPath_QLE7340
        phys_port_cnt:                  1
                port:   1
                        state:                  PORT_ACTIVE (4)
                        max_mtu:                4096 (5)
                        active_mtu:             2048 (4)
                        sm_lid:                 812
                        port_lid:               718
                        port_lmc:               0x00
                        link_layer:             InfiniBand
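In case it helps with comparing the 8 server nodes, here is a small
(hypothetical) helper I could use to pull fields like fw_ver and board_id
out of ibv_devinfo output such as the paste above; it is not part of any
OrangeFS or OFED tooling, just a sketch:

```python
# Parse "key: value" lines from ibv_devinfo output into a dict.
# partition() splits on the FIRST colon only, so colon-separated
# GUID values like 0011:7500:0070:5ed4 survive intact.
def parse_devinfo(text):
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info

sample = """\
hca_id: qib0
fw_ver: 0.0.0
node_guid: 0011:7500:0070:5ed4
board_id: InfiniPath_QLE7340
state: PORT_ACTIVE (4)
"""

fields = parse_devinfo(sample)
print(fields["board_id"], fields["fw_ver"], fields["state"])
```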

-Nick

On Fri, Oct 7, 2016 at 5:43 PM, vithanousek <[email protected]> wrote:

> Hi,
>
> I was solving a similar problem about two years ago, and I think our
> problem was with the InfiniBand drivers or the InfiniBand switch firmware.
> Some driver versions created unstable connections at "full" speed, and
> others were very slow, but all were tested against the same version of the
> switch firmware.
>
> But I didn't test it after IB switch firmware update.
>
> What is the topology of your IB connection, and what firmware and driver
> versions are you using?
>
> V.
>
> ---------- Original message ----------
> From: Nicholas Mills <[email protected]>
> To: [email protected]
> Date: 7. 10. 2016 22:17:07
> Subject: [Pvfs2-users] Slow performance with InfiniBand
>
> All,
>
> I'm having an issue with OrangeFS (trunk and v.2.9.5) performance on a
> cluster with QLogic QLE 7340 HCAs. I set up a file system on 8 nodes after
> configuring --with-openib and --without-bmi-tcp. I mounted this file
> system on a 9th node (the client) and wrote a 1 GB file, but the speed was
> only 13.028 MB/s. If I reinstall OrangeFS with TCP I can get 1175.388 MB/s
> when transferring an 863 GB file on the same nodes.
>
> I also have access to another cluster with Mellanox MX354A HCAs. On that
> cluster I could get 492.629 MB/s when writing a 1 GB file with
> OrangeFS/InfiniBand.
> I'm wondering if there is an issue with BMI on QLogic HCAs.
>
> Thanks,
>
> Nick Mills
> Graduate Research Assistant
> Clemson University
> _______________________________________________
> Pvfs2-users mailing list
> [email protected]
> http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
>
>
