for QLogic the script works, but then there is another parameter to change
(the peer_credits value), otherwise Lustre will complain and it will not work.
At least this is the case for my old QLogic QDR cards.
I do not know if this applies to newer QLogic cards too.
I'll write a patch to the script that will [...]
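For reference, peer credits for o2ib are ko2iblnd module parameters, so the
kind of override meant here is a small sketch like the following (the values
are placeholders, not tested recommendations):

  # /etc/modprobe.d/ko2iblnd.conf  (placeholder values)
  options ko2iblnd peer_credits=8 peer_credits_hiw=4

followed by unloading and reloading the lnet/ko2iblnd modules so the new
values take effect.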
On 8/23/17 7:39 AM, Mohr Jr, Richard Frank (Rick Mohr) wrote:
> On Aug 22, 2017, at 7:14 PM, Riccardo Veraldi wrote:
>
> On 8/22/17 9:22 AM, Mannthey, Keith wrote:
>> You [...] not expected.
>>
> yes, they are automatically used on my Mellanox, and the ko2iblnd-probe script
> seems to not be working properly.
The ko2iblnd-probe script looks in /sys/class/infiniband to see what type of
HCA is installed and chooses the module settings based on what it finds.
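As a concrete illustration (the device names below are typical examples, not
taken from this system), the check amounts to something like:

  $ ls /sys/class/infiniband
  mlx4_0     # a Mellanox ConnectX-3 port; a QLogic QDR card would show up
             # as qib0, and an Intel OmniPath HFI as hfi1_0

and the OPA-specific (ko2iblnd-opa) tuning is only meant to be applied when an
hfi1 device is found there.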
thanks
Rick
From: lustre-discuss [mailto:lustre-discuss-boun...@lists.lustre.org] On Behalf Of Chris Horn
Sent: Monday, August 21, 2017 12:40 PM
To: Riccardo Veraldi; Arman Khalatyan
Cc: lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] Lustre poor performance
The ko2iblnd-opa settings are tuned specifically for Intel OmniPath. Take a
look at the /usr/sbin/ko2iblnd-probe script to see how those settings get
selected.
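If you want to verify which tuning actually got applied on a node, the live
module parameters are exposed under sysfs; a quick check (standard paths,
shown here only as an example) is:

  $ cat /sys/module/ko2iblnd/parameters/peer_credits
  $ cat /sys/module/ko2iblnd/parameters/peer_credits_hiw
  $ cat /sys/module/ko2iblnd/parameters/map_on_demand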
To: Arman Khalatyan
Cc: "lustre-discuss@lists.lustre.org"
Subject: Re: [lustre-discuss] Lustre poor performance
I ran my LNet self-test again, and this time, adding --concurrency=16, I can
use all of the IB bandwidth (3.5 GB/sec).
The only thing I do not understand is why ko2[...]
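For anyone who wants to repeat that kind of measurement, a minimal LNet
self-test session along these lines (the server NID below is a placeholder;
only the client NID and the o2ib5 network come from this setup) looks roughly
like:

  modprobe lnet_selftest
  export LST_SESSION=$$
  lst new_session rw_test
  lst add_group clients 172.21.52.83@o2ib5    # this client
  lst add_group servers 172.21.52.1@o2ib5     # placeholder server NID
  lst add_batch bulk
  lst add_test --batch bulk --concurrency 16 --from clients --to servers \
      brw write size=1M
  lst run bulk
  lst stat clients servers    # bandwidth / RPC rate counters
  lst stop bulk
  lst end_session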
>> [...]
>>
>> 2. Lnet_selftest: Please see "Chapter 28. Testing Lustre Network
>> Performance (LNet Self-Test)" in the Lustre manual if this is a new test for
>> you.
>> This will help show how much LNet bandwidth you have from your single
>> client. There are tunables in the LNet layer that can affect things. Which
>> QDR HCA are you using?
>>
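(As an aside, the quickest way to answer the "which HCA" question is usually
one of the standard tools below; both commands are generic examples, not
specific to this cluster:)

  ibstat          # lists each CA, its type, firmware and port state
  lspci | grep -i -e infiniband -e omni-path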
>> iw_cm                  46022  1 rdma_cm
>> ib_core               210381  15 rdma_cm,ib_cm,iw_cm,rpcrdma,ko2iblnd,mlx4_ib,ib_srp,ib_ucm,ib_iser,ib_srpt,ib_umad,ib_uverbs,rdma_ucm,ib_ipoib,ib_isert
>> sunrpc                334343  17 nfs,nfsd,rpcsec_gss_krb5,auth_rpcgss,lockd,nfsv4,rpcrdma,nfs_acl
>>
>> I do not know where to look to have LNet performing faster. I am
>> running [...]
> Sent: Friday, August 18, 2017 11:31 AM
> To: Mannthey, Keith; Dennis Nelson; lustre-discuss@lists.lustre.org
> Subject: Re: [lustre-discuss] Lustre poor performance
>
> thank you Keith,
> I will do all this. The single-thread dd test shows 1 GB/sec. I will do the [...]
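(For context, a single-stream dd test of that sort is typically something
like the following; the mount point, file name and sizes are placeholders:)

  dd if=/dev/zero of=/mnt/lustre/ddtest bs=1M count=10000 oflag=direct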
> [...] NVMe/ZFS setup can do at the OBD layer in Lustre.
>
> Thanks,
> Keith
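(The usual tool for measuring what an OST backend can do at the OBD layer is
obdfilter-survey from lustre-iokit; a minimal local run on an OSS, with purely
illustrative parameters, looks something like:)

  nobjhi=2 thrhi=16 size=1024 case=disk sh /usr/bin/obdfilter-survey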
-----Original Message-----
From: lustre-discuss [mailto:lustre-discuss-boun...@lists.lustre.org] On Behalf Of Riccardo Veraldi
Sent: Thursday, August 17, 2017 10:48 PM
To: Dennis Nelson; lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] Lustre poor performance
this is my lustre.conf
[drp-tst-ffb01:~]$ cat /etc/modprobe.d/lustre.conf
options lnet networks=o2ib5(ib0),tcp5(enp1s0f0)
data transfer is over InfiniBand
ib0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 65520
        inet 172.21.52.83  netmask 255.255.252.0  broadcast 172.21.55.255
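(A quick way to confirm that both LNet networks actually came up on a node,
using the standard utilities shipped with Lustre 2.10:)

  lctl list_nids
  lnetctl net show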
On 8/17/17 10:45 PM, Riccardo Veraldi wrote:
On 8/17/17 9:22 PM, Dennis Nelson wrote:
> It appears that you are running iozone on a single client? What kind of
> network is tcp5? Have you looked at the network to make sure it is not the
> bottleneck?
>
yes, the data transfer is on the ib0 interface, and I did a memory-to-memory
test through InfiniBand [...]
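(A memory-to-memory check of that kind is usually done with the perftest
tools; a typical run, where the device name and server address are
placeholders, looks like:)

  # on the server node
  ib_write_bw -d mlx4_0
  # on the client node
  ib_write_bw -d mlx4_0 <server_ip>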
On 8/17/17 8:56 PM, Jones, Peter A wrote:
> Riccardo
>
> I expect that it will be useful to know which version of ZFS you are using
apologies for not telling this; I am running ZFS 0.7.1
>
> Peter
>
> On 8/17/17, 8:21 PM, "lustre-discuss on behalf of Riccardo Veraldi" wrote:
Hello,
I am running Lustre 2.10.0 on CentOS 7.3.
I have one MDS and two OSSes, each with one OST;
each OST is a ZFS raidz1 with 6 NVMe disks.
The configuration of ZFS is done in a way to allow maximum write
performance:
zfs set sync=disabled drpffb-ost02
zfs set atime=off drpffb-ost02
zfs set [...]
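(For reference, the quickest way to confirm what a dataset is actually
running with is zfs get; the property list here is just an example:)

  zfs get sync,atime,recordsize,compression drpffb-ost02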