On Tue, 16 Jun 2020 at 15:32, Giovanni Bracco <giovanni.bra...@enea.it> wrote:

>
> > I would correct MaxMBpS -- put it at something reasonable, enable
> > verbsRdmaSend=yes and
> > ignorePrefetchLUNCount=yes.
>
> Now we have set:
> verbsRdmaSend yes
> ignorePrefetchLUNCount yes
> maxMBpS 8000
>
> but the only parameter which has a strong effect by itself is
>
> ignorePrefetchLUNCount yes
>
> and the read performance increased by a factor of at least 4, from
> 50 MB/s to 210 MB/s



That’s interesting... ignorePrefetchLUNCount=yes should mean it schedules IO
more aggressively. Did you also try lowering maxMBpS? I’m thinking maybe
something is getting flooded somewhere...

Another knob would be to increase workerThreads and/or prefetchPct (I don’t
quite remember how these influence each other).
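A rough sketch if you want to experiment with those (the values are just
guesses to start from, and workerThreads only takes effect after a daemon
restart as far as I remember):

  mmchconfig workerThreads=512
  mmchconfig prefetchPct=40
  mmdiag --config | grep -iE 'workerthreads|prefetchpct'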

And it would be useful to run nsdperf between client and nsd-servers, to
verify/rule out any network issue.
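If I remember right, nsdperf ships as source under /usr/lpp/mmfs/samples/net/
and has to be built first. Roughly something like this (hostnames are
placeholders, and check the README in that directory for the exact build
line, especially if you want to test the verbs/RDMA path):

  cd /usr/lpp/mmfs/samples/net
  g++ -O2 -o nsdperf -lpthread -lrt nsdperf.C
  ./nsdperf -s        # start this on the client and on the NSD server(s) under test
  ./nsdperf           # then drive it from a control session, e.g.:
    server nsdserver1
    client client1
    ttime 30
    test
    quit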


> fio --name=seqwrite --rw=write --buffered=1 --ioengine=posixaio --bs=1m
> --numjobs=1 --size=100G --runtime=60
>
> fio --name=seqread --rw=read --buffered=1 --ioengine=posixaio --bs=1m
> --numjobs=1 --size=100G --runtime=60
>
>
Not too familiar with fio, but ... does it help to increase numjobs?
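Something like this might be worth a try to see if parallelism helps
(untested on my side; the job count, queue depth and per-job size are just a
starting point):

  fio --name=seqread --rw=read --buffered=1 --ioengine=posixaio --bs=1m \
      --numjobs=4 --iodepth=8 --size=25G --runtime=60 --group_reporting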

And... do you tell both sides which fabric number they’re on («verbsPorts
qib0/1/1») so GPFS knows not to try to connect verbsPorts that can’t
communicate?
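In case it helps, a sketch of what I mean (the node class names are made up,
<device> is whatever HCA the NSD servers use, and verbsPorts is
device/port/fabric-number; the change needs a daemon restart to take effect,
I believe):

  mmchconfig verbsPorts="qib0/1/1" -N clientNodes
  mmchconfig verbsPorts="<device>/1/1" -N nsdNodes
  mmlsconfig verbsPorts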


  -jf
