Thomas,
If you are positive that the two sets of clients are not reading files on
the other OSTs, I don't think there is anything at the Lustre level that
communicates between OSSes to balance traffic or anything like that.
One possibility is congestion control at the network level.
I think this might be of some interest:
https://github.com/hpc/mpifileutils
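For what it's worth, mpifileutils provides `dcp` (a parallel `cp`) and `dsync` (the closer `rsync` analogue). A sketch of how they are typically launched is below; the rank count and paths are placeholders, not values from this thread.

```shell
# Hypothetical invocation: copy a directory tree with many MPI ranks
# instead of a single-process rsync. Rank count and paths are placeholders.
mpirun -np 64 dcp /lustre/project/src /lustre/project/dst

# dsync, also from mpifileutils, resyncs a tree and is the closer
# rsync replacement for repeated transfers.
mpirun -np 64 dsync /lustre/project/src /lustre/project/dst
```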
On 1/23/20 4:33 PM, Bernd Melchers wrote:
> Hi All,
> we are copying large data sets within our lustre filesystem and between
> lustre and an external nfs server. In both cases the performance is
> unexpectedly low, and the reason seems to be that rsync reads and writes
> in 32 kB blocks, whereas our lustre would be happier with 4 MB blocks.
Hi All,
we are copying large data sets within our lustre filesystem and between
lustre and an external nfs server. In both cases the performance is
unexpectedly low, and the reason seems to be that rsync reads and writes
in 32 kB blocks, whereas our lustre would be happier with 4 MB blocks.
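The block-size effect described above can be demonstrated with plain `dd`, independent of Lustre; this sketch uses throwaway temp files, and `dd` prints its own throughput figure on completion.

```shell
# Sketch: compare copy throughput at rsync's internal 32 kB block size
# versus a 4 MB block size. Temp files stand in for the real data.
src=$(mktemp)
dst_small=$(mktemp)
dst_large=$(mktemp)

# 64 MB of test data
dd if=/dev/zero of="$src" bs=1M count=64 status=none

# copy in 32 kB blocks (what rsync uses internally)
dd if="$src" of="$dst_small" bs=32k

# copy in 4 MB blocks (the size this filesystem prefers)
dd if="$src" of="$dst_large" bs=4M

# both copies are byte-identical regardless of block size
cmp "$src" "$dst_small" && cmp "$src" "$dst_large" && echo identical

rm -f "$src" "$dst_small" "$dst_large"
```

On a local filesystem the difference is modest; on a network filesystem with a large preferred RPC size, the small-block copy pays a per-request penalty on every 32 kB transfer.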
Hi all,
Lustre 2.10.6, 45 OSS with 7 OSTs each on ZFS 0.7.9, 3 MDTs (ldiskfs), clients 2.10 and 2.12.
Infiniband network, Mellanox FDR with half bisection bandwidth.
A sample of ~250,000 files, stripe count 1, average size 100 MB, is read with dd,
with output redirected to /dev/null.
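A minimal sketch of such a read test is below; the three temp files stand in for the real sample, and on a Lustre client `lfs getstripe -i FILE` would additionally report which OST index each file lives on.

```shell
# Build a small stand-in file list (the real test reads ~250,000 files).
filelist=$(mktemp)
for i in 1 2 3; do
    f=$(mktemp)
    dd if=/dev/zero of="$f" bs=1M count=1 status=none
    echo "$f" >> "$filelist"
done

# Stream each file to /dev/null in 4 MB blocks.
while read -r file; do
    # on a Lustre client, also record placement: lfs getstripe -i "$file"
    dd if="$file" of=/dev/null bs=4M status=none
done < "$filelist"
echo "read $(wc -l < "$filelist") files"
```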
The location of the files