David,
You are right, there is a lock. As Patrick mentioned,
https://jira.hpdd.intel.com/browse/LU-1669 will solve your problems. Please
check it out.
In my own experience, the Lustre 2.7.0 client does solve this problem very
well, and I have gotten very good performance so far.
Regards,
Cuong
On Wed, Ma
We do use checksums, but can't turn them off. I know we've measured some
performance penalty with checksums. I'll check about configuring the Lustre
clients to use RDMA. We ran into something similar where our MPI
programs were not taking advantage of the InfiniBand - we noticed much
slower mess
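(A quick, hedged note on the checksum question above: client-side checksumming can usually be inspected and toggled per OSC with lctl, which makes an A/B timing comparison easy. The parameter below is the usual one on 2.x clients, but check your own version; the setting does not persist across remounts.)

$ lctl get_param osc.*.checksums      # 1 = checksums enabled
$ lctl set_param osc.*.checksums=0    # disable temporarily for a timing comparison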
Ah. I think I know what's going on here:
In Lustre 2.x client versions prior to 2.6, only one process on a given
client can write to a given file at a time, regardless of how the file is
striped. So if you are writing to the same file, there will be little to
no benefit of putting an extra proce
> On May 19, 2015, at 1:44 PM, Schneider, David A.
> wrote:
>
> Thanks for the suggestion! When I had each rank run on a separate compute
> node/host, I saw parallel performance (4 seconds for the 6GB of writing).
> When I ran the MPI job on one host (the hosts have 12 cores, by default we
>
Thanks for the suggestion! When I had each rank run on a separate compute
node/host, I saw parallel performance (4 seconds for the 6GB of writing). When
I ran the MPI job on one host (the hosts have 12 cores, by default we pack
ranks onto as few hosts as possible), things happened serially, each
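If the serialization really is the per-client write lock Patrick describes above, one quick check is to force the ranks onto different nodes instead of packing them. This is only a sketch and assumes Open MPI (other launchers have equivalent options); the program name is a placeholder:

$ mpirun --map-by node -np 12 ./my_mpi_writer      # round-robin ranks across nodes (recent Open MPI)
$ srun --ntasks-per-node=1 -n 12 ./my_mpi_writer   # rough SLURM equivalent, if that is your launcher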
Hi Jeff,
I know we have InfiniBand; however, when I ran lctl, what I see (maybe I
shouldn't put our IP addresses on the internet, so I'll xxx them out) is
.xx.xx.xx@tcp2
.xx.xx.xx@tcp
Unfortunately, I'm not sure how to look at the interface for these types; maybe
they are in turn conn
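Since those NIDs end in @tcp, LNet is running over a kernel network interface rather than native IB verbs. A rough way to see which device that is (addresses and device names below are placeholders):

$ lctl list_nids
x.x.x.x@tcp
$ ip addr | grep -B2 'x.x.x.x'     # find the device that owns that address
$ cat /sys/class/net/DEV/type      # DEV is that device; 1 = Ethernet, 32 = InfiniBand (IPoIB)

If the device turns out to be an IPoIB interface (e.g. ib0), the client is still going through the TCP stack rather than o2ib/RDMA.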
> On May 19, 2015, at 11:40 AM, Schneider, David A.
> wrote:
>
> When working with HDF5 and MPI, I have seen a number of references about
> tuning parameters; I haven't dug into this yet. I first want to make sure
> Lustre has high output performance at a basic level. I tried to write a C
David,
What interconnect are you using for Lustre? (IB/o2ib [FDR, QDR, DDR],
Ethernet/tcp [40GbE, 10GbE, 1GbE]). You can run 'lctl list_nids' and see
what protocol lnet is binding to, then look at that interface for the
specific type.
Also, do you know anything about the server side of your Lu
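For reference, the two common forms of 'lctl list_nids' output look roughly like this (addresses are made up):

$ lctl list_nids
10.0.0.5@o2ib      # LNet over native IB verbs (ko2iblnd)
10.0.0.5@tcp       # LNet over a kernel TCP interface (ksocklnd), even if that interface is IPoIB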
Thanks. For the client, where I am running from, I have
$ cat /proc/fs/lustre/version
lustre: 2.1.6
kernel: patchless_client
build: jenkins--PRISTINE-2.6.18-348.4.1.el5
best,
David Schneider
From: Patrick Farrell [p...@cray.com]
Sent: Tuesday, May 19,
For the clients, cat /proc/fs/lustre/version
For the servers, it's the same, but presumably you don't have access.
On 5/19/15, 11:01 AM, "Schneider, David A."
wrote:
>Hi,
>
>My first test was just to do the for loop where I allocate a 4MB buffer,
>initialize it, and delete it. That program ran
Hi,
My first test was just to do the for loop where I allocate a 4MB buffer,
initialize it, and delete it. That program ran at about 6 GB/sec. Once I write
to a file, I drop down to 370 MB/sec. Our top performance for I/O to one file
has been about 400 MB/sec.
For this question: Which versions a
David
You note that you write a 6GB file. I suspect that your Linux systems
have significantly more memory than 6GB, meaning your file will end up being
cached in the system buffers. It won't matter how many OSTs you use, as
you probably are not measuring the speed to the OSTs, but rather, you
a
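One hedged way to take the page cache out of the measurement is to make the flush part of the timing, or bypass the cache entirely; the target path below is just a placeholder on the Lustre mount:

$ dd if=/dev/zero of=/lustre/scratch/ddtest bs=4M count=1500 conv=fdatasync   # ~6GB, timing includes the final flush
$ dd if=/dev/zero of=/lustre/scratch/ddtest bs=4M count=1500 oflag=direct     # O_DIRECT, bypasses the page cache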
Which versions are you using in servers and clients?
On Wed, May 20, 2015 at 12:40 AM, Schneider, David A.
<david...@slac.stanford.edu> wrote:
> I am trying to get good performance with parallel writing to one file
> through MPI. Our cluster has high performance when I write to separate
> files,
I am trying to get good performance with parallel writing to one file through
MPI. Our cluster has high performance when I write to separate files, but when
I use one file, I see very little performance increase.
As I understand it, our cluster defaults to using one OST per file. There are many
OST
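If the default really is one OST per file, it may be worth striping the output directory (or the file, before it is first written) across more OSTs; a minimal sketch with placeholder paths and counts:

$ lfs getstripe /lustre/scratch/outdir          # show the current layout
$ lfs setstripe -c 8 /lustre/scratch/outdir     # new files created here stripe over 8 OSTs
$ lfs setstripe -c -1 /lustre/scratch/outdir    # or stripe over all available OSTs

Note that, per the replies above, striping alone won't help if all the writers are on a single pre-2.6 client.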