David,
You are right, there is a lock. As Patrick mentioned,
https://jira.hpdd.intel.com/browse/LU-1669 will solve your problems. Please
check it out.
In my own experience, the Lustre 2.7.0 client does solve this problem very
well, and I have gotten very good performance so far.
Regards,
Cuong
We do use checksums, but can't turn them off. I know we've measured some
performance penalty with checksums. I'll check about configuring the Lustre
clients to use RDMA. We ran into something similar where our MPI programs
were not taking advantage of the InfiniBand - we noticed much slower mess
Ah. I think I know what's going on here:
In Lustre 2.x client versions prior to 2.6, only one process on a given
client can write to a given file at a time, regardless of how the file is
striped. So if you are writing to the same file, there will be little to
no benefit of putting an extra process on the same client.
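To make that concrete, below is a minimal sketch (not anyone's actual code from
this thread) of the shared-file pattern in question: each MPI rank writes its
own contiguous block of a single file through MPI-IO. On clients older than
2.6, ranks sharing a node would still be serialized by the per-client write
lock described above. The file name and block size are made up.

    /* Sketch: each rank writes a disjoint, contiguous block of one shared
     * file via MPI-IO. "shared.dat" and the 64 MB block size are arbitrary. */
    #include <mpi.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const size_t block = 64UL * 1024 * 1024;   /* 64 MB per rank */
        char *buf = malloc(block);
        memset(buf, 'a' + (rank % 26), block);

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "shared.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* Each rank writes at its own offset; the collective call lets the
         * MPI-IO layer aggregate the requests. */
        MPI_File_write_at_all(fh, (MPI_Offset)rank * (MPI_Offset)block,
                              buf, (int)block, MPI_CHAR, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        free(buf);
        MPI_Finalize();
        return 0;
    }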
> On May 19, 2015, at 1:44 PM, Schneider, David A.
> wrote:
>
> Thanks for the suggestion! When I had each rank run on a separate compute
> node/host, I saw parallel performance (4 seconds for the 6GB of writing).
> When I ran the MPI job on one host (the hosts have 12 cores, by default we
>
best,
David Schneider
From: Mohr Jr, Richard Frank (Rick Mohr) [rm...@utk.edu]
Sent: Tuesday, May 19, 2015 9:15 AM
To: Schneider, David A.
Cc: lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] problem getting high performance output to single file
David Schneider
From: Jeff Johnson [jeff.john...@aeoncomputing.com]
Sent: Tuesday, May 19, 2015 9:11 AM
To: Schneider, David A.; Patrick Farrell; John Bauer;
lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] problem getting high performance output to single
file
> On May 19, 2015, at 11:40 AM, Schneider, David A.
> wrote:
>
> When working from HDF5 and MPI, I have seen a number of references about
> tuning parameters, but I haven't dug into them yet. I first want to make
> sure Lustre has high output performance at a basic level. I tried to write a C
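As a point of reference only, a basic parallel HDF5 write test of the kind
described above might look roughly like the sketch below. This is an
illustration, not the actual program: the file name, dataset name, and sizes
are invented, and it assumes an HDF5 build with the parallel (MPI-IO) driver.

    /* Sketch: every rank writes its own slab of one shared HDF5 dataset. */
    #include <mpi.h>
    #include <hdf5.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        const hsize_t per_rank = 1 << 20;          /* elements per rank (arbitrary) */
        double *buf = malloc(per_rank * sizeof(double));
        for (hsize_t i = 0; i < per_rank; i++) buf[i] = (double)rank;

        /* Open the file collectively through the MPI-IO driver. */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        hid_t file = H5Fcreate("shared.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        /* One 1-D dataset covering all ranks. */
        hsize_t dims[1] = { per_rank * (hsize_t)nranks };
        hid_t filespace = H5Screate_simple(1, dims, NULL);
        hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_DOUBLE, filespace,
                                H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        /* Each rank selects its own hyperslab and writes collectively. */
        hsize_t start[1] = { per_rank * (hsize_t)rank };
        hsize_t count[1] = { per_rank };
        H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, count, NULL);
        hid_t memspace = H5Screate_simple(1, count, NULL);

        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
        H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, buf);

        H5Pclose(dxpl); H5Sclose(memspace); H5Sclose(filespace);
        H5Dclose(dset); H5Pclose(fapl); H5Fclose(file);
        free(buf);
        MPI_Finalize();
        return 0;
    }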
best,
David Schneider
From: Patrick Farrell [p...@cray.com]
Sent: Tuesday, May 19, 2015 9:03 AM
To: Schneider, David A.; John Bauer; lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] problem getting high performance output to single
file
For the clients, cat /proc/fs/lustre/version
For the servers, it's the same, but presumably you don't have access.
From: John Bauer [bau...@iodoctors.com]
Sent: Tuesday, May 19, 2015 8:52 AM
To: lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] problem getting high performance output to single
file
David
You note that you write a 6GB file. I suspect that your Linux systems
have significantly more memory than 6GB, meaning your file will end up being
cached in the system buffers. It won't matter how many OSTs you use, as
you probably are not measuring the speed to the OSTs, but rather the speed
of the system buffer cache.
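One way to see this effect is to time the writes both before and after an
fsync(): with 6GB fitting comfortably in RAM, the first number mostly reflects
the page cache and the second includes flushing the data out to the OSTs. A
rough sketch (file name and sizes made up):

    /* Sketch: write ~6 GB, then fsync, and report both timings. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    static double now(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        const size_t chunk = 4UL << 20;            /* 4 MB per write()    */
        const size_t total = (size_t)6 << 30;      /* ~6 GB in total      */
        char *buf = malloc(chunk);
        memset(buf, 'x', chunk);

        int fd = open("test.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        double t0 = now();
        for (size_t done = 0; done < total; done += chunk) {
            if (write(fd, buf, chunk) != (ssize_t)chunk) {
                perror("write");
                return 1;
            }
        }
        double t1 = now();
        fsync(fd);                                 /* force data out to the OSTs */
        double t2 = now();
        close(fd);

        printf("write() only : %.1f s\n", t1 - t0);
        printf("with fsync() : %.1f s\n", t2 - t0);
        free(buf);
        return 0;
    }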
Which versions are you using on the servers and clients?
On Wed, May 20, 2015 at 12:40 AM, Schneider, David A.
<david...@slac.stanford.edu> wrote:
I am trying to get good performance with parallel writing to one file through
MPI. Our cluster has high performance when I write to separate files, but when
I use one file, I see very little performance increase.
As I understand it, our cluster defaults to using one OST per file. There are
many OSTs
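For what it's worth, if the directory default really is a single stripe, the
usual ways to get a shared file spread over several OSTs are to set the stripe
count on the directory beforehand (lfs setstripe -c <count> <dir>) or, when the
file is created through MPI-IO, to pass striping hints. The sketch below shows
the hint approach; the hint names are the ones ROMIO's Lustre driver is
generally documented to accept, and the stripe count and size here are
arbitrary examples.

    /* Sketch: ask the MPI-IO layer to stripe a new shared file across
     * several OSTs via info hints. Values are made-up examples. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "striping_factor", "8");      /* stripe over 8 OSTs */
        MPI_Info_set(info, "striping_unit", "1048576");  /* 1 MB stripe size   */

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "striped.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);
        /* ... write as usual; hints only take effect at file creation ... */
        MPI_File_close(&fh);
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }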