From: John Bent
To: John Bauer
Subject: Re: [lustre-discuss] varying sequential read performance.
"I suspect that this OSC is using an OSS that is under heavier load."
If you want to confirm this, it seems like you could create files with
striping parameters such that you have a single file on each OSS. Well, I
know you can make stripe=1 so it's only on one OSS, but can you
control/query *which* OSS?
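You can do both with `lfs`. A minimal sketch, assuming a Lustre mount at /mnt/lustre with 8 OSTs (mount point, OST count, and file names here are illustrative, not from the thread): `-c 1` sets the stripe count to one and `-i` pins the starting (and only) OST index, while `lfs getstripe` reports the `obdidx` a file actually landed on.

```shell
# Hypothetical mount point and OST count -- adjust to your filesystem.
MNT=/mnt/lustre
NUM_OSTS=8

# Create one single-stripe file per OST: -c 1 = stripe count,
# -i = explicit starting OST index.
for idx in $(seq 0 $((NUM_OSTS - 1))); do
    lfs setstripe -c 1 -i "${idx}" "${MNT}/ostfile.${idx}"
done

# Query placement: the obdidx column names the OST backing each file.
lfs getstripe "${MNT}"/ostfile.*
```

Reading each such file in turn would then exercise one OSS at a time, which is one way to isolate a server under heavier load.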
> On Apr 5, 2018, at 11:31 AM, John Bauer wrote:
>
> I don't have access to the OSS so I can't report on the Lustre settings. I
> think the client-side max cached is 50% of memory.
Looking at your cache graph, that looks about right.
> After speaking with Doug Petesch of Cray, I thought I wou
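For reference, the client-side ceiling John mentions can be read without root via `lctl get_param`; the `llite.*.max_cached_mb` parameter is the standard name, and its default on recent Lustre releases is roughly half of client RAM, which matches the "50% of memory" guess.

```shell
# Per-mount client page-cache ceiling, in MB.
# Default is about half of client RAM on current Lustre versions.
lctl get_param llite.*.max_cached_mb
```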
Rick,
Thanks for the reply. Also, thanks to Patrick Farrell for making me rethink
this.
I am coming to believe that it is an OSS issue. Every time I run this
job, the first pass of dd is slow, which I now attribute to all the OSSs
needing to initially read the data in from disk to OSS cache. If
John,
I had a couple of thoughts (though not sure if they are directly relevant to
your performance issue):
1) Do you know what caching settings are applied on the Lustre servers? This
could have an impact on performance, especially if your tests are being run
while others are doing IO on the
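If you do get access to an OSS, a sketch of the usual read-cache knobs to check there, assuming an ldiskfs-backed OSS (the parameter names are the standard `obdfilter` tunables; whether they apply depends on the backend):

```shell
# OSS-side read-cache settings (run on the server, needs access there).
lctl get_param obdfilter.*.read_cache_enable
lctl get_param obdfilter.*.writethrough_cache_enable
lctl get_param obdfilter.*.readcache_max_filesize
```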
complete first. This
is subtler but it’s the same effect.
- Patrick
From: lustre-discuss on behalf of John Bauer
Sent: Tuesday, April 3, 2018 1:23:30 AM
To: Colin Faber
Cc: lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] varying sequential read performance.
Colin,
Since I do not have root privileges on the system, I do not have access
to dropcache. So, no, I do not flush cache between the dd runs. The 10
dd runs were done in a single
job submission and the scheduler does dropcache between jobs, so the
first of the dd passes does start with a vir
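For completeness, a sketch of the two usual ways to empty a client's cache between runs; both typically require root, which is exactly John's constraint (paths and parameter names are the standard Linux/Lustre ones, not taken from the thread):

```shell
# Root-only: drop the kernel's clean page cache (what "dropcache" wraps).
echo 3 > /proc/sys/vm/drop_caches

# Lustre-specific alternative (also normally root): releasing all client
# LDLM locks forces the cached pages held under them to be discarded.
lctl set_param ldlm.namespaces.*.lru_size=clear
```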
Are you flushing cache between test runs?
On Mon, Apr 2, 2018, 6:06 PM John Bauer wrote:
> I am running dd 10 times consecutively to read a 64GB file
> (stripeCount=4, stripeSize=4M) on a Lustre client (version 2.10.3) that has
> 64GB of memory.
> The client node was dedicated.
> *for
I am running dd 10 times consecutively to read a 64GB file
(stripeCount=4, stripeSize=4M) on a Lustre client (version 2.10.3) that
has 64GB of memory.
The client node was dedicated.

for pass in 1 2 3 4 5 6 7 8 9 10
do
    dd of=/dev/null if=${file} count=128000 bs=512K
done
Instrumentation of t
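The loop above can be rerun anywhere at a reduced scale to see the same cache effect; this sketch shrinks the file to 8 MiB and three passes (sizes and pass count are illustrative assumptions, the original used a 64 GB Lustre file):

```shell
# Scaled-down rerun of the read pattern on a local scratch file.
file=$(mktemp)
dd if=/dev/zero of="${file}" bs=512K count=16 2>/dev/null   # 8 MiB test file

for pass in 1 2 3; do
    # On Lustre, pass 1 is served by the OSS/disks; later passes can be
    # satisfied from the client page cache if the file fits in memory.
    echo "pass ${pass}:"
    dd of=/dev/null if="${file}" bs=512K 2>&1 | tail -n 1
done
rm -f "${file}"
```

The per-pass dd transfer-rate line printed by `tail -n 1` is what shows the first-pass/later-pass difference on a real Lustre mount.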