Hi,
I have a couple of questions about these stats. If these are documented
somewhere, by all means point me to them. What I found in the operations
manual and on the web did not answer my questions.
What do the following lines mean?
read_bytes     25673 samples [bytes]  1  3366225  145121869
write_bytes
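For what it's worth (not stated in the thread), counters in this shape usually read as: name, sample count, unit, then the minimum, maximum, and sum of the sampled values. A small parsing sketch using the line quoted above; the column meaning is my assumption, not something the thread confirms:

```shell
# Parse one stats line of the form quoted above.
# Column layout (min/max/sum after "samples [bytes]") is an assumption.
line="read_bytes 25673 samples [bytes] 1 3366225 145121869"
set -- $line
name=$1; samples=$2; min=$5; max=$6; sum=$7
avg=$((sum / samples))
echo "$name: $samples samples, min=$min max=$max avg=$avg bytes"
```

With the numbers above, the average request size works out to 5652 bytes, i.e. mostly small reads despite the 3 MB maximum.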
Greetings List!
What is the number from the command "lfs data_version $filename " telling
me?
I do not see "data_version" documented in "lfs -h", "man lfs", or the
Lustre manual.
I do know that if I have a zero-length file, my Robinhood scan of my Lustre
mount point indicates that "lfs get_version"
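For reference, the invocation itself is simple (the path here is a placeholder; this needs a live Lustre mount, so treat it as an illustrative fragment):

```shell
# Print the data version of a file on a Lustre mount (illustrative path).
# As I understand it, the number is an opaque version that changes when
# the file's data changes; HSM uses it to detect modification while a
# copy is being archived.
lfs data_version /mnt/lustre/somefile
```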
Simpler yet, I believe you can just manually set the OST index on which you
would like the file to reside.
lfs setstripe -c 1 -i 0 file_on_ost0
lfs setstripe -c 1 -i 1 file_on_ost1
...
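A quick way to confirm the placement afterwards, assuming the file names from the example above (`-i` asks `lfs getstripe` to print just the starting OST index):

```shell
# Verify each file landed on the intended OST (illustrative file names).
lfs getstripe -i file_on_ost0   # expect 0
lfs getstripe -i file_on_ost1   # expect 1
```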
Shawn
On 4/5/18, 3:42 PM, "lustre-discuss on behalf of Scott Denham"
wrote:
>From: John Bent
>To: John Bauer
>Subject: Re: [lustre-discuss] varying sequential read performance.
"I suspect that this OSC is using an OSS that is under heavier load."
If you want to confirm this, it seems like you could create files with
striping parameters such that you have a single file on each OSS. Well, I
know you can set stripe count = 1 so it's only on one OSS, but can you
control/query on *w
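One way to sketch that suggestion, one single-stripe file per OST (the mount point and the OST-count heuristic are assumptions about the deployment, not from the thread):

```shell
# Create one 1-stripe file on each OST so each read pass exercises a
# single OSS. /mnt/lustre and the lfs df parsing are illustrative.
mnt=/mnt/lustre
nost=$(lfs df "$mnt" | grep -c 'OST')   # rough count of OST lines
i=0
while [ "$i" -lt "$nost" ]; do
    lfs setstripe -c 1 -i "$i" "$mnt/file_on_ost$i"
    i=$((i + 1))
done
```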
> On Apr 5, 2018, at 11:31 AM, John Bauer wrote:
>
> I don't have access to the OSS so I can't report on the Lustre settings. I
> think the client-side max cached is 50% of memory.
Looking at your cache graph, that looks about right.
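That client-side limit can be checked directly; as far as I know, `llite.*.max_cached_mb` is the tunable in question (requires a mounted client):

```shell
# Show the client page-cache ceiling; on many systems it defaults to
# roughly half of RAM.
lctl get_param llite.*.max_cached_mb
```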
> After speaking with Doug Petesch of Cray, I thought I wou
Rick,
Thanks for the reply. Also thanks to Patrick Farrell for making me rethink
this.
I am coming to believe that it is an OSS issue. Every time I run this
job, the first pass of dd is slow, which I now attribute to all the OSSs
needing to initially read the data in from disk to OSS cache. If
John,
I had a couple of thoughts (though not sure if they are directly relevant to
your performance issue):
1) Do you know what caching settings are applied on the Lustre servers? This
could have an impact on performance, especially if your tests are being run
while others are doing IO on the
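If anyone does have server access, these are the OSS-side cache knobs I would look at first (parameter names assume an ldiskfs-based OSS with the obdfilter layer):

```shell
# OSS read cache and writethrough cache settings (run on the servers).
lctl get_param obdfilter.*.read_cache_enable
lctl get_param obdfilter.*.writethrough_cache_enable
```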
Hi,
I have a question about the new Data on MDT feature.
The default dom_stripesize is 1M. Does this mean that smaller files will
also consume 1M on the MDT?
I was thinking of using this for my home dirs, but there are a lot of
small files there, so maybe dom_stripesize=64k would be better.
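If it helps, a smaller DoM component can be set per directory with a composite layout like the following (the directory path is a placeholder, and `-L mdt` needs Lustre 2.11 or later):

```shell
# First 64K of each new file lives on the MDT; the remainder, if any,
# goes to OSTs with a 1M stripe. /lustre/home is illustrative.
lfs setstripe -E 64K -L mdt -E -1 -S 1M /lustre/home
```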