On Dec 10, 2012, at 11:43 AM, Grigory Shamov wrote:
> I wonder what would be the best strategy to change the striping now. I
> understand that if I just change the stripe count on the Lustre root dir, it
> will affect only newly created files/directories. Should I copy the user's
> files, stripe t
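The thread is cut off before the answer, but one common approach on 1.8 can be sketched as follows (paths and the stripe count of 4 are hypothetical examples; 1.8 has no `lfs migrate`, so restriping an existing file means rewriting it):

```shell
# Set a new default stripe count on a directory; this affects only
# files created there afterwards:
lfs setstripe -c 4 /lustre/scratch/userdir

# Existing files keep their old layout. To restripe them, rewrite each
# file so it picks up the directory's new default:
for f in /lustre/scratch/userdir/*; do
    cp -a "$f" "$f.restripe" && mv "$f.restripe" "$f"
done

# Verify the resulting layout:
lfs getstripe /lustre/scratch/userdir
```

Note the copy-and-rename is not atomic, so this is only safe for files no job is currently writing.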
From: Mark Day
Subject: Re: [Lustre-discuss] noatime or atime_diff for Lustre 1.8.7?
To: "Mohr Jr, Richard Frank (Rick Mohr)"
Cc: lustre-discuss@lists.lustre.org, "Grigory Shamov"
Date: Friday, December 7, 2012, 4:22 PM
> 2) Make sure caching is enabled on the oss.
How do you check/enable for this? Is it not enabled by default?
Cheers, Mark
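For what it's worth, on a 1.8 OSS the server-side cache settings are exposed through lctl (a sketch, assuming root access on the OSS; parameter names as in 1.8's obdfilter):

```shell
# Check whether the OSS read cache is enabled (1 = enabled; it is on
# by default in 1.8):
lctl get_param obdfilter.*.read_cache_enable
lctl get_param obdfilter.*.writethrough_cache_enable

# Re-enable them if they were turned off:
lctl set_param obdfilter.*.read_cache_enable=1
lctl set_param obdfilter.*.writethrough_cache_enable=1
```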
On Dec 6, 2012, at 2:58 PM, Grigory Shamov wrote:
> So, on one of our OSS servers the load is now 160. According to collectl,
> only one OST does most of the job. (We don't do striping on this FS, unless
> users do it manually on their subdirectories.)
This sounds similar to situations we see ev
I/O, or something of this sort to happen?
>
> --
> Grigory Shamov
>
> --- On Thu, 12/6/12, Colin Faber wrote:
>
>> From: Colin Faber
>> Subject: Re: [Lustre-discuss] noatime or atime_diff for Lustre 1.8.7?
>> To: "Grigory Shamov"
>> Cc: l
Does it mean that we have small 4K I/O, which is 34% for reads and 44% for
writes, and is that the cause of the problem?
--
Grigory Shamov
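To ground numbers like these, each OSS keeps a per-OST histogram of I/O sizes that can be read directly (a sketch, assuming server-side access; a large share of 4K buckets under "disk I/O size" would confirm small-block I/O):

```shell
# Per-OST histograms, including I/O sizes seen by the backend disk:
lctl get_param obdfilter.*.brw_stats

# Clear the counters first if you want to sample a specific workload:
lctl set_param obdfilter.*.brw_stats=0
```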
--- On Thu, 12/6/12, Dilger, Andreas wrote:
> From: Dilger, Andreas
> Subject: Re: [Lustre-discuss] noatime or atime_diff for Lustre 1.8.7?
> To: "Grigory Shamov"
> From: Colin Faber
> Subject: Re: [Lustre-discuss] noatime or atime_diff for Lustre 1.8.7?
> To: "Grigory Shamov"
> Cc: lustre-discuss@lists.lustre.org
> Date: Thursday, December 6, 2012, 11:28 AM
> Hi,
>
> The messages indicate overloaded backend storage. You could
> try this,
On 12/6/12 12:06 PM, "Grigory Shamov" wrote:
>Hi,
>
>On our cluster, when there is load on the Lustre FS, at some points it
>slows down precipitously, and there are very many "slow IO" and
>"slow setattr" messages on the OSS servers:
>
>===
>[2988758.408968] Lustre: scratch-OST0004: slow i_mutex 51s due to heavy IO load
Hi,
The messages indicate overloaded backend storage. You could try this;
another option may be to statically set the maximum number of OST service
threads on the OSS. This should reduce load on the system and push the
backlog back to your clients (hopefully).
-cf
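Colin's suggestion above can be sketched as follows (the thread count of 128 is only an example; pick a value suited to your backend storage, and check the current settings with `lctl get_param ost.OSS.ost_io.threads_*` first):

```shell
# Cap the number of OST I/O service threads at runtime:
lctl set_param ost.OSS.ost_io.threads_max=128

# To make the limit permanent, set the module option on each OSS,
# e.g. in /etc/modprobe.conf:
#   options ost oss_num_threads=128
```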
On 12/06/2012 12:06 PM, Grigory Shamov wrote:
Hi,
On our cluster, when there is load on the Lustre FS, at some points it slows
down precipitously, and there are very many "slow IO" and "slow setattr"
messages on the OSS servers:
===
[2988758.408968] Lustre: scratch-OST0004: slow i_mutex 51s due to heavy IO load
[2988758.408974] Lus