On Wed, Jan 6, 2010 at 3:35 PM, Tung Dam lexlutho...@gmail.com wrote:
Hi list
I have an issue with lustre log from our MDS, like this:
Jan 6 14:00:49 MDS1 kernel: Lustre:
19141:0:(ldlm_lockd.c:808:ldlm_server_completion_ast()) ### enqueue wait
took 164s from 1262761084 ns:
On Monday 04 January 2010 20:42:12 Andreas Dilger wrote:
On 2010-01-04, at 03:02, David Cohen wrote:
I'm using a mixed environment of 1.8.0.1 MDS and 1.6.6 OSS's (had a
problem
with qlogic drivers and rolled back to 1.6.6).
My MDS gets unresponsive each day at 4-5 am local time, no kernel
On 2010-01-06, at 01:42, Tung Dam wrote:
I have an issue with lustre log from our MDS, like this:
Jan 6 14:00:49 MDS1 kernel: Lustre: 19141:0:(ldlm_lockd.c:
808:ldlm_server_completion_ast()) ### enqueue wait took 164s from
1262761084 ns: mds-lustre-MDT_UUID lock:
The bad news is that I'm using Lustre 1.8. Do you have any idea? Could you
tell me how it could affect our system, or is it just a harmless
warning?
Many thanks
This warning has been removed in the 1.8 release of Lustre. In
particular, with FLK (flock) type locks, they can be held
Is there an ETA for Lustre 2.0 GA? I haven't been able to find a
concrete date for its availability. Any help would be much appreciated.
--
Jason Gates
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
On Wed, 2010-01-06 at 11:25 +0200, David Cohen wrote:
It was indeed the *locate update; a simple edit of /etc/updatedb.conf on the
clients and the system is stable again.
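For reference, the usual edit is to tell updatedb to skip Lustre mounts, so
the nightly cron run never walks the whole filesystem through the MDS. A
sketch of /etc/updatedb.conf (variable names are as in mlocate/slocate; the
mount path is a made-up example):

```
# /etc/updatedb.conf -- keep the nightly updatedb scan off network filesystems
PRUNEFS="lustre nfs NFS sysfs proc"
# Belt and braces: also prune the mount point itself (example path)
PRUNEPATHS="/tmp /var/spool /mnt/lustre"
```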
Great. But as Andreas said previously, load should not have caused the
LBUG that you got. Could you open a bug on our
Hi list
I have an issue with lustre log from our MDS, like this:
Jan 6 14:00:49 MDS1 kernel: Lustre:
19141:0:(ldlm_lockd.c:808:ldlm_server_completion_ast()) ### enqueue wait
took 164s from 1262761084 ns: mds-lustre-MDT_UUID lock:
8101b6265a00/0x874ddc3670da987c lrc: 3/0,0 mode: PW/PW
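For anyone triaging these messages, the two interesting fields are the wait
duration and the epoch timestamp the wait started at. A small sketch of
pulling them out of a syslog line (the sample line is the one from this
thread; the regex and the field meanings are my assumptions about the log
format, not an official parser):

```python
import re
from datetime import datetime, timezone

# Sample "enqueue wait" line as it appears in the MDS syslog (from this thread).
line = ("Jan 6 14:00:49 MDS1 kernel: Lustre: "
        "19141:0:(ldlm_lockd.c:808:ldlm_server_completion_ast()) "
        "### enqueue wait took 164s from 1262761084 ns: mds-lustre-MDT_UUID lock: "
        "8101b6265a00/0x874ddc3670da987c lrc: 3/0,0 mode: PW/PW")

# Pull out the wait duration (seconds) and the epoch timestamp it started at.
# \s* tolerates the line-wrap join ("waittook") seen in some pastes.
m = re.search(r"enqueue wait\s*took (\d+)s from (\d+)", line)
wait_s, start_epoch = int(m.group(1)), int(m.group(2))
started = datetime.fromtimestamp(start_epoch, tz=timezone.utc)

print(f"lock enqueue waited {wait_s}s, started {started:%Y-%m-%d %H:%M:%S} UTC")
```

Converting the epoch value to a wall-clock time makes it easy to line the
warning up with client-side activity (a 4-5 am cron storm, for instance).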
Hi Guys,
I would like to monitor the performance and usage of my Lustre filesystem
and was wondering what are the commonly used monitoring tools for this?
Cacti? Nagios? Any input would be greatly appreciated.
Regards,
-Simran
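Whatever frontend you pick (LMT, Cacti, Nagios), they all boil down to the
same idea: sample the cumulative counters Lustre exports under
/proc/fs/lustre twice and turn the deltas into rates. A minimal sketch of
that idea; the two snapshots are made-up example data in the usual "stats"
format, and on a live server they would come from something like
`lctl get_param obdfilter.*.stats`:

```python
# Two made-up snapshots of a Lustre "stats" file, taken INTERVAL_S apart.
# Format per line: name  count samples [unit]  min  max  cumulative_sum
SNAP_T0 = """\
read_bytes                12000 samples [bytes] 4096 1048576 2147483648
write_bytes                8000 samples [bytes] 4096 1048576 1073741824
"""
SNAP_T1 = """\
read_bytes                12400 samples [bytes] 4096 1048576 2252341248
write_bytes                8100 samples [bytes] 4096 1048576 1126170624
"""
INTERVAL_S = 10  # seconds between the two samples

def parse_sums(snapshot):
    """Return {counter_name: cumulative byte sum} for the [bytes] counters."""
    sums = {}
    for line in snapshot.splitlines():
        fields = line.split()
        if "[bytes]" in fields:
            sums[fields[0]] = int(fields[-1])
    return sums

t0, t1 = parse_sums(SNAP_T0), parse_sums(SNAP_T1)
for name in t0:
    rate = (t1[name] - t0[name]) / INTERVAL_S / 2**20  # MiB/s
    print(f"{name}: {rate:.1f} MiB/s")
```

The same delta-over-interval loop works for metadata ops on the MDS; a cron
job feeding the numbers into Cacti or a Nagios check is often enough to
start with.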
Jagga Soorma wrote:
Hi Guys,
I would like to monitor the performance and usage of my Lustre
filesystem and was wondering what are the commonly used monitoring tools
for this? Cacti? Nagios? Any input would be greatly appreciated.
Regards,
-Simran
LLNL's LMT tool is very good. It's
Last time I checked, LMT was designed for Lustre 1.4. LLNL stopped development
of LMT some time ago. Not sure if LMT will work with Lustre 1.8. If somebody
has tried, please let everyone know.
jab
Jeffrey Bennett wrote:
Last time I checked, LMT was designed for Lustre 1.4. LLNL stopped
development of LMT some time ago. Not sure if LMT will work with Lustre 1.8.
If somebody has tried, please let everyone know.
Ah, it has moved to Google:
http://code.google.com/p/lmt/
The current
I'm using LMT with 1.8 on our test system and it seems to be OK.
We're still 1.6.6 in production so it hasn't been extensively tested with 1.8.
Jim
On Wed, Jan 06, 2010 at 11:23:54AM -0800, Cliff White wrote:
Jeffrey Bennett wrote:
Last time I checked, LMT was designed for Lustre 1.4. LLNL
On 2010-01-06, at 03:49, Tung Dam wrote:
The bad news is that I'm using Lustre 1.8. Do you have any idea?
Could you tell me how it could affect our system, or is it
just a harmless warning?
It is just a harmless warning. The message is gone in CVS, err Git,
but I don't know when
Happy New Year to One and All!
My OSS (CentOS 5, Lustre 1.6.4.3smp) had an OS failure
on 28 Dec 2009. The failure was a software error which had nothing
to do with Lustre or the physical hard disks. I fixed the failure
(my typo) on the OS and restarted when I returned from my