Hello,
Replying to myself. No, we couldn't get Lustre up again and had to reinstall
from scratch.
:-(
Keeping our fingers crossed now that we are running the production system.
What bugs us is this part of the message on the MDS:
Aug 13 11:18:54 sadosrd20 LustreError: 15c-8: [EMAIL PROTECTED]: The
On Tuesday, 19 August 2008 01:59:09, Andreas Dilger wrote:
On Aug 18, 2008 15:46 +0200, Heiko Schroeter wrote:
From time to time we see these messages on our MDS (1.6.5.1) while
copying data onto Lustre.
Is this just informational, or an indicator of a broken setup? Network
load
On Tuesday 19 August 2008 03:57:55 Brock Palen wrote:
Ignore this.
After days of banging my head against the wall and trying to use
tunefs.lustre, which appears to suffer from the same bug, I
found an alternative: specifying --mgsnode= more than once is
valid. This, mixed with the
On Aug 14, 2008 10:09 -0400, Brock Palen wrote:
Yes, the patch in Bugzilla is what I wanted.
I don't know if there is a way to push this into the source on Sun's
website. It could be a switch in their build script, since it is just an
#ifdef in the source. Shouldn't be too hard.
There is a
Anyone?
On Sun, Aug 17, 2008 at 6:41 PM, Mag Gam [EMAIL PROTECTED] wrote:
Hey,
My Lustre install has created a bunch of files in /tmp/lustre-log.log.
I am trying to see what caused this crash, so I have been trying to follow this:
Hi
I put the journal on LVM over multipath for my OSTs.
Formatting the OST was done with:
mkfs.lustre -v --reformat --fsname=datafs --ost --mgsnode=10.0.0.1 \
    --mgsnode=10.0.0.2 --failover=10.0.0.3 \
    --mkfsoptions="-E stride=64 -E stripe-width=9 -J device=LABEL=JNL_OST0_DATA -i 131072"
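In case it helps to double-check those numbers: for mke2fs, stride and stripe-width are both measured in filesystem blocks, and stripe-width is normally stride times the number of data disks. A sketch of that arithmetic, where the 9-data-disk RAID geometry is an assumption and not something stated in the original post:

```shell
# Sketch of the mke2fs stride/stripe-width relationship (assumption:
# a RAID stripe with 9 data disks; adjust to your actual geometry).
# Both values are in filesystem blocks (4 KB by default), so stride=64
# means 256 KB per-disk chunks.
stride=64
data_disks=9
stripe_width=$((stride * data_disks))
echo "$stripe_width"   # with these assumptions, 576
```

Under those assumptions, the stripe-width=9 in the command above looks more like a disk count than a block count; worth verifying against the mke2fs(8) man page.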
Since this got no response, I wanted to add that fundamentally we just want
to get group quota limits. Right now we are reduced to calling lfs quota
-g group, parsing the output of that call into something sensible, and
printing it to screen. Is this the expected approach? Any thoughts on how
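For what it's worth, that parsing step can be kept small. A hedged sketch with awk, where the sample text is a hypothetical lfs quota -g transcript (the column layout varies between Lustre versions, so check the header row on your system before relying on field positions):

```shell
# Hypothetical sample of `lfs quota -g` output; the group name, mount
# point, and numbers are made up for illustration.
sample='Disk quotas for grp mygroup (gid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre  123456       0 1048576       -    1000       0   50000       -'

# Locate the header row, then print "<kbytes used> <block hard limit>
# <files used>" from the data row that follows it.
echo "$sample" | awk '$1 == "Filesystem" {getline; print $2, $4, $6}'
# -> 123456 1048576 1000
```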
Hello Andreas,
My apologies for not explaining myself. :-)
The trusted computing standards I'm talking about (there are a few, some
good, some not so much) are effectively based on US Department of Defense C2
(aka Orange Book) security standards:
http://en.wikipedia.org/wiki/TCSEC
The best
On Sun, 2008-08-17 at 15:50 -0700, Mag Gam wrote:
My department's Lustre lfs df -i output looks like this:

UUID               Inodes     IUsed     IFree  IUse%  Mounted on
xfs001-MDT_UUID  82565322  19073586  63491736    23%  /xfs/engine1/xfs001[MDT:0]
xfs001-OST_UUID  41943040
On Fri, 2008-08-15 at 07:16 -0400, Mag Gam wrote:
I am doing a case study at my university, trying to analyze
packets for LNET. I want to compare it with other network-based
filesystems, such as NFS and SMB. I plan on using tcpdump to
capture LNET packets, but I am not sure what
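One possible starting point, offered as a sketch rather than a definitive recipe: LNET over TCP (the socklnd driver) defaults to port 988, so a simple port filter should isolate the traffic. The interface name and the port are assumptions to verify on your own cluster (e.g. against the options lnet lines in modprobe.conf):

```shell
# Assumed default: socklnd (LNET over TCP) listens on port 988.
filter='tcp port 988'
# Capture full packets on the client NIC (adjust eth0), then read the
# dump back offline; both commands need root, so they are shown
# commented out in this sketch:
# sudo tcpdump -i eth0 -s 0 -w lnet.pcap "$filter"
# tcpdump -r lnet.pcap -nn | head
echo "$filter"
```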
Hello,
Occasionally when we put a client, typically a head node, under very
heavy load, it freezes all operations on the Lustre mount and requires a
hard reboot before the mount is usable again. The symptoms look similar
to the statahead problem observed by others, but I was under the
On Aug 19, 2008 20:27 -0400, Mag Gam wrote:
Yes, I have looked through the lists but could not really get this
question answered. tune2fs is an ext2/3 setting, and I thought that for
each file I create on a Lustre filesystem, an inode gets created on the
MDS and an object on the OST.
But, I don't