[Lustre-discuss] Questions about MDS flow

2011-07-01 Thread Jianwei Liao
Dear all,
I have read through the Lustre manual, but I am still unclear about the MDS operation flow:
1) Clients get the file layout and capabilities from the MDS and then perform 
I/O. When a client modifies the stripes, the modification time (mtime) must 
be updated. Who sends the update request to the MDS (the clients or the 
OSSs), and when is it sent (after the file is closed, or right after the 
modification)?
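
To make this concrete, the case I have in mind is something like the 
following (the mount point and file name are only placeholders):

lfs setstripe -c 2 /mnt/lustre/testfile                  # file striped across two OSTs
dd if=/dev/zero of=/mnt/lustre/testfile bs=1M count=4    # client writes into the stripes
stat -c 'mtime: %y' /mnt/lustre/testfile                 # at what point does this mtime reach the MDS?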

2) I have read several documents about unlinking a file, but after all of 
its stripes have been unlinked, who sends the message that all stripes have 
been removed? One document (the manual) says it is the client that wants to 
delete the file, while another (the Xyratex Lustre Architecture Priorities 
Overview) says it is the OSTs. Which is correct?
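
For example, in a simple case like this (the path is only an example), I 
would like to know which node reports back that the backing objects are gone:

lfs getstripe /mnt/lustre/testfile    # note the OST objects backing the file
rm /mnt/lustre/testfile               # unlink on the MDS; who confirms the OST objects were removed?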

3) During file creation, the MDS may ask the OSSs to allocate the objects 
backing the stripes, so that clients can then write or append data. But who 
keeps track of the available space in each stripe? For example, when all 
existing stripes are used up and another stripe (or stripes) is needed, how 
is this handled? Who first detects that there is no available space, and who 
then asks the MDS to allocate a new stripe?
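
For reference, the per-OST numbers visible from a client come from commands 
like these; my question is where this accounting actually lives and who 
reacts when it runs out:

lfs df -h /mnt/lustre    # free space per OST as seen from a client
lfs df -i /mnt/lustre    # inode/object counts per target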

4) Is the opened-file list kept in the active MDS's memory?

My description may be a little convoluted; could anyone give me some hints?
Thank you very much.

Best regards,
Liao

___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] Need help

2011-07-01 Thread Mervini, Joseph A
Hi,

I just upgraded our servers from RHEL 5.4 to RHEL 5.5 and went from Lustre 
1.8.3 to 1.8.5.

Now when I try to mount the OSTs I'm getting:

[root@aoss1 ~]# mount -t lustre /dev/disk/by-label/scratch2-OST0001 
/mnt/lustre/local/scratch2-OST0001
mount.lustre: mount /dev/disk/by-label/scratch2-OST0001 at 
/mnt/lustre/local/scratch2-OST0001 failed: No such file or directory
Is the MGS specification correct?
Is the filesystem name correct?
If upgrading, is the copied client log valid? (see upgrade docs)
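
Going by the hints in the error message, I assume the first thing to verify 
is that this OSS can still reach the MGS NIDs listed in the tunefs output 
below, e.g.:

lctl ping mds-server1@tcp1
lctl ping mds-server2@tcp1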

tunefs.lustre looks okay on both the MDT (which is mounted) and the OSTs:

[root@amds1 ~]# tunefs.lustre /dev/disk/by-label/scratch2-MDT 
checking for existing Lustre data: found CONFIGS/mountdata
Reading CONFIGS/mountdata

   Read previous values:
Target: scratch2-MDT
Index:  0
Lustre FS:  scratch2
Mount type: ldiskfs
Flags:  0x5
  (MDT MGS )
Persistent mount opts: errors=panic,iopen_nopriv,user_xattr,maxdirsize=2000
Parameters: lov.stripecount=4 failover.node=failnode@tcp1 
failover.node=failnode@o2ib1 mdt.group_upcall=/usr/sbin/l_getgroups


   Permanent disk data:
Target: scratch2-MDT
Index:  0
Lustre FS:  scratch2
Mount type: ldiskfs
Flags:  0x5
  (MDT MGS )
Persistent mount opts: errors=panic,iopen_nopriv,user_xattr,maxdirsize=2000
Parameters: lov.stripecount=4 failover.node=failnode@tcp1 
failover.node=failnode@o2ib1 mdt.group_upcall=/usr/sbin/l_getgroups

exiting before disk write.


[root@aoss1 ~]# tunefs.lustre /dev/disk/by-label/scratch2-OST0001
checking for existing Lustre data: found CONFIGS/mountdata
Reading CONFIGS/mountdata

   Read previous values:
Target: scratch2-OST0001
Index:  1
Lustre FS:  scratch2
Mount type: ldiskfs
Flags:  0x2
  (OST )
Persistent mount opts: errors=panic,extents,mballoc
Parameters: mgsnode=mds-server1@tcp1 mgsnode=mds-server1@o2ib1 
mgsnode=mds-server2@tcp1 mgsnode=mds-server2@o2ib1 
failover.node=failnode@tcp1 failover.node=failnode@o2ib1


   Permanent disk data:
Target: scratch2-OST0001
Index:  1
Lustre FS:  scratch2
Mount type: ldiskfs
Flags:  0x2
  (OST )
Persistent mount opts: errors=panic,extents,mballoc
Parameters: mgsnode=mds-server1@tcp1 mgsnode=mds-server1@o2ib1 
mgsnode=mds-server2@tcp1 mgsnode=mds-server2@o2ib1 
failover.node=falnode@tcp1 failover.node=failnode@o2ib1

exiting before disk write.


I am really stuck and could really use some help.

Thanks.

==
 
Joe Mervini
Sandia National Laboratories
Dept 09326
PO Box 5800 MS-0823
Albuquerque NM 87185-0823
 


___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] Need help

2011-07-01 Thread Cliff White
Did you also install the correct e2fsprogs?
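
For example, on the servers you could compare what is installed against the 
Lustre-patched e2fsprogs you expect (exact package names may vary with your 
distribution):

rpm -q e2fsprogs           # should be the Lustre-patched e2fsprogs build
rpm -qa | grep -i lustre   # and the matching lustre/lustre-modules packages
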
cliffw



-- 
cliffw
Support Guy
WhamCloud, Inc.
www.whamcloud.com
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] inode tuning on shared mdt/mgs

2011-07-01 Thread Aaron Everett
Hi list,

I'm trying to increase the number of inodes available on our shared mdt/mgs.
I've tried reformatting using the following:

 mkfs.lustre --fsname fdfs --mdt --mgs --mkfsoptions="-i 2048" --reformat
/dev/sdb

The number of inodes actually decreased when I specified -i 2048 versus
leaving it at the default.

We have a large number of small files, and we're nearing the inode limit
on our MDT/MGS. I'm trying to find a solution before simply expanding the
RAID on the server. Since there is plenty of disk space, changing the bytes
per inode seemed like a simple solution.

From the docs:

Alternately, if you are specifying an absolute number of inodes, use the
-N <number of inodes> option. You should not specify the -i option with an
inode ratio below one inode per 1024 bytes in order to avoid unintentional
mistakes. Instead, use the -N option.

What is the format of the -N flag, and how should I calculate the number to
use? Thanks for your help!
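
My guess, for a hypothetical 1 TB MDT device with one inode per 2048 bytes
(roughly 1,000,000,000,000 / 2048 ≈ 488 million inodes), would be something
like the line below, but I am not sure this is the right way to size it:

mkfs.lustre --fsname fdfs --mdt --mgs --mkfsoptions="-N 488000000" --reformat /dev/sdb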

Aaron
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss