On Oct 11, 2007  15:07 -0500, David Fagan wrote:
> I just don't see in the documentation how to build one large file  
> system across multiple systems I need PB filesystems.

I would suggest starting with the Lustre User Documentation...

> I have two system to play with each with about 36TB on them and I  
> want one large filesystem.
> I have /dev/sdb on each for msg

You only need a single MDT device, and the MGS usually shares that device.

> and 4 8TB luns on each for a file system /dev/sdc, sdd, sde, sdf

That is fine.

> Can do the same thing on both system but how do they know to put  
> there together into one big 72TB vs each having a 36TB filesystem.    
> Will this just work because of the mgsnode if they are pointing to  
> the same node?

As written, you are creating two completely separate filesystems: each
node formats its own combined MGS/MDT.  Create the MGS/MDT on only one
node, and point all of the OSTs (on both nodes) at that node with
--mgsnode so they join the same filesystem.
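Concretely, something like the following (a sketch only; the hostname
"mds1" and the "tcp0" network are placeholders for your actual NIDs):

# On ONE node only: combined MGS + MDT
mkfs.lustre --fsname datafs --mdt --mgs --reformat /dev/sdb
mount -t lustre /dev/sdb /lustre/mdt/datafs

# On BOTH nodes, for each OST device: name the same MGS via --mgsnode
mkfs.lustre --fsname datafs --ost --mgsnode=mds1@tcp0 --reformat /dev/sdc
mount -t lustre /dev/sdc /lustre/ost/datafs-ost0

With all eight OSTs formatted this way, they register with the one MGS
and you get a single 64TB filesystem instead of two 32TB ones.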

> Start this on both nodes and go home for the day?
> 
> mkfs.lustre --fsname datafs --mdt --mgs --reformat /dev/sdb
> mkdir -p /lustre/mdt/data1
> mount -t lustre /dev/sdb /lustre/mdt/data1
> 
> mkfs.lustre --fsname datafs --ost --reformat --[EMAIL PROTECTED] /dev/sdc
> mkfs.lustre --fsname datafs --ost --reformat --[EMAIL PROTECTED] /dev/sdd
> mkfs.lustre --fsname datafs --ost --reformat --[EMAIL PROTECTED] /dev/sde
> mkfs.lustre --fsname datafs --ost --reformat --[EMAIL PROTECTED] /dev/sdf
> mkdir -p /lustre/ost/data1/0
> mkdir -p /lustre/ost/data1/1
> mkdir -p /lustre/ost/data1/2
> mkdir -p /lustre/ost/data1/3
> mount -t lustre /dev/sdc /lustre/ost/data1/0
> mount -t lustre /dev/sdd /lustre/ost/data1/1
> mount -t lustre /dev/sde /lustre/ost/data1/2
> mount -t lustre /dev/sdf /lustre/ost/data1/3

I would personally give these better names than "data1/{0,1,2,3}", but
that is up to you.

> mkdir -p /lustre/data1
> mount -t lustre [EMAIL PROTECTED]:/datafs /lustre/data1

This is the right client mount for the above config.
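Once the client is mounted, you can confirm that all of the OSTs from
both servers actually joined the one filesystem:

# From any client: lists the MDT and every OST with capacities;
# all eight OSTs should appear under the single "datafs" filesystem
lfs df -h /lustre/data1

If you only see four OSTs, the OSTs on the other server were formatted
against a different MGS.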

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.

_______________________________________________
Lustre-discuss mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
