[Lustre-discuss] Can one node mount more than one lustre cluster?

2011-03-15 Thread Brian O'Connor

Hi,

What are the constraints on a client mounting more than one Lustre
file system?

I realise that a Lustre cluster can have more than one file system
configured, but can a client mount different file systems from different
Lustre clusters on the same network? i.e.:

Assume a single IB fabric and two Lustre clusters with separate
MGS/MDS/OSS servers. One Lustre file system is Lincoln and the other is
Washington:

 list_nids
192.168.1.10@o2ib

 mount -t lustre lincoln:/lustre  /lincoln
 mount -t lustre washington:/lustre /washington

Is this doable, or should they be on separate IB fabrics?



-- 
Brian O'Connor
---
SGI Consulting
Email: bri...@sgi.com, Mobile +61 417 746 452
Phone: +61 3 9963 1900, Fax:  +61 3 9963 1902
357 Camberwell Road, Camberwell, Victoria, 3124
AUSTRALIA
http://www.sgi.com/support/services
---



Re: [Lustre-discuss] Can one node mount more than one lustre cluster?

2011-03-15 Thread Shuichi Ihara

Yes, you can mount both filesystems, created with separate MDS/OSS servers, on
a single fabric.
For example, assume two filesystems (both with the filesystem name 'lustre') are
configured as below:
FS1 - MDS/MGS:192.168.1.1 and OSS: 192.168.1.2
FS2 - MDS/MGS:192.168.1.11 and OSS: 192.168.1.12

On the client, you should be able to do:
mount -t lustre 192.168.1.1@o2ib:/lustre /lustre1
mount -t lustre 192.168.1.11@o2ib:/lustre /lustre2

By the way, even if you configure two filesystems on the same MDS/OSS servers
with different filesystem names, you can mount both on the client.
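
A minimal sketch of that second case (the fsnames, NIDs and device paths below
are purely illustrative):

# on the servers: format targets for two filesystems that share one MGS
mkfs.lustre --mdt --fsname=scratch --mgsnode=192.168.1.1@o2ib /dev/sdb
mkfs.lustre --ost --fsname=scratch --mgsnode=192.168.1.1@o2ib /dev/sdc
mkfs.lustre --mdt --fsname=home    --mgsnode=192.168.1.1@o2ib /dev/sdd
mkfs.lustre --ost --fsname=home    --mgsnode=192.168.1.1@o2ib /dev/sde

# on the client: mount both through the same MGS
mount -t lustre 192.168.1.1@o2ib:/scratch /mnt/scratch
mount -t lustre 192.168.1.1@o2ib:/home    /mnt/home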

Thanks
Ihara

On 3/15/11 3:15 PM, Brian O'Connor wrote:

 Hi,

 What are the constraints on a client mounting more than one lustre
 file system?

 I realise that a lustre cluster can have more than one file system
 configured, but can a client mount different file systems from different
 lustre clusters on the same network? i.e.:

 Assume a Single IB fabric and two Lustre clusters with separate
 MGS/MDS/OSS. One lustre is Lincoln and the other is Washington

   list_nids
 192.168.1.10@o2ib

   mount -t lustre lincoln:/lustre  /lincoln
  mount -t lustre washington:/lustre /washington

 Is this doable, or should they be on separate IB fabrics?





Re: [Lustre-discuss] Can one node mount more than one lustre cluster?

2011-03-15 Thread James Robnett

  I will add to the previous "yes you can" comment that you should
strongly consider naming the file systems differently when they are built.

  Otherwise it's difficult to differentiate them on clients in
/proc/fs/lustre/llite/{fsname}-{string}, where fsname is 'lustre' by
default.

  If they were both created with the default fsname then you'll
get two nearly indistinguishable directories.  Been there, done that,
would make a t-shirt but nobody would get it.
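
For example (purely hypothetical: distinct fsnames chosen at format time, noting
that an fsname is limited to 8 characters, and made-up /proc instance suffixes):

mkfs.lustre --mgs --mdt --fsname=lincoln /dev/sdX   # Lincoln MDS/MGS
mkfs.lustre --mgs --mdt --fsname=wash /dev/sdY      # Washington MDS/MGS

client# ls /proc/fs/lustre/llite/
lincoln-ffff81012a2d7c00  wash-ffff810138f61400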

James Robnett
NRAO/NM




Re: [Lustre-discuss] quotacheck fails on filesystem with a permanently inactivated OST

2011-03-15 Thread Johann Lombardi
On Tue, Mar 15, 2011 at 01:26:44PM +0100, Johann Lombardi wrote:
 Arr, the fix works well with sparse OST indexes, but not with deactivated 
 OSTs. I'm sorry about that. I will have this fixed.

FYI, I have filed a bug for this issue:
http://jira.whamcloud.com/browse/LU-129

It should not take long to have a patch ready for testing.

Cheers,
Johann

-- 
Johann Lombardi
Whamcloud, Inc.
www.whamcloud.com


[Lustre-discuss] Problem with lustre 2.0.0.1, ext3/4 and big OSTs (8Tb)

2011-03-15 Thread Joan J. Piles
Hi,

We are trying to set up a Lustre 2.0.0.1 installation (the most recent
version downloadable from the official site). We plan to have some
big OSTs (~12 TB), using Scientific Linux 5.5 (which should be a RHEL
clone for all practical purposes).

However, when we try to format the OSTs, we get the following error:

 [root@oss01 ~]# mkfs.lustre --ost --fsname=extra 
 --mgsnode=172.16.4.4@tcp0 --mkfsoptions '-i 262144 -E 
 stride=32,stripe_width=192 ' /dev/sde

Permanent disk data:
 Target: extra-OST
 Index:  unassigned
 Lustre FS:  extra
 Mount type: ldiskfs
 Flags:  0x72
   (OST needs_index first_time update )
 Persistent mount opts: errors=remount-ro,extents,mballoc
 Parameters: mgsnode=172.16.4.4@tcp

 checking for existing Lustre data: not found
 device size = 11427830MB
 formatting backing filesystem ldiskfs on /dev/sde
 target name  extra-OST
 4k blocks 2925524480
 options   -i 262144 -E stride=32,stripe_width=192  -J size=400 
 -I 256 -q -O dir_index,extents,uninit_bg -F
 mkfs_cmd = mke2fs -j -b 4096 -L extra-OST -i 262144 -E 
 stride=32,stripe_width=192  -J size=400 -I 256 -q -O 
 dir_index,extents,uninit_bg -F /dev/sde 2925524480
 mkfs.lustre: Unable to mount /dev/sde: Invalid argument

 mkfs.lustre FATAL: failed to write local files
 mkfs.lustre: exiting with 22 (Invalid argument)


In the dmesg log, we find the following line:

 LDISKFS-fs does not support filesystems greater than 8TB and can cause 
 data corruption.Use force_over_8tb mount option to override.

After some investigation, we found that it is related to the use of ext3
instead of ext4, even though we should be using ext4, as suggested by the
fact that the filesystems created are actually reported as ext4:

 [root@oss01 ~]# file -s /dev/sde
 /dev/sde: Linux rev 1.0 ext4 filesystem data (extents) (large files)

Further, we made a test with an ext3 filesystem on the same machine, and
the difference shows up:

 [root@oss01 ~]# file -s /dev/sda1
 /dev/sda1: Linux rev 1.0 ext3 filesystem data (large files)

Everything we found on the net about this problem seems to refer to
Lustre 1.8.5. However, we would not expect such a regression in Lustre
2. Is this actually a problem with Lustre 2? Does ext4 have to be enabled
either at compile time or with a parameter somewhere (we found no
documentation about it)?

Regards and thanks,


-- 
--
Joan Josep Piles Contreras -  Analista de sistemas
I3A - Instituto de Investigación en Ingeniería de Aragón
Tel: 976 76 10 00 (ext. 5454)
http://i3a.unizar.es -- jpi...@unizar.es
--



Re: [Lustre-discuss] Problem with lustre 2.0.0.1, ext3/4 and big OSTs (8Tb)

2011-03-15 Thread Kevin Van Maren
Joan J. Piles wrote:
 Hi,

 We are trying to set up a Lustre 2.0.0.1 installation (the most recent
 version downloadable from the official site). We plan to have some
 big OSTs (~12 TB), using Scientific Linux 5.5 (which should be a RHEL
 clone for all practical purposes).

 However, when we try to format the OSTs, we get the following error:

   
 [root@oss01 ~]# mkfs.lustre --ost --fsname=extra 
 --mgsnode=172.16.4.4@tcp0 --mkfsoptions '-i 262144 -E 
 stride=32,stripe_width=192 ' /dev/sde

Permanent disk data:
 Target: extra-OST
 Index:  unassigned
 Lustre FS:  extra
 Mount type: ldiskfs
 Flags:  0x72
   (OST needs_index first_time update )
 Persistent mount opts: errors=remount-ro,extents,mballoc
 Parameters: mgsnode=172.16.4.4@tcp

 checking for existing Lustre data: not found
 device size = 11427830MB
 formatting backing filesystem ldiskfs on /dev/sde
 target name  extra-OST
 4k blocks 2925524480
 options   -i 262144 -E stride=32,stripe_width=192  -J size=400 
 -I 256 -q -O dir_index,extents,uninit_bg -F
 mkfs_cmd = mke2fs -j -b 4096 -L extra-OST -i 262144 -E 
 stride=32,stripe_width=192  -J size=400 -I 256 -q -O 
 dir_index,extents,uninit_bg -F /dev/sde 2925524480
 mkfs.lustre: Unable to mount /dev/sde: Invalid argument

 mkfs.lustre FATAL: failed to write local files
 mkfs.lustre: exiting with 22 (Invalid argument)
 


 In the dmesg log, we find the following line:

   
 LDISKFS-fs does not support filesystems greater than 8TB and can cause 
 data corruption.Use force_over_8tb mount option to override.
 

 After some investigation, we found that it is related to the use of ext3
 instead of ext4,

Correct.

 even though we should be using ext4, as suggested by the fact
 that the filesystems created are actually reported as ext4:

   
 [root@oss01 ~]# file -s /dev/sde
 /dev/sde: Linux rev 1.0 ext4 filesystem data (extents) (large files)
 

No, these are ldiskfs filesystems.  ext3+ldiskfs looks a bit like ext4
(ext4 is largely based on the enhancements done for Lustre's ldiskfs),
but it is not the same as ext4+ldiskfs.  In particular, the filesystem
size is limited to 8TB rather than 16TB.

 Further, we made a test with an ext3 filesystem on the same machine, and
 the difference shows up:

   
 [root@oss01 ~]# file -s /dev/sda1
 /dev/sda1: Linux rev 1.0 ext3 filesystem data (large files)
 

 Everything we found on the net about this problem seems to refer to
 Lustre 1.8.5. However, we would not expect such a regression in Lustre
 2. Is this actually a problem with Lustre 2? Does ext4 have to be enabled
 either at compile time or with a parameter somewhere (we found no
 documentation about it)?
   

Lustre 2.0 did not enable ext4 by default, due to known issues.  You can
rebuild the Lustre server with --enable-ext4 on the configure line to
enable it.  But if you are going to use 12TB LUNs, you should either
stick with v1.8.5 (stable) or pull a newer version from git (experimental).
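
A rough sketch of such a rebuild (the source and kernel paths are illustrative
and will differ on your system):

cd /usr/src/lustre-2.0.0.1
./configure --enable-ext4 --with-linux=/usr/src/kernels/2.6.18-194.17.1.el5-x86_64
make rpms
# then reinstall the server RPMs and reformat the OSTs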

Kevin



Re: [Lustre-discuss] need help

2011-03-15 Thread Kevin Van Maren
Ashok nulguda wrote:
 Dear All,

 How do we forcefully shut down the Lustre services on the clients, OSTs and
 MDS server while I/O is in progress?

For the servers, you can just umount them.  There will not be any file 
system corruption, but files will not have the latest data -- the cache 
on the clients will not be written to disk (unless recovery happens -- 
restart the servers without having rebooted the clients).  In an 
emergency, this is normally all you have time to do before shutting down 
the system.

To unmount the clients, not only must there be no I/O in progress, you also
need to first kill every process that has an open file on Lustre.  lsof can
be useful here if you don't want to do a full shutdown, but in many
environments killing non-system processes is enough.

Normally you'd want to shut down all the clients first, and then the servers.
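
A hedged sketch of that order (the mount points /mnt/lustre, /mnt/ost0 and
/mnt/mdt below are only examples):

# on each client: find processes with open files on Lustre, stop them, unmount
lsof /mnt/lustre
fuser -km /mnt/lustre      # or forcibly kill anything still using the mount
umount /mnt/lustre

# then on each OSS, and finally on the MDS/MGS
umount /mnt/ost0
umount /mnt/mdt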

Kevin



Re: [Lustre-discuss] quotacheck fails on filesystem with a permanently inactivated OST

2011-03-15 Thread Samuel Aparicio
cheers.



On Mar 15, 2011, at 6:31 AM, Johann Lombardi wrote:

 On Tue, Mar 15, 2011 at 01:26:44PM +0100, Johann Lombardi wrote:
 Arr, the fix works well with sparse OST indexes, but not with deactivated 
 OSTs. I'm sorry about that. I will have this fixed.
 
 FYI, I have filed a bug for this issue:
 http://jira.whamcloud.com/browse/LU-129
 
 It should not take long to have a patch ready for testing.
 
 Cheers,
 Johann
 
 -- 
 Johann Lombardi
 Whamcloud, Inc.
 www.whamcloud.com




[Lustre-discuss] quotas question

2011-03-15 Thread Tien Nguyen
I would like to ask a question about quotas:

  [root@gw13 ~]$ lfs quota -u todokoro /share1
  Disk quotas for user todokoro (uid 14268):
      Filesystem  kbytes  quota  limit  grace  files  quota  limit  grace
      /share1          0      0      1      -     1*      0      1      -
  
  As you can see, in the files section there is a '1*'. What does this
  number (1 in this case) mean? Is it a file count, an inode count, or some
  other meaningful number?

Thanks.



Re: [Lustre-discuss] quotas question

2011-03-15 Thread Johann Lombardi
Hi Tien,

On Tue, Mar 15, 2011 at 10:56:19AM -0700, Tien Nguyen wrote:
   Filesystem  kbytes  quota  limit  grace  files  quota  limit  grace
   /share1          0      0      1      -     1*      0      1      -
   
   As you can see, in the files section there is a '1*'. What does this
   number (1 in this case) mean? Is it a file count, an inode count, or some
   other meaningful number?

That's the number of inodes owned by the user.
This user has an inode hard limit set to 1 and has thus already reached the 
quota limit.
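
For instance, to raise that limit (the values below are only illustrative; run
as root on a client):

lfs setquota -u todokoro -i 0 -I 10000 /share1   # inode soft limit 0 (none), hard limit 10000
lfs quota -u todokoro /share1                    # verify the new limits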

Cheers,
Johann


Re: [Lustre-discuss] Lustre and system cpu usage

2011-03-15 Thread Andreas Dilger
On 2011-01-15, at 4:18 AM, Claudio Baeza Retamal wrote:
 On 14-03-2011 22:05, Andreas Dilger wrote:
 On 2011-01-14, at 3:57 PM, Claudio Baeza Retamal wrote:
 Last month I configured Lustre 1.8.5 over InfiniBand. Before that I was
 using Gluster 3.1.2; performance was OK but reliability was poor: when 40
 or more applications requested to open a file at the same time, the Gluster
 servers randomly bounced the active client connections. Lustre does not have
 this problem, but I can see other issues. For example, namd shows around 30%
 system CPU, and the HPL benchmark shows between 70%-80% system CPU, which is
 far too high; with Gluster the system CPU never exceeded 5%. I think this is
 explained by Gluster using FUSE and running in user space, but I am not sure.
 I have some doubts:
 I have some doubt:
 
 Why does Lustre use IPoIB? Before, with Gluster, I did not use IPoIB. I
 think the IPoIB module gives poor performance on InfiniBand and disturbs
 the native InfiniBand module.
 
 If you are using IPoIB for data then your LNET is configured incorrectly.  
 IPoIB is only needed for IB hostname resolution, and all LNET traffic can 
 use native IB with very low CPU overhead.  Your /etc/modprobe.conf and mount 
 lines should be using {addr}@o2ib0 instead of {addr} or {addr}@tcp0.
 
 For the first two weeks I was using options lnet networks=o2ib(ib0); now I
 am using options lnet networks=o2ib(ib0),tcp0(eth0) because I have one node
 without an HCA card. In both cases the system CPU usage is the same; the
 compute node without InfiniBand is only used to run MATLAB.

 In the HPL benchmark case, my question is: why is the system CPU usage so
 high? Is it possible that Lustre disturbs the mlx4 InfiniBand driver and
 causes problems with MPI? The HPL benchmark mainly does I/O to transport
 data over MPI. With GlusterFS the system CPU was around 5%; instead, since
 Lustre was configured the system CPU is 70%-80%, and we use o2ib(ib0) for
 LNET in modprobe.conf.

Have you tried disabling the Lustre kernel debug logs (lctl set_param debug=0) 
and/or disabling the network data checksums (lctl set_param osc.*.checksums=0)?

Note that there is also CPU overhead in the kernel from copying data from 
userspace to the kernel that is unavoidable for any filesystem, unless O_DIRECT 
is used (which causes synchronous IO and has IO alignment restrictions).
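
For reference, the checks and the two settings above look like this on a client
(sketch only):

lctl get_param debug osc.*.checksums    # inspect the current values
lctl set_param debug=0                  # disable kernel debug logging
lctl set_param osc.*.checksums=0        # disable client-side data checksums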

 I have tried several options, following instructions from Mellanox: on the
 compute nodes I disabled irqbalance and ran the smp_affinity script, but the
 system CPU is still very high.
 Are there any tools to study Lustre performance?

 Is it possible to configure Lustre to transport metadata over Ethernet and
 data over InfiniBand?
 Yes, this should be possible, but putting the metadata on IB is much lower 
 latency and higher performance so you should really try to use IB for both.
 
 For namd and the HPL benchmark, is it normal for the system CPU to be so high?
 
 My configuration is the following:
 
 - Qlogic 12800-180 switch, 7 leaf (24 ports per  leaf) and 2 spines (All
 ports have QDR, 40 Gbps)
 - 66 HCA mellanox connectX, two ports, QDR 40 Gbps (compute nodes)
 - 1 metadata server, 96 GB RAM DDR3 optimized for performance, two Xeon
 5570, SAS 15K RPM  hard disk in Raid 1, HCA mellanox connectX with two ports
 - 4 OSSes, each with 1 OST of 2 TB in RAID 5 (8 TB in total). All of the
 OSSes have a Mellanox ConnectX with two ports
 If you have IB on the MDS then you should definitely use {addr}@o2ib0 for 
 both OSS and MDS nodes.  That will give you much better metadata performance.
 
 Cheers, Andreas
 --
 Andreas Dilger
 Principal Engineer
 Whamcloud, Inc.
 
 
 
 
 
 
 regards
 
 claudio
 
 


Cheers, Andreas
--
Andreas Dilger 
Principal Engineer
Whamcloud, Inc.





Re: [Lustre-discuss] Lustre and system cpu usage

2011-03-15 Thread Wojciech Turek
Hi Claudio,

When you say that during Linpack you see high system CPU usage, do you mean
the CPU usage on the clients or on the servers?
Can you run, for example, the top command and see which processes take most of
the CPU time?
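
For example (standard tools, nothing Lustre-specific; mpstat comes from the
sysstat package):

top                                        # Shift+P sorts by CPU; watch the %sy field
mpstat -P ALL 1                            # per-CPU user vs. system time
ps -eo pid,comm,pcpu --sort=-pcpu | head   # biggest CPU consumers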

Cheers

Wojciech

On 15 January 2011 11:18, Claudio Baeza Retamal clau...@dim.uchile.cl wrote:

 Hi,


  On 14-03-2011 22:05, Andreas Dilger wrote:
  On 2011-01-14, at 3:57 PM, Claudio Baeza Retamal wrote:
  last month, I have configured lustre 1.8.5 over infiniband, before, I
  was using  Gluster 3.1.2, performance was ok but reliability was wrong,
  when 40 or more applications requested at the same time  for open a
  file, gluster servers  bounced randomly the active connections from
  clients. Lustre has not this problem, but I can see others issues, for
  example, namd appears with system cpu  around of 30%,  hpl benchmark
  appears between  70%-80% of system cpu, is too much high, with
  gluster, the system cpu was never exceeded 5%. I think, this is
  explained due gluster uses fuse and run in user space, but I am do not
  sure.
  If Gluster is using FUSE, then all of the CPU usage would appear in
 user and not system.  That doesn't mean that the CPU usage is gone, just
 accounted in a different place.
 
 
  I have some doubt:
 
  ¿why Lustre uses ipoib? Before, with gluster  I do not use ipoib, I am
  thinking  that ipoib module produces bad performance in infiniband and
  disturbs the infiniband native module.
  If you are using IPoIB for data then your LNET is configured incorrectly.
  IPoIB is only needed for IB hostname resolution, and all LNET traffic can
 use native IB with very low CPU overhead.  Your /etc/modprobe.conf and mount
 lines should be using {addr}@o2ib0 instead of {addr} or {addr}@tcp0.
 

 For first two weeks, I was using options lnet networks=o2ib(ib0),
 now, I am using options lnet networks=o2ib(ib0),tcp0(eth0) because I
 have a node without HCA card, in both case, the system cpu usage is the
 same, the compute node without infiniband is used to run matlab only.

 In the hpl benchmark case, my doubt is, why has a high system cpu
 usage?   Is posible that LustreFS disturbs  mlx4 infiniband driver and
 causes problems with  MPI?  hpl benchmark mainly does I/O for transport
 data over MPI, with glusterFS system cpu was around 5%, instead, since
 Lustre  was configured system cpu is 70%-80% and we use  o2ib(ib0) for
 LNET in modprobe.conf .
 I have tried several options, following instruction from mellanox, in
 compute nodes I disable irqbalance and run smp_affinity script, but
 system cpu still so higher.
 Are there any tools to study lustre performance?

  It is posible to configure lustre to  transport metada over ethernet and
  data over infiniband?
  Yes, this should be possible, but putting the metadata on IB is much
 lower latency and higher performance so you should really try to use IB for
 both.
 
  For namd and hpl benchmark, is  it normal to have system cpu to be so
 high?
 
  My configuration is the following:
 
  - Qlogic 12800-180 switch, 7 leaf (24 ports per  leaf) and 2 spines (All
  ports have QDR, 40 Gbps)
  - 66 HCA mellanox connectX, two ports, QDR 40 Gbps (compute nodes)
  - 1 metadata server, 96 GB RAM DDR3 optimized for performance, two Xeon
  5570, SAS 15K RPM  hard disk in Raid 1, HCA mellanox connectX with two
 ports
  - 4 OSS with 1 OST of 2 TB in RAID 5 each one (8 TB in total). The all
  OSS have a Mellanox ConnectX with two ports
  If you have IB on the MDS then you should definitely use {addr}@o2ib0 for
  both OSS and MDS nodes.  That will give you much better metadata
  performance.
 
  Cheers, Andreas
  --
  Andreas Dilger
  Principal Engineer
  Whamcloud, Inc.
 
 
 
 
 

 regards

 claudio




Re: [Lustre-discuss] Can one node mount more than one lustre cluster?

2011-03-15 Thread Malcolm Cowe
On 15/03/2011 17:15, Brian O'Connor wrote:
 Hi,

 What are the constraints on a client mounting more than one lustre
 file system?

 I realise that a lustre cluster can have more than one file system
 configured, but can a client mount different file systems from different
 lustre clusters on the same network? i.e.:

 Assume a Single IB fabric and two Lustre clusters with separate
 MGS/MDS/OSS. One lustre is Lincoln and the other is Washington

   list_nids
 192.168.1.10@o2ib

   mount -t lustre lincoln:/lustre  /lincoln
  mount -t lustre washington:/lustre /washington

 Is this doable, or should they be on separate IB fabrics?


From my understanding, there should only be one MGS for the entire
environment (although with as many MDTs and OSTs as are required). This is
because clients will only communicate with exactly one MGS (the MGS of the
last file system mounted) and will only receive updates from the MGS with
which they are registered. So, in the above example, if there is a change
to the lincoln file system (e.g. a failover event or a configuration
change), clients will not receive notification.

There's not an issue with having multiple MGSs on a site, only with
mounting Lustre file systems from multiple domains on the same client,
IIRC.

Malcolm.



Re: [Lustre-discuss] Can one node mount more than one lustre cluster?

2011-03-15 Thread Andreas Dilger
On 2011-03-15, at 4:19 PM, Malcolm Cowe wrote:
 On 15/03/2011 17:15, Brian O'Connor wrote:
 What are the constraints on a client mounting more than one lustre
 file system?
 
 I realise that a lustre cluster can have more than one file system
 configured, but can a client mount different file systems from different
 lustre clusters on the same network?;ie.
 
 Assume a Single IB fabric and two Lustre clusters with separate
 MGS/MDS/OSS. One lustre is Lincoln and the other is Washington
 
 list_nids
 192.168.1.10@o2ib
 
 mount -t lustre lincoln:/lustre  /lincoln
 mount -t lustre washington:/lustre /washinton
 
 Is this doable or should they be on separate IB fabrics
 
 From my understanding, there should only be one MGS for the entire 
 environment (although as many MDTs and OSTs as are required). This is 
 because clients will only communicate with exactly one MGS (and will 
 communicate with the MGS of the last FS mounted) and will only receive 
 updates from the MGS with which it is registered. So, in the above 
 example if there is a change to the lincoln file system (e.g. a 
 failover event, some configuration changes), clients will not receive 
 notification.
 
 There's not an issue with having multiple MGS's on a site, only with 
 mounting lustre file systems from multiple domains on the same client, 
 IIRC.

That discussion was had on the list a few months ago, and the correct answer is
that it should just work.  The "only use the last mounted MGS" problem was fixed
at some point, though I don't have the exact version handy.


Cheers, Andreas
--
Andreas Dilger 
Principal Engineer
Whamcloud, Inc.





Re: [Lustre-discuss] Can one node mount more than one lustre cluster?

2011-03-15 Thread Michael Shuey
FYI, I'm using Lustre 1.8.5 to mount two filesystems from separate
domains (one in Lafayette, IN, and one in Bloomington, IN, run by two
different institutions) on 900+ nodes, and things just work.

--
Mike Shuey



On Tue, Mar 15, 2011 at 6:24 PM, Andreas Dilger adil...@whamcloud.com wrote:
 On 2011-03-15, at 4:19 PM, Malcolm Cowe wrote:
 On 15/03/2011 17:15, Brian O'Connor wrote:
 What are the constraints on a client mounting more than one lustre
 file system?

 I realise that a lustre cluster can have more than one file system
 configured, but can a client mount different file systems from different
 lustre clusters on the same network?;ie.

 Assume a Single IB fabric and two Lustre clusters with separate
 MGS/MDS/OSS. One lustre is Lincoln and the other is Washington

 list_nids
 192.168.1.10@o2ib

 mount -t lustre lincoln:/lustre  /lincoln
 mount -t lustre washington:/lustre /washinton

 Is this doable or should they be on separate IB fabrics

 From my understanding, there should only be one MGS for the entire
 environment (although as many MDTs and OSTs as are required). This is
 because clients will only communicate with exactly one MGS (and will
 communicate with the MGS of the last FS mounted) and will only receive
 updates from the MGS with which it is registered. So, in the above
 example if there is a change to the lincoln file system (e.g. a
 failover event, some configuration changes), clients will not receive
 notification.

 There's not an issue with having multiple MGS's on a site, only with
 mounting lustre file systems from multiple domains on the same client,
 IIRC.

 That discussion was had on the list a few months ago, and the correct answer 
 is that it should just work.  The only use last mounted MGS problem was 
 fixed at some point, though I don't have the exact version handy.


 Cheers, Andreas
 --
 Andreas Dilger
 Principal Engineer
 Whamcloud, Inc.





Re: [Lustre-discuss] Can one node mount more than one lustre cluster?

2011-03-15 Thread Ashley Pittman

We have a few customers I can think of who do this: they have numerous Lustre
filesystems, each with its own MGS, and clients which mount more than one of
them.

Ashley.
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss