Amit, Are you by any chance specifying a unique value for --fsname for each of the MDT/OSTs? If so, keep in mind that fsname needs to be the same when you format all the MDT/OST devices and that the fsname doesn't have anything to do with the mount point for the MDT/OSTs.
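The point above can be sketched as follows. This is a hedged sketch, not a verified recipe: the device paths and MGS NID are copied from the commands quoted later in this thread, and the `awk` one-liner is just an illustrative way to list which filesystem names the MGS has registered (the `params` entry in the live output is the MGS parameters log, not a filesystem).

```shell
# Sketch: every target of one filesystem is formatted with the SAME --fsname,
# and the client mounts by that fsname, not by the OST's local mount point.
# (Formatting commands shown as comments for illustration only -- they are
#  destructive and require Lustre-formatted block devices.)
#
#   mkfs.lustre --fsname=temp --mgs --mdt --index=0 /dev/mapper/mpathcp1
#   mkfs.lustre --fsname=temp --ost --index=0 --mgsnode=10.177.33.10@tcp /dev/zvol/LSD1Pool/LusterVola0
#   mount -t lustre 10.177.33.10@tcp:/temp /mnt/lustre     # on the client
#
# Quick check of which fsnames the MGS knows about, using the
# /proc/fs/lustre/mgs/MGS/live/* output from this thread as inline sample data
# ("params" is the MGS parameters log, not a filesystem, so it is filtered out):
live_sample='fsname: client1
fsname: params
fsname: temp'
echo "$live_sample" | awk '/^fsname:/ && $2 != "params" {print $2}'
```

On the MGS in this thread the check would print both `client1` and `temp`, which appears to be exactly the mismatch being discussed: the MDT was formatted with `--fsname=temp` while the OST used `--fsname=client1`, so the `client1` filesystem the client asks for has no MDT behind it.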
--Rick

On Jan 8, 2015, at 11:52 AM, "Gupta, Amit" <amit.gu...@optum.com> wrote:

> Joe, I agree, and I have it configured the same way as you described. Both
> the MGS and MDS are on the same node. Adding the output.
>
> What am I missing here? Thanks.
>
> Here's what I had used on the MDS/MGS node to configure the Lustre fs:
>
> mkfs.lustre --fsname=temp --mgs --mdt --index=0 /dev/mapper/mpathcp1
>
> df -k output on MGS/MDS:
>
> /dev/mapper/mpathcp1 201286044 4273344 183232496 3% /mdt
>
> [root@mgs/mds ~]# cat /proc/fs/lustre/mgs/MGS/live/*
> fsname: client1
> flags: 0x20 gen: 5
> client1-OST0000
>
> Secure RPC Config Rules:
>
> imperative_recovery_state:
> state: full
> nonir_clients: 0
> nidtbl_version: 4
> notify_duration_total: 0.000000
> notify_duation_max: 0.000000
> notify_count: 2
> fsname: params
> flags: 0x21 gen: 1
>
> Secure RPC Config Rules:
>
> imperative_recovery_state:
> state: full
> nonir_clients: 0
> nidtbl_version: 2
> notify_duration_total: 0.000000
> notify_duation_max: 0.000000
> notify_count: 0
> fsname: temp
> flags: 0x20 gen: 7
> temp-MDT0000
>
> Secure RPC Config Rules:
>
> imperative_recovery_state:
> state: full
> nonir_clients: 0
> nidtbl_version: 4
> notify_duration_total: 0.000000
> notify_duation_max: 0.000000
> notify_count: 1
>
> [root@mgs/mds] debugfs -c -R 'ls -l CONFIGS' /dev/mapper/mpathcp1
> debugfs 1.42.12.wc1 (15-Sep-2014)
> /dev/mapper/mpathcp1: catastrophic mode - not reading inode or group bitmaps
> 32769 40755 (2) 0 0 4096 6-Jan-2015 13:09 .
> 2 40755 (2) 0 0 4096 6-Jan-2015 13:10 ..
> 12 100644 (1) 0 0 12288 6-Jan-2015 13:10 mountdata
> 81 100644 (1) 0 0 0 6-Jan-2015 13:10 params-client
> 82 100644 (1) 0 0 0 6-Jan-2015 13:10 params
> 83 100644 (1) 0 0 11200 6-Jan-2015 13:10 temp-client
> 84 100644 (1) 0 0 9792 6-Jan-2015 13:10 temp-MDT0000
> 104 100644 (1) 0 0 10600 6-Jan-2015 13:12 client2-client
> 105 100644 (1) 0 0 8880 6-Jan-2015 13:12 client2-OST0000
> 107 100644 (1) 0 0 10600 6-Jan-2015 23:38 client1-client
> 108 100644 (1) 0 0 8880 6-Jan-2015 23:38 client1-OST0000
>
> From: Mervini, Joseph A [mailto:jame...@sandia.gov]
> Sent: Thursday, January 08, 2015 10:43 AM
> To: Gupta, Amit
> Cc: 한종우 (Jongwoo Han); lustre discuss
> Subject: Re: [EXTERNAL] Re: [Lustre-discuss] Newbie - Unable to mount the OST on the Client
>
> In Lustre you MUST have an MGS _AND_ an MDS node with an associated MDT (file
> system) running on the MDS. Typically the MDS and MGS are configured on the
> same node.
>
> If you don't have this, Lustre won't work.
> ====
>
> Joe Mervini
> Sandia National Laboratories
> High Performance Computing
> 505.844.6770
> jame...@sandia.gov
>
> On Jan 8, 2015, at 7:23 AM, Gupta, Amit <amit.gu...@optum.com> wrote:
>
> Jongwoo,
>
> This is the mount command that is being used on the client machine:
>
> [root@Client ~]# mount -t lustre 10.177.33.10@tcp:/client1 /mnt/lustre
>
> Here 10.177.33.10 is the MGS server and "client1" is the filesystem name for
> the OST.
>
> On the OSS, this filesystem is mounted as:
>
> /dev/zd912 1069706420 483556 1013552180 1% /client1
>
> The commands used on the OST to format and mount "client1" were:
>
> mkfs.lustre --ost --fsname=client1 --reformat --index=0 --mgsnode=10.177.33.10@tcp0 /dev/zvol/LSD1Pool/LusterVola0
> mount -t lustre /dev/zvol/LSD1Pool/LusterVola0 /client1
>
> From: 한종우 (Jongwoo Han) [mailto:jw....@apexcns.com]
> Sent: Wednesday, January 07, 2015 9:20 PM
> To: Gupta, Amit
> Cc: lustre-discuss@lists.lustre.org
> Subject: Re: [Lustre-discuss] Newbie - Unable to mount the OST on the Client
>
> Hi Gupta,
>
> What I see is:
>
> • there is no MDS and/or filesystem name
> • the mount command did not specify a filesystem name; otherwise the
> filesystem name is "client1"
>
> Try preparing an MDT volume, format it with a filesystem name (on the MGS is
> okay), mount the MDT, and retry the Lustre mount with the following command:
>
> [root@Client ~]# mount -t lustre 10.177.33.10@tcp:/<FILESYSTEMNAME> /mnt/lustre
>
> Regards,
>
> Jongwoo Han
>
> 2015-01-07 22:25 GMT+09:00 Gupta, Amit <amit.gu...@optum.com>:
>
> Hi All,
>
> I need some guidance on getting past this error that I get mounting the OST on
> the client. This is the first time I am trying to mount it, so I am not sure
> what I could be missing. I will be glad to provide any other logs that would
> be helpful to determine the root cause of this issue.
>
> Thanks
>
> There are 3 servers in the configuration running Lustre
> kernel-2.6.32-431.20.3.el6_lustre.x86_64:
>
> 10.177.33.10 is the MGS/MGT server
> 10.177.33.9 is the Lustre client
> 10.177.33.22 is the OSS
>
> [root@Client ~]# mount -t lustre 10.177.33.10@tcp:/client1 /mnt/lustre
> mount.lustre: mount 10.177.33.10@tcp:/client1 at /mnt/lustre failed: Invalid argument
> This may have multiple causes.
> Is 'client1' the correct filesystem name?
> Are the mount options correct?
> Check the syslog for more info
>
> The OST is mounted on the OSS as client1.
>
> Error from logs:
>
> Jan 7 13:10:46 client kernel: LustreError: 156-2: The client profile
> 'client1-client' could not be read from the MGS. Does that filesystem exist?
> Jan 7 13:10:46 client kernel: LustreError:
> 11278:0:(lov_obd.c:951:lov_cleanup()) client1-clilov-ffff8820337cec00: lov
> tgt 0 not cleaned! deathrow=0, lovrc=1
> Jan 7 13:10:46 client kernel: Lustre: Unmounted client1-client
> Jan 7 13:10:46 client kernel: LustreError:
> 13167:0:(obd_mount.c:1342:lustre_fill_super()) Unable to mount (-22)
>
> ================================================
> Pinging MGS and Client from OSS
> ================================================
>
> [root@OSS ]# lctl ping 10.177.33.10
> 12345-0@lo
> 12345-10.177.33.10@tcp
> [root@OSS]# lctl ping 10.177.33.9
> 12345-0@lo
> 12345-10.177.33.9@tcp
>
> ================================================
> OST mounted as client1 on OSS
> ================================================
>
> /dev/zd912 1069706420 483556 1013552180 1% /client1
>
> [root@OSS]# tunefs.lustre /dev/zd912
> checking for existing Lustre data: found
> Reading CONFIGS/mountdata
>
> Read previous values:
> Target: client1-OST0000
> Index: 0
> Lustre FS: client1
> Mount type: ldiskfs
> Flags: 0x2
> (OST )
> Persistent mount opts: errors=remount-ro
> Parameters: mgsnode=10.177.33.10@tcp
>
> Permanent disk data:
> Target: client1-OST0000
> Index: 0
> Lustre FS: client1
> Mount type: ldiskfs
> Flags: 0x2
> (OST )
> Persistent mount opts: errors=remount-ro
> Parameters: mgsnode=10.177.33.10@tcp
>
> lctl > modules
> add-symbol-file lustre/osp/osp.o 0xffffffffa10f5000
> add-symbol-file lustre/mdd/mdd.o 0xffffffffa0b12000
> add-symbol-file lustre/lod/lod.o 0xffffffffa109f000
> add-symbol-file lustre/mdt/mdt.o 0xffffffffa0fe7000
> add-symbol-file lustre/mgs/mgs.o 0xffffffffa08c9000
> add-symbol-file lustre/lfsck/lfsck.o 0xffffffffa0f17000
> add-symbol-file lustre/ost/ost.o 0xffffffffa018c000
> add-symbol-file lustre/mgc/mgc.o 0xffffffffa0ef9000
> add-symbol-file lustre/quota/lquota.o 0xffffffffa0dfc000
> add-symbol-file lustre/llite/lustre.o 0xffffffffa0ce2000
> add-symbol-file lustre/lov/lov.o 0xffffffffa0c66000
> add-symbol-file lustre/mdc/mdc.o 0xffffffffa0c25000
> add-symbol-file lustre/fid/fid.o 0xffffffffa0c07000
> add-symbol-file lustre/lmv/lmv.o 0xffffffffa0bbc000
> add-symbol-file lustre/fld/fld.o 0xffffffffa0b9d000
> add-symbol-file lnet/klnds/socklnd/ksocklnd.o 0xffffffffa0b64000
> add-symbol-file lustre/ptlrpc/ptlrpc.o 0xffffffffa0923000
> add-symbol-file lustre/obdclass/obdclass.o 0xffffffffa06e4000
> add-symbol-file lnet/lnet/lnet.o 0xffffffffa067b000
> add-symbol-file libcfs/libcfs/libcfs.o 0xffffffffa05e1000
> add-symbol-file ldiskfs/ldiskfs.o 0xffffffffa055d000
> lctl > lustre_build_version
> Lustre version: 2.6.0-RC2--PRISTINE-2.6.32-431.20.3.el6_lustre.x86_64
> lctl version: 2.6.0-RC2--PRISTINE-2.6.32-431.20.3.el6_lustre.x86_64
>
> ================================================
> On MGS Server
> ================================================
>
> [root@MGS ~]# lctl ping 10.177.33.9@tcp
> 12345-0@lo
> 12345-10.177.33.9@tcp
> [root@MGS ~]# lctl ping 10.177.33.22@tcp
> 12345-0@lo
> 12345-10.177.33.22@tcp
> [root@MGS ~]# cat /proc/fs/lustre/mgs/MGS/live/*
> fsname: client1
> flags: 0x20 gen: 5
> client1-OST0000
>
> Secure RPC Config Rules:
>
> imperative_recovery_state:
> state: full
> nonir_clients: 0
> nidtbl_version: 3
> notify_duration_total: 0.000000
> notify_duation_max: 0.000000
> notify_count: 1
> fsname: params
> flags: 0x21 gen: 1
>
> Secure RPC Config Rules:
>
> imperative_recovery_state:
> state: full
> nonir_clients: 0
> nidtbl_version: 2
> notify_duration_total: 0.000000
> notify_duation_max: 0.000000
> notify_count: 0
> fsname: temp
> flags: 0x20 gen: 7
> temp-MDT0000
>
> Secure RPC Config Rules:
>
> imperative_recovery_state:
> state: full
> nonir_clients: 0
> nidtbl_version: 4
> notify_duration_total: 0.000000
> notify_duation_max: 0.000000
> notify_count: 1
>
> lctl > lustre_build_version
> Lustre version: 2.6.0-RC2--PRISTINE-2.6.32-431.20.3.el6_lustre.x86_64
> lctl version: 2.6.0-RC2--PRISTINE-2.6.32-431.20.3.el6_lustre.x86_64
> lctl > modules
> add-symbol-file lustre/osp/osp.o 0xffffffffa10bc000
> add-symbol-file lustre/mdd/mdd.o 0xffffffffa1057000
> add-symbol-file lustre/lod/lod.o 0xffffffffa0fed000
> add-symbol-file lustre/mdt/mdt.o 0xffffffffa0f0d000
> add-symbol-file lustre/lfsck/lfsck.o 0xffffffffa0e97000
> add-symbol-file lustre/mgs/mgs.o 0xffffffffa0e3c000
> add-symbol-file lustre/mgc/mgc.o 0xffffffffa0e10000
> add-symbol-file lustre/quota/lquota.o 0xffffffffa0d13000
> add-symbol-file ldiskfs/ldiskfs.o 0xffffffffa0c8f000
> add-symbol-file lustre/llite/lustre.o 0xffffffffa0b75000
> add-symbol-file lustre/lov/lov.o 0xffffffffa0af9000
> add-symbol-file lustre/mdc/mdc.o 0xffffffffa0ab8000
> add-symbol-file lustre/fid/fid.o 0xffffffffa0a9a000
> add-symbol-file lustre/lmv/lmv.o 0xffffffffa0a4f000
> add-symbol-file lustre/fld/fld.o 0xffffffffa0a38000
> add-symbol-file lnet/klnds/socklnd/ksocklnd.o 0xffffffffa09ff000
> add-symbol-file lustre/ptlrpc/ptlrpc.o 0xffffffffa07be000
> add-symbol-file lustre/obdclass/obdclass.o 0xffffffffa057f000
> add-symbol-file lnet/lnet/lnet.o 0xffffffffa0516000
> add-symbol-file libcfs/libcfs/libcfs.o 0xffffffffa047c000
>
> ================================================
> Trying to Mount OST on the Client
> ================================================
>
> [root@Client ~]# mount -t lustre 10.177.33.10@tcp:/client1 /mnt/lustre
> mount.lustre: mount 10.177.33.10@tcp:/client1 at /mnt/lustre failed: Invalid argument
> This may have multiple causes.
> Is 'client1' the correct filesystem name?
> Are the mount options correct?
> Check the syslog for more info.
>
> [root@client ~]# lctl ping 10.177.33.10@tcp
> 12345-0@lo
> 12345-10.177.33.10@tcp
> [root@client ~]# lctl ping 10.177.33.22@tcp
> 12345-0@lo
> 12345-10.177.33.22@tcp
>
> [root@client ~]# lctl
> lctl > lustre_build_version
> Lustre version: 2.6.0-RC2--PRISTINE-2.6.32-431.20.3.el6.x86_64
> lctl version: 2.6.0-RC2--PRISTINE-2.6.32-431.20.3.el6.x86_64
> lctl > modules
> add-symbol-file lustre/osc/osc.o 0xffffffffa1212000
> add-symbol-file lustre/mgc/mgc.o 0xffffffffa11f4000
> add-symbol-file lustre/llite/lustre.o 0xffffffffa10da000
> add-symbol-file lustre/lov/lov.o 0xffffffffa105e000
> add-symbol-file lustre/mdc/mdc.o 0xffffffffa101d000
> add-symbol-file lustre/fid/fid.o 0xffffffffa100f000
> add-symbol-file lustre/lmv/lmv.o 0xffffffffa0fc4000
> add-symbol-file lustre/fld/fld.o 0xffffffffa0fb3000
> add-symbol-file lustre/ptlrpc/ptlrpc.o 0xffffffffa0e0b000
> add-symbol-file lustre/obdclass/obdclass.o 0xffffffffa0bff000
> add-symbol-file lnet/klnds/socklnd/ksocklnd.o 0xffffffffa08f8000
> add-symbol-file lnet/lnet/lnet.o 0xffffffffa04db000
> add-symbol-file libcfs/libcfs/libcfs.o 0xffffffffa0441000
>
> This e-mail, including attachments, may include confidential and/or
> proprietary information, and may be used only by the person or entity
> to which it is addressed. If the reader of this e-mail is not the intended
> recipient or his or her authorized agent, the reader is hereby notified
> that any dissemination, distribution or copying of this e-mail is
> prohibited. If you have received this e-mail in error, please notify the
> sender by replying to this message and delete this e-mail immediately.
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss@lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
>
> --
> Jongwoo Han
> Principal consultant
> jw....@apexcns.com
> Tel: +82-2-3413-1704
> Mobile: +82-505-227-6108
> Fax: +82-2-544-7962

_______________________________________________
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss