Thanks. Here is what I get:
[root@holylfs02oss06 ~]# tunefs.lustre --dryrun /dev/mapper/mpathd
Failed to initialize ZFS library: 256
checking for existing Lustre data: found
Read previous values:
Target: holylfs2-OST001f
Index: unassigned
Lustre FS:
Mount type: ldiskfs
Flags: 0x50
(needs_index update )
Persistent mount opts:
Parameters:
tunefs.lustre FATAL: must set target type: MDT,OST,MGS
tunefs.lustre: exiting with 22 (Invalid argument)
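[Editor's note: the dry run above shows the label lost its target type and index (Flags 0x50 = needs_index, update; "must set target type"). If the MGS side is otherwise intact, those values could in principle be re-stamped with tunefs.lustre. This is a hedged sketch only, assuming fsname holylfs2 and index 0x1f taken from the target name in the output; <mgs-nid> is a placeholder for the real MGS NID(s). Verify against the MGS before writing anything to the device.

```shell
# Re-check read-only first:
tunefs.lustre --dryrun /dev/mapper/mpathd

# If the label really lost its type and index, re-stamp them.
# Values below are assumptions taken from the dryrun output;
# <mgs-nid> must be replaced with the actual MGS NID(s).
tunefs.lustre --ost \
    --fsname=holylfs2 \
    --index=0x1f \
    --mgsnode=<mgs-nid> \
    /dev/mapper/mpathd
```
]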
-Paul Edmon-
On 7/22/2022 10:37 AM, Thomas Roth via lustre-discuss wrote:
You could look at what the device believes it's formatted with:

  tunefs.lustre --dryrun /dev/mapper/mpathd
When I do that here, I get something like
checking for existing Lustre data: found
Read previous values:
Target: idril-OST000e
Index: 14
Lustre FS: idril
Mount type: zfs
Flags: 0x2
(OST )
Persistent mount opts:
Parameters: mgsnode=10.20.6.64@o2ib4:10.20.6.69@o2ib4
...
That tells you the 'mount type' and the 'mgsnode'.
Regards
Thomas
On 20/07/2022 19.48, Paul Edmon via lustre-discuss wrote:
We have a filesystem running Lustre 2.10.4 in HA mode
using IML. One of our OSTs had some disk failures, and after
reconstruction of the RAID set it won't remount; instead it gives:
[root@holylfs02oss06 ~]# mount -t lustre /dev/mapper/mpathd
/mnt/holylfs2-OST001f
Failed to initialize ZFS library: 256
mount.lustre: missing option mgsnode=<nid>
The weird thing is that we didn't build this with ZFS; the devices
are all ldiskfs. We suspect some of the data on the disk is corrupt,
but we were wondering if anyone has seen this error before and
whether there is a solution.
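[Editor's note: the "Failed to initialize ZFS library: 256" line may simply mean the Lustre tools probed for ZFS support (libzfs) and found none; on an ldiskfs-only system that message is often harmless and unrelated to the real failure. To confirm what is actually on the device, the standard ext4 tools can read the superblock without modifying anything. A diagnostic sketch, using the device path from the output above:

```shell
# Identify the filesystem type on the block device (read-only):
blkid /dev/mapper/mpathd

# An ldiskfs target is ext4 under the hood; dump its superblock
# header (read-only). The volume name should show the Lustre
# target label, e.g. holylfs2-OST001f.
dumpe2fs -h /dev/mapper/mpathd
```
]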
-Paul Edmon-
_______________________________________________
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org