Sid,

Just saw Shane's message after I sent mine.  That link to ZFS snapshots might 
be more relevant than the LUG presentation I mentioned, but the concepts should 
be basically the same.

BTW, here's the link to the Lustre manual section on changing a server NID:
https://doc.lustre.org/lustre_manual.xhtml#lustremaint.changingservernid

That section (and the previous section on writeconf) should be helpful.
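
In case it saves a lookup, the gist of that procedure is below (just a sketch; double-check the exact flags against the manual). The pool/dataset names come from your zfs list output, and the new NID assumes 10.140.93.50 on o2ib:

    # 1. Stop the file system: unmount clients, then all MDTs/OSTs, MGS last.
    # 2. With LNet configured for the new NID, point each target at the new
    #    MGS and flag its config logs for regeneration (run on every MDT/OST):
    tunefs.lustre --erase-param --mgsnode=10.140.93.50@o2ib --writeconf mdthome/home
    tunefs.lustre --erase-param --mgsnode=10.140.93.50@o2ib --writeconf mdtlustre/lustre
    #    ...and the same on each OST device (see the manual for the MGS itself).
    # 3. Remount in order: MGS first, then MDT(s), then OST(s), then clients.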

--Rick

On 10/23/25, 12:27 PM, "Mohr, Rick" <[email protected]> wrote:

Sid,

If you are using ZFS and you can stand up the new servers ahead of time, I 
would consider using the zfs send/receive functionality to copy data from your 
old mgs/mds devices to the new ones. It works quite well, and you can even use 
it with snapshots while the current system is online. You'll just need to 
perform one final incremental send/receive after the file system is unmounted 
to make sure you have all the data. Then you can finish up with the writeconf.
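
In rough outline, that flow looks like this (a sketch only; the destination host "new-mgs" and pool "mgspool-new" are placeholders for whatever you build):

    # initial full copy while the old MGS is still in service
    zfs snapshot mgspool/mgt@mig1
    zfs send mgspool/mgt@mig1 | ssh new-mgs zfs receive mgspool-new/mgt

    # at cutover, unmount the MGS, then send just the delta
    zfs snapshot mgspool/mgt@mig2
    zfs send -i @mig1 mgspool/mgt@mig2 | ssh new-mgs zfs receive mgspool-new/mgt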

This old LUG presentation might be useful: 
https://www.opensfs.org/wp-content/uploads/2017/06/Wed06-CroweTom-lug17-ost_data_migration_using_ZFS.pdf

--Rick

On 10/23/25, 10:54 AM, "lustre-discuss on behalf of Andreas Dilger via lustre-discuss" <[email protected] on behalf of [email protected]> wrote:

This should be covered under "backup restore MDT" in the Lustre manual. The short 
answer is "tar --xattrs --xattrs-include='trusted.*' ...", and then run "writeconf" on 
all targets to regenerate the config with the new IP address.
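
Concretely, that amounts to something like the following (a sketch with placeholder mount points; the manual section mentioned above has the full procedure, including mounting the target at the backend-filesystem level first):

    # on the old MDS, with the MDT mounted as a plain backend filesystem
    cd /mnt/mdt_backend
    tar czf /backup/mdt.tgz --xattrs --xattrs-include='trusted.*' --sparse .

    # on the new MDS, into the freshly formatted (or received) target
    cd /mnt/new_mdt_backend
    tar xzpf /backup/mdt.tgz --xattrs --xattrs-include='trusted.*' --sparse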

On Oct 22, 2025, at 18:33, Sid Young via lustre-discuss <[email protected]> wrote:

G'Day all,
I'm researching the best way to move an MGS/MGT on ZFS from a CentOS 7.9 platform 
(Lustre 2.12.6, old hardware and old storage) to a new server running Oracle Linux 
8.10 with different storage (Lustre 2.15.5).

The MGS box also hosts two MDTs, "mdt-home / fsname=home" and 
"mdt-lustre / fsname=lustre", also on ZFS. After a successful MGS migration, I plan 
to move the MDS functionality to two new servers (one for /home and one for 
/lustre).

The MGS IP 10.140.93.42 needs to change to 10.140.93.50, and the MDS addresses will 
need to change later.

So far I can't work out the best way to achieve an MGS migration across 
platforms with an IP change. There are only 12 clients, so remounting 
filesystems is not an issue.

Does the OSS also need a config change when the MGS changes?

Some Info

[root@hpc-mds-02 ~]# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
mdthome           81.5G  4.12T    96K  /mdthome
mdthome/home      77.6G  4.12T  77.6G  /mdthome/home
mdtlustre         40.9G  5.00T    96K  /mdtlustre
mdtlustre/lustre  37.1G  5.00T  37.1G  /mdtlustre/lustre
mgspool           9.06M   860G    96K  /mgspool
mgspool/mgt       8.02M   860G  8.02M  /mgspool/mgt
[root@hpc-mds-02 ~]#

[root@hpc-mds-02 ~]# lctl dl
0 UP osd-zfs MGS-osd MGS-osd_UUID 4
1 UP mgs MGS MGS 38
2 UP mgc MGC10.140.93.42@o2ib a4723a3a-dd8a-667f-0128-71caf5cc56be 4
3 UP osd-zfs home-MDT0000-osd home-MDT0000-osd_UUID 10
4 UP mgc MGC10.140.93.41@o2ib 68dff2a2-29d9-1468-6ff0-6d99fa57d383 4
5 UP mds MDS MDS_uuid 2
6 UP lod home-MDT0000-mdtlov home-MDT0000-mdtlov_UUID 3
7 UP mdt home-MDT0000 home-MDT0000_UUID 40
8 UP mdd home-MDD0000 home-MDD0000_UUID 3
9 UP qmt home-QMT0000 home-QMT0000_UUID 3
10 UP osp home-OST0000-osc-MDT0000 home-MDT0000-mdtlov_UUID 4
11 UP osp home-OST0001-osc-MDT0000 home-MDT0000-mdtlov_UUID 4
12 UP osp home-OST0002-osc-MDT0000 home-MDT0000-mdtlov_UUID 4
13 UP osp home-OST0003-osc-MDT0000 home-MDT0000-mdtlov_UUID 4
14 UP lwp home-MDT0000-lwp-MDT0000 home-MDT0000-lwp-MDT0000_UUID 4
15 UP osd-zfs lustre-MDT0000-osd lustre-MDT0000-osd_UUID 12
16 UP lod lustre-MDT0000-mdtlov lustre-MDT0000-mdtlov_UUID 3
17 UP mdt lustre-MDT0000 lustre-MDT0000_UUID 44
18 UP mdd lustre-MDD0000 lustre-MDD0000_UUID 3
19 UP qmt lustre-QMT0000 lustre-QMT0000_UUID 3
20 UP osp lustre-OST0000-osc-MDT0000 lustre-MDT0000-mdtlov_UUID 4
21 UP osp lustre-OST0001-osc-MDT0000 lustre-MDT0000-mdtlov_UUID 4
22 UP osp lustre-OST0002-osc-MDT0000 lustre-MDT0000-mdtlov_UUID 4
23 UP osp lustre-OST0003-osc-MDT0000 lustre-MDT0000-mdtlov_UUID 4
24 UP osp lustre-OST0004-osc-MDT0000 lustre-MDT0000-mdtlov_UUID 4
25 UP osp lustre-OST0005-osc-MDT0000 lustre-MDT0000-mdtlov_UUID 4
26 UP lwp lustre-MDT0000-lwp-MDT0000 lustre-MDT0000-lwp-MDT0000_UUID 4
[root@hpc-mds-02 ~]#

[root@hpc-mds-02 ~]# zpool list
NAME        SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
mdthome    4.34T  81.5G  4.26T         -   49%   1%  1.00x  ONLINE  -
mdtlustre  5.20T  40.9G  5.16T         -   47%   0%  1.00x  ONLINE  -
mgspool     888G  9.12M   888G         -    0%   0%  1.00x  ONLINE  -
[root@hpc-mds-02 ~]#

  pool: mgspool
 state: ONLINE
  scan: scrub repaired 0B in 0h0m with 0 errors on Mon Jun 17 13:18:44 2024
config:

        NAME         STATE     READ WRITE CKSUM
        mgspool      ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            d3710M0  ONLINE       0     0     0
            d3710M1  ONLINE       0     0     0

errors: No known data errors
[root@hpc-mds-02 ~]#

Sid Young
Brisbane, Australia

Cheers, Andreas
—
Andreas Dilger
Lustre Principal Architect
Whamcloud/DDN

_______________________________________________
lustre-discuss mailing list
[email protected]
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
