> On Jul 27, 2018, at 1:56 PM, Andreas Dilger <adil...@whamcloud.com> wrote:
>
>> On Jul 27, 2018, at 10:24, Mohr Jr, Richard Frank (Rick Mohr)
>> <rm...@utk.edu> wrote:
>>
>> I am working on upgrading some Lustre servers. The servers currently run
>> lustre 2.8.0 with zfs 0.6.5, and I am looking to upgrade to lustre 2.10.4
>> with zfs 0.7.9. I was looking at the manual, and I did not see anything in
>> there that mentioned special steps when changing ZFS versions. Do I need to
>> run “zfs upgrade” on the underlying pools before upgrading the lustre
>> version?
>
> Running "zfs upgrade" will (potentially) enable new features in the
> filesystem, which means you cannot downgrade to ZFS 0.6.5 if you run into
> problems. New features in ZFS 0.7.x related to Lustre include large dnodes,
> inode quota accounting, and Multi-Modifier Protection (MMP). There are also
> a lot of performance improvements, so it is definitely a good upgrade.
>
> I'd recommend doing the "zfs upgrade" step some time after the Lustre+ZFS
> (and probably also kernel) software update. This gives you some time with
> the new Lustre and ZFS versions to ensure they are working well in your
> environment, and after that is done you can run the "zfs upgrade". This
> also allows isolating any potential issues to the new ZFS features, instead
> of lumping it all together with the software updates.
>
> You need to explicitly enable the MMP support, by setting the pool property
> "multihost=on", and inode quota with "feature@userobj_accounting=enabled",
> when you are ready to use these features.
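(For anyone finding this thread later: a rough sketch of the commands Andreas
is describing, assuming a hypothetical pool named "ostpool". This is not from
the thread itself; check the zpool(8) and zpool-features(7) man pages for your
ZFS release before running anything.)

    # Enable Multi-Modifier Protection; this is a pool property, not a feature flag
    zpool set multihost=on ostpool

    # Enable per-user/group object accounting so Lustre can do inode quotas
    zpool set feature@userobj_accounting=enabled ostpool

    # The upgrade step discussed above: enable all remaining supported
    # feature flags on the pool (this is what prevents a downgrade to 0.6.5)
    zpool upgrade ostpool

    # Verify what is currently set/enabled
    zpool get multihost ostpool
    zpool get feature@userobj_accounting ostpool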
Thanks for all the info. One of my main reasons for asking about the zfs
upgrade step was related to the feasibility of downgrading if we ran into
problems. I will plan on postponing the zfs upgrade step until after we have
been running the new lustre version for a while.

BTW, I didn’t find anything in the lustre manual about setting multihost=on
or enabling the userobj_accounting feature. Is that info documented elsewhere?
Or is it going to be added to the manual in the future?

--
Rick Mohr
Senior HPC System Administrator
National Institute for Computational Sciences
http://www.nics.tennessee.edu

_______________________________________________
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org