Martin,

Have you tried doing a writeconf on everything to regenerate the config logs? If the fs is not in use atm, this might be a good time to try it.
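For reference, a writeconf is typically done by unmounting everything, running tunefs.lustre --writeconf on every target, and remounting in order (MGS/MDT first, then OSTs, then clients). A rough sketch below; the device paths and server hostnames are placeholders for your setup, so double-check against the Lustre manual before running anything:

```
# 1. Unmount all clients, then all OSTs, then the MDT (reverse mount order)
umount /lustre                       # on each client
umount /mnt/ost                      # on each OSS, for each OST
umount /mnt/mdt                      # on the MDS

# 2. Regenerate the config logs on every target
tunefs.lustre --writeconf /dev/mdtdev    # on the MDS (combined MGS/MDT assumed)
tunefs.lustre --writeconf /dev/ostdev    # on each OSS, for each OST

# 3. Remount in order: MDT first, then OSTs, then clients
mount -t lustre /dev/mdtdev /mnt/mdt     # on the MDS
mount -t lustre /dev/ostdev /mnt/ost     # on each OSS
mount -t lustre 192.168.6.1@tcp:/lustre /lustre   # on each client
```

This rewrites the configuration llogs from scratch at the next mount, which should clear out the stale record left by the replaced OST0003.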
--Rick

On 8/14/25, 3:11 AM, "lustre-discuss on behalf of BALVERS Martin via lustre-discuss" <[email protected] on behalf of [email protected]> wrote:

    Hi Thomas,

    Thanks for the reply. Here is the output:

    [root@mds ~]# lctl get_param osp.lustre-OST0003-osc-MDT0000.active
    osp.lustre-OST0003-osc-MDT0000.active=1

    The problem is caused by the fact that I replaced a previously added OST using the --replace option in the mkfs.lustre command. OST0004 was added correctly in one go, and that one is visible to the client, so I don't think it is a client problem. The client can reach the OSS that has OST0003.

    It seems that according to the MDS everything is fine. It is only the 'lfs df' command that shows a problem. Regular 'df' shows all the space of the lustre fs, but 'lfs df' does not.

    # df -h
    Filesystem               Size  Used  Avail  Use%  Mounted on
    192.168.6.1@tcp:/lustre  487T  223T  264T   46%   /lustre

    # lfs df -h
    UUID                   bytes   Used    Available  Use%  Mounted on
    lustre-MDT0000_UUID    11.4T   2.8T    8.6T       25%   /lustre[MDT:0]
    lustre-OST0000_UUID    97.2T   74.4T   22.8T      77%   /lustre[OST:0]
    lustre-OST0001_UUID    97.2T   74.0T   23.2T      77%   /lustre[OST:1]
    lustre-OST0002_UUID    97.2T   74.1T   23.1T      77%   /lustre[OST:2]
    OST0003                : Invalid argument
    lustre-OST0004_UUID    97.2T   7.0M    97.2T       1%   /lustre[OST:4]

    filesystem_summary:    388.9T  222.5T  166.4T     58%   /lustre

    The fs is not in use at the moment; no data has been written to it since adding OST0003 and 4. If there is a way to fix this by removing and adding OST0003 again, then I can try that, as long as the data on the other OSTs is not lost.

    Regards,
    Martin Balvers

_______________________________________________
lustre-discuss mailing list
[email protected]
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
