Hi folks,

Time to empty some OSTs so we can shut down some old arrays.  I've been following 
the docs at https://doc.lustre.org/lustre_manual.xhtml#lustremaint.remove_ost and 
am draining each OST with "lfs find /mnt/lustre/ -obd lustre-OST0060 | lfs_migrate -y" 
(substituting the various OST indices).  It's looking pretty good, but I do have a 
few questions:
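
For the record, what I'm doing across the OSTs is roughly equivalent to this loop 
(a sketch; the 96-119 range matches the decimal OST indices in the df output below):

  # drain OSTs 0x60 through 0x77 (decimal 96-119), one at a time
  for i in $(seq 96 119); do
      lfs find /mnt/lustre/ -obd lustre-$(printf 'OST%04x' "$i") | lfs_migrate -y
  done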

Q1) I've dealt with a few edge cases, missed files, etc., and now "lfs find" and 
"rbh-find" both show that the OSTs have nothing left on them, yet nearly all of 
them still have 236 inodes allocated.  Is that just per-OST overhead?
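
To be concrete, the check that comes back empty for each OST is along these lines 
(sketch, with one index substituted in):

  lfs find /mnt/lustre/ -obd lustre-OST0060 | wc -l    # prints 0 on every drained OST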

Q2) Also, one OST shows 237 inodes in use (lustre-OST0074_UUID below) but, 
again, "lfs find" says it's empty.  Is that a concern?

Q3) Lastly, this file system is under load.  Is it safe to deactivate the OSTs 
while we're running, or should I wait until our next maintenance outage?
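
To be clear, by "deactivate" I mean the steps from the manual section linked above, 
with our fsname and indices substituted in.  Sketch for one OST:

  # on the MDS: stop new object allocation on this OST
  lctl set_param osp.lustre-OST0060-osc-MDT*.max_create_count=0
  # on the MGS: permanently mark the OST inactive
  lctl conf_param lustre-OST0060.osc.active=0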

For reference:
[root@hpcpbs02 ~]# lfs df -i | sed -e 's/qimrb/lustre/'
UUID                      Inodes       IUsed       IFree IUse% Mounted on
...
lustre-OST0060_UUID      61002112         236    61001876   1% /mnt/lustre[OST:96]
lustre-OST0061_UUID      61002112         236    61001876   1% /mnt/lustre[OST:97]
lustre-OST0062_UUID      61002112         236    61001876   1% /mnt/lustre[OST:98]
lustre-OST0063_UUID      61002112         236    61001876   1% /mnt/lustre[OST:99]
lustre-OST0064_UUID      61002112         236    61001876   1% /mnt/lustre[OST:100]
lustre-OST0065_UUID      61002112         236    61001876   1% /mnt/lustre[OST:101]
lustre-OST0066_UUID      61002112         236    61001876   1% /mnt/lustre[OST:102]
lustre-OST0067_UUID      61002112         236    61001876   1% /mnt/lustre[OST:103]
lustre-OST0068_UUID      61002112         236    61001876   1% /mnt/lustre[OST:104]
lustre-OST0069_UUID      61002112         236    61001876   1% /mnt/lustre[OST:105]
lustre-OST006a_UUID      61002112         236    61001876   1% /mnt/lustre[OST:106]
lustre-OST006b_UUID      61002112         236    61001876   1% /mnt/lustre[OST:107]
lustre-OST006c_UUID      61002112         236    61001876   1% /mnt/lustre[OST:108]
lustre-OST006d_UUID      61002112         236    61001876   1% /mnt/lustre[OST:109]
lustre-OST006e_UUID      61002112         236    61001876   1% /mnt/lustre[OST:110]
lustre-OST006f_UUID      61002112         236    61001876   1% /mnt/lustre[OST:111]
lustre-OST0070_UUID      61002112         236    61001876   1% /mnt/lustre[OST:112]
lustre-OST0071_UUID      61002112         236    61001876   1% /mnt/lustre[OST:113]
lustre-OST0072_UUID      61002112         236    61001876   1% /mnt/lustre[OST:114]
lustre-OST0073_UUID      61002112         236    61001876   1% /mnt/lustre[OST:115]
lustre-OST0074_UUID      61002112         237    61001875   1% /mnt/lustre[OST:116]
lustre-OST0075_UUID      61002112         236    61001876   1% /mnt/lustre[OST:117]
lustre-OST0076_UUID      61002112         236    61001876   1% /mnt/lustre[OST:118]
lustre-OST0077_UUID      61002112         236    61001876   1% /mnt/lustre[OST:119]
...

Cheers!
Scott