OSSs & MDS
==========
Lustre 2.5.3
CentOS 6.5 kernel 2.6.32-431.23.3.el6_lustre.x86_64
Clients
=======
Lustre 1.8.x and 2.5.3
We had to disable a crashed OST, so on our combined MDS/MGS we ran
lctl conf_param lundwork-OST000e.osc.active=0
and that seems to have worked fine.
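For reference, this is roughly how we confirmed the OST is marked inactive (checked from a client; the exact device-name glob is just what matches our fsname lundwork, and may need adjusting):

  # the deactivated OSC should report active=0 on the clients
  lctl get_param osc.lundwork-OST000e-*.active

  # OST000e shows as inactive in the usage listing
  lfs df -h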
We now have an issue where the OST file-allocation algorithm no longer seems
to be working. It was working fine before the OST crash (we use the default
system striping parameters).

Now, when we move (rsync) large amounts of data onto this Lustre filesystem,
new files only land on the OSTs that appear before the failed one in the
"lfs df -h" listing. So we inactivated those OSTs once they started to get
full, and now only the next OST in the list (the first one after the failed
OST) gets written to, until we deactivate that one as well.
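In case it helps, this is roughly how we are observing the allocation (the mount point and file paths below are placeholders for ours):

  # default striping in effect on the filesystem root (system defaults)
  lfs getstripe -d /mnt/lundwork

  # which OST index a freshly rsync'd file actually landed on
  lfs getstripe /mnt/lundwork/path/to/new/file

  # per-OST usage; only the OSTs listed before OST000e fill up
  lfs df -h /mnt/lundwork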
thanks
-k