I'm trying to determine the issue causing the log entries below on our
backup Lustre FS. It seems to happen once or twice an hour during an
rsync of another 20 TB Lustre FS. I don't see any errors on the 20 TB
Lustre FS. I've read that it's not a good idea to run the MDT/MGS/OST/
ALL on the same server so
On Aug 18, 6:43 am, Andreas Dilger <[EMAIL PROTECTED]> wrote:
> On Aug 09, 2008 05:06 -0700, daledude wrote:
>
> > Is there a tool that shows what files are being accessed? Sort of
> > like inotify, but not inotify? I'd like to compile file access
> > statistics to try and balance the most accessed files across the
> > OSTs better.
Is there a tool that shows what files are being accessed? Sort of
like inotify, but not inotify? I'd like to compile file access
statistics to try and balance the most accessed files across the OSTs
better.
Thanks for any tips,
Dale
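Whatever event source ends up providing the access data (an inotify watcher, audit logs, etc.), the statistics side is just tallying paths. Below is a minimal, hedged sketch in Python: it assumes a hypothetical plain-text access log with one path per event line (not any actual Lustre tool output) and reports the most frequently accessed files, which could then inform how files are spread across OSTs.

```python
# Sketch only: tally per-file access counts from a hypothetical
# plain-text access log (one accessed path per line) and report
# the top N files. The log format is a stand-in for whatever
# event source (e.g. an inotify watcher) actually produces.
from collections import Counter


def top_accessed(log_lines, n=3):
    """Return the n most frequently seen paths with their counts."""
    counts = Counter(line.strip() for line in log_lines if line.strip())
    return counts.most_common(n)


if __name__ == "__main__":
    # Hypothetical sample events standing in for a real access log.
    sample = [
        "/lustre/data/a.dat",
        "/lustre/data/b.dat",
        "/lustre/data/a.dat",
        "/lustre/data/c.dat",
        "/lustre/data/a.dat",
    ]
    for path, count in top_accessed(sample, n=2):
        print(path, count)
```

In practice the counts would be collected over a window (say, a day) before deciding which hot files to restripe.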
On Jul 9, 2:19 am, Andreas Dilger <[EMAIL PROTECTED]> wrote:
> If this is a brand-new installation (i.e. there isn't any data on the
> RAID that you want to use/keep) then you could run "llverdev" on the
> device to see if the device is working properly. A "partial" (-p) run
> is enough to do a q
> The problem solved itself. The next day I found the /fastfs Lustre
> filesystem mounted everywhere without problems.
Do you know how the problem solved itself, or did you fix something? I'm
having a similar issue, but I can't tell if it's because the RAID
controller (PERC 6/E) just can't get past 3.8 TB o