Hi all,

I deployed Lustre on some legacy hardware, so each of my (4) OSS's has
32GB of RAM. Our workflow frequently rereads the same 15GB indexes from
Lustre (they are striped across all OSS's) from every node in our cluster.
Given that, is there any way to increase the amount of memory that either
Lustre or the Linux kernel uses on the OSS's to cache files read from disk?
That would let much of the indexes be served from memory on the OSS's
rather than from disk.

I see a *lustre.memused_max = 48140176* parameter, but I'm not sure what it
does. If it matters, each of the 4 OSS's serves 1 OST consisting of a
software RAID10 across 4 SATA disks internal to that OSS.
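For context, here is how I've been poking at the cache-related tunables on
the OSS's. This is just a sketch from my reading of the docs; the comments
about what each parameter means are my assumptions, so corrections welcome:

```shell
# On an OSS: Lustre's internal memory allocations (I believe this is
# where the memused_max value I mentioned comes from -- it tracks
# Lustre's own allocations, not the kernel page cache).
lctl get_param memused memused_max

# OSS read-cache tunables I found. My assumption is that these control
# whether the OSS caches reads in the page cache at all, and the largest
# file size eligible for that cache:
lctl get_param obdfilter.*.read_cache_enable
lctl get_param obdfilter.*.readcache_max_filesize

# If that assumption is right, something like this would let our whole
# 15GB index files stay eligible for the OSS read cache:
# lctl set_param obdfilter.*.readcache_max_filesize=16G
```

Is that roughly the right set of knobs, or is the page cache on the OSS's
already doing this automatically?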

Any other suggestions for tuning for fast reads of large files would also be
greatly appreciated.

Thanks so much,
Jordan
_______________________________________________
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss
