Jim, I'm forwarding this to lustre-discuss to get a broader community input.
I'm sure somebody has some experience with this.
Begin forwarded message:
>
> I am looking for information on how Lustre assigns and holds pages on client
> nodes across jobs. The motivation is that we want to make "
Easy way to reduce the client memory used by "Lustre" is to have an
Epilogue script run by SGE (or whatever scheduler/resource manager) that
does something like this on every node:
# sync ; sleep 1 ; sync
# echo 3 > /proc/sys/vm/drop_caches
Kevin
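For reference, the two commands above could be wrapped into a minimal
epilogue script. This is only a sketch, not a tested SGE epilogue; the
permission guard around drop_caches is an addition so the script degrades
gracefully when the epilogue is not run as root:

```shell
#!/bin/sh
# Minimal epilogue sketch: flush dirty pages, then ask the kernel to drop
# clean page cache, dentries, and inodes ("3" selects all three).
sync
sleep 1
sync
# Writing drop_caches requires root; skip quietly if we lack permission.
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches
fi
```

Note that drop_caches only discards clean caches and never writes dirty
data out itself, which is why the sync calls come first.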
Nathan Rutman wrote:
> Jim, I'm forwarding this
On 2010-08-19, at 16:44, Kevin Van Maren wrote:
> Easy way to reduce the client memory used by "Lustre" is to have an
> Epilogue script run by SGE (or whatever scheduler/resource manager) that
> does something like this on every node:
> # sync ; sleep 1 ; sync
> # echo 3 > /proc/sys/vm/drop_caches
Hello!
On Aug 19, 2010, at 7:07 PM, Andreas Dilger wrote:
> If you want to flush all the memory used by a Lustre client between jobs, you
> can do "lctl set_param ldlm.namespaces.*.lru_size=clear". Unlike Kevin's
> suggestion it is Lustre-specific, while drop_caches will try to flush memory
>
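The two approaches are not mutually exclusive. A sketch of the
Lustre-specific flush, guarded so it is a no-op on nodes without lctl
(the guard is an assumption added here, not part of the suggestion above):

```shell
# Clear the client's DLM lock LRU; cached pages pinned by those locks
# are released along with them. Locks are re-acquired on the next access,
# so this is safe to run between jobs.
if command -v lctl >/dev/null 2>&1; then
    lctl set_param ldlm.namespaces.*.lru_size=clear
fi
```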
Last week there was an article on lwn.net about "Transparent hugepages"
discussed during "The fourth Linux storage and filesystem summit". According
to that article, we might be lucky and those patches might go into RHEL6.
If you do not have an lwn.net account you might need to wait a few weeks
to read it.
Hi Andreas,
On 08/19/2010 06:07 PM, Andreas Dilger wrote:
> On 2010-08-19, at 16:44, Kevin Van Maren wrote:
>> Easy way to reduce the client memory used by "Lustre" is to have
>> an Epilogue script run by SGE (or whatever scheduler/resource
>> manager) that does something like this on every node:
On 08/19/2010 11:10 PM, Oleg Drokin wrote:
> Hello!
>
> On Aug 19, 2010, at 7:07 PM, Andreas Dilger wrote:
>> If you want to flush all the memory used by a Lustre client
>> between jobs, you can do "lctl set_param
>> ldlm.namespaces.*.lru_size=clear". Unlike Kevin's suggestion it is
>> Lustre-specific
Oleg,
Thanks for the clarification.
Jim Browne
At 11:10 PM 8/19/2010, Oleg Drokin wrote:
>Hello!
>
>On Aug 19, 2010, at 7:07 PM, Andreas Dilger wrote:
> > If you want to flush all the memory used by a Lustre client
> > between jobs, you can do "lctl set_param
> > ldlm.namespaces.*.lru_size=clear".
On 2010-08-20, at 07:21, John Hammond wrote:
> Indeed, thanks. On Ranger, the compute nodes use compact flash drives for /,
> and so they depend on tmpfs's for /tmp, /var/run, /var/log, and of course
> /dev/shm. So cleaning up these ram backed filesystems as much as practical
> before asking f
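A cleanup of those ram-backed filesystems could look like the fragment
below. The directory list and the one-day age cutoff are assumptions to
be tuned per site; for safety this demo defaults to a throwaway directory,
where a real epilogue would set TMPFS_DIRS="/tmp /dev/shm":

```shell
#!/bin/sh
# Epilogue fragment sketch: remove stale job files from tmpfs mounts so
# less pinned memory remains before any cache flush. TMPFS_DIRS defaults
# to a fresh temporary directory so the sketch is harmless to run as-is.
TMPFS_DIRS="${TMPFS_DIRS:-$(mktemp -d)}"
for d in $TMPFS_DIRS; do
    # -xdev stays on the tmpfs itself; -mtime +1 spares files under a day old
    find "$d" -xdev -type f -mtime +1 -delete
done
```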