Thank you for your explanation. I think I understand what you mean. I will
test in a small cluster and measure the number of files/locks.
Andreas Dilger wrote on Monday, March 11, 2024 at 17:34:
All of the numbers in this example are estimates/approximations to give an idea
about the amount of memory that the MDS may need under normal operating
circumstances. However, the MDS will also continue to function with more or
less memory. The actual amount of memory in use will change very
Hi, Andreas.
Thank you for your reply.
Can I treat 256 files per core as an empirical parameter, or does the
value '256' need to be tested against the actual hardware? Additionally, in
the calculation "12 interactive clients * 100,000 files * 2KB =
2400 MB," is the number '100,000'
These numbers are just estimates; you can use values more suitable to your
workload.
Similarly, 32-core clients may be on the low side these days. NVIDIA DGX nodes
have 256 cores, though you may not have 1024 of them.
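To make the core-count scaling concrete, here is a quick back-of-the-envelope sketch. It assumes the thread's estimates of 256 cached files per core and roughly 2KB of MDS memory per cached file; both are approximations, not fixed parameters:

```python
FILES_PER_CORE = 256  # estimated cached files per client core (from this thread)
KB_PER_FILE = 2       # ~2KB of MDS memory per cached file (estimate)

def mds_mb_per_client(cores: int) -> float:
    """Approximate MDS memory (MB) consumed by one client's file cache."""
    return cores * FILES_PER_CORE * KB_PER_FILE / 1024

print(mds_mb_per_client(32))   # a 32-core client  -> 16.0 MB
print(mds_mb_per_client(256))  # a 256-core DGX node -> 128.0 MB
```

Under these assumptions, moving from 32-core to 256-core clients raises the per-client footprint on the MDS by 8x, which is why generous RAM headroom is the cheap way out.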
The net answer is that having 64GB+ of RAM is inexpensive these days and
Section 5.5.2.1 of the Lustre Manual gives this example:

"For example, for a single MDT on an MDS with 1,024 compute nodes, 12
interactive login nodes, and a 20 million file working set (of which 9
million files are cached on the clients at one time):

Operating system overhead = 4096 MB
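The terms of the manual's example that appear in this thread can be reproduced with a short calculation. This is only a sketch: it uses the thread's estimates (32 cores per compute node, 256 files per core, ~2KB per file), and the terms the excerpt truncates are omitted, so the printed total is not the manual's full figure:

```python
# Rough MDS memory estimate mirroring the manual's example.
# All figures are estimates quoted in this thread.
os_overhead_mb = 4096

compute_clients, cores, files_per_core, kb_per_file = 1024, 32, 256, 2
compute_mb = compute_clients * cores * files_per_core * kb_per_file // 1024

interactive_clients, files_per_client = 12, 100_000
interactive_kb = interactive_clients * files_per_client * kb_per_file
# 2,400,000 KB, which the manual rounds to 2400 MB

partial_total_mb = os_overhead_mb + compute_mb + interactive_kb / 1024
print(compute_mb)        # 16384 MB for the compute nodes
print(partial_total_mb)  # partial sum; the manual's example has further terms
```

Because every factor here is a workload estimate, the useful takeaway is the shape of the formula (clients x cached files x per-file memory), not the exact totals.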