Hi Americo,
In my experience, you need a proper kmod-mlnx-ofa_kernel RPM installed for the
Lustre build process to find the correct symbols.
To generate the kmod-mlnx-ofa_kernel RPM for the current kernel (in my case,
Lustre patched, server-side), you can try:
$ rpmbuild --rebuild --define
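The command above is truncated in the archive. A typical full invocation looks roughly like the sketch below; the macro names follow the mlnx-ofa_kernel spec file, and the source RPM path and kernel source directory are assumptions you must adjust for your site:

```shell
# Hypothetical sketch: rebuild the MLNX OFED kernel module RPM against the
# running (Lustre-patched) kernel. Adjust the src.rpm path for your MOFED
# version and the K_SRC path for your distribution.
rpmbuild --rebuild \
    --define "KVERSION $(uname -r)" \
    --define "K_SRC /usr/src/kernels/$(uname -r)" \
    mlnx-ofa_kernel-*.src.rpm
```

The resulting kmod-mlnx-ofa_kernel RPM can then be installed before configuring the Lustre build, so that its Module.symvers is visible to the Lustre configure step.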
On 2019. 10. 15. 17:31, Mohr Jr, Richard Frank wrote:
> On Oct 15, 2019, at 9:52 AM, Tamas Kazinczy
> wrote:
>
> With defaults (1024 for inode size and 2560 for inode ratio) I get only 4.8T
> usable space.
With those values, an inode is created for every 2560 bytes of MDT space.
Since the inode is 1024 bytes, that leaves (2560 - 1024) = 1536 bytes per inode
of usable block space, which is why an 8TiB device yields only about
8 × (1536/2560) ≈ 4.8T usable.
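The arithmetic above can be checked in a few lines. The 8 TiB device size comes from the thread; the inode size and ratio are the ldiskfs defaults quoted there:

```python
# Check the inode-ratio arithmetic from the thread:
# one 1024-byte inode is allocated per 2560 bytes of MDT space.
mdt_bytes = 8 * 2**40    # 8 TiB MDT device, as in the thread
inode_size = 1024        # bytes consumed by each inode
inode_ratio = 2560       # one inode per this many bytes of space

inodes = mdt_bytes // inode_ratio
usable = mdt_bytes - inodes * inode_size

print(f"inodes: {inodes}")                  # roughly 3.4 billion inodes
print(f"usable: {usable / 2**40:.1f} TiB")  # about 4.8 TiB, matching the thread
```

This also shows the trade-off: raising the ratio (e.g. to 4096) increases the usable fraction but proportionally reduces the maximum number of files the MDT can hold.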
Hi,
I am really new to this, and I would like to ask for your help with the following:
I have configured a Lustre file system of 67 nodes: 1 client, 1 MGS, 1 MDT, and
64 OSTs. I am writing a single 64GB file from the client with dd,
and I have set a 1GB stripe.
How can I measure the write time that
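One common way to do this is to set the striping first with lfs setstripe and then wrap dd in time. The sketch below is an assumption about the intended layout (a 1 GiB stripe size spread across all 64 OSTs); the mount point and file name are placeholders:

```shell
# Hypothetical sketch: stripe the file 1 GiB wide across all OSTs (-c -1),
# then time a 64 GiB sequential write. oflag=direct bypasses the client
# page cache so the timing reflects actual OST writes, not cached data.
lfs setstripe -S 1G -c -1 /mnt/lustre/testfile
time dd if=/dev/zero of=/mnt/lustre/testfile bs=1M count=65536 oflag=direct
```

Note that dd itself prints the elapsed time and throughput in its final status line, so the time wrapper is mostly a cross-check.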
Hi,
what is the proper way to create an MDT with LDISKFS if my device is 8TiB?
I've already tried several combinations of inode size and inode ratio.
With the defaults (1024 for inode size and 2560 for inode ratio) I get only
4.8T of usable space.
Increasing the inode ratio gives more usable space
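For reference, the inode size and ratio can be passed through to ldiskfs at format time via --mkfsoptions. The sketch below is an assumption, not a recommendation: the device path, fsname, MGS NID, and the ratio of 4096 are all placeholders to adjust for your setup:

```shell
# Hypothetical sketch: format an MDT while overriding the ldiskfs inode
# size (-I) and inode ratio (-i). A larger -i leaves more usable space
# but lowers the total number of inodes (i.e. files) the MDT can hold.
mkfs.lustre --mdt --fsname=lustre --index=0 \
    --mgsnode=mgs@tcp \
    --mkfsoptions="-I 1024 -i 4096" \
    /dev/sdb
```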
We run one OST per OSS and each OST is ~580TB. Lustre 2.8 or 2.10, ZFS 0.7.
On 10/8/19 10:50 AM, Carlson, Timothy S wrote:
I've been running 100-200TB OSTs making up small petabyte file systems for the
last 4 or 5 years with no pain. Lustre 2.5.x through the current generation.
Plenty of ZFS