Re: [lustre-discuss] ksym errors on kmod-lustre RPM after 2.12.0 build against MOFED 4.5-1

2019-10-15 Thread Stephane Thiell
Hi Americo, In my experience, you need a proper kmod-mlnx-ofa_kernel RPM installed for the Lustre build process to find the correct symbols. To generate the kmod-mlnx-ofa_kernel RPM for the current kernel (in my case, Lustre-patched, server-side), you can try: $ rpmbuild --rebuild --define
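A minimal sketch of what such a rebuild of the MOFED kernel module source RPM against the running (Lustre-patched) kernel can look like; the macro names (KVERSION, K_SRC) and the src.rpm path below are assumptions that depend on the MLNX_OFED release and its spec file, not the exact command from this message:

  # Hedged sketch: rebuild mlnx-ofa_kernel for the currently running kernel.
  # KVERSION/K_SRC macro names and the src.rpm path are assumptions -- check
  # the spec file shipped with your MLNX_OFED distribution.
  $ rpmbuild --rebuild \
      --define "KVERSION $(uname -r)" \
      --define "K_SRC /usr/src/kernels/$(uname -r)" \
      /path/to/mlnx-ofa_kernel-*.src.rpm
  # Install the resulting kmod-mlnx-ofa_kernel RPM (from ~/rpmbuild/RPMS/)
  # before building kmod-lustre so the ksym dependencies can resolve.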

Re: [lustre-discuss] 8TiB LDISKFS MDT

2019-10-15 Thread Tamas Kazinczy
On 2019. 10. 15. 17:31, Mohr Jr, Richard Frank wrote: On Oct 15, 2019, at 9:52 AM, Tamas Kazinczy wrote: With defaults (1024 for inode size and 2560 for inode ratio) I get only 4.8T usable space. With those values, an inode is created for every 2560 bytes of MDT space. Since the inode is

Re: [lustre-discuss] 8TiB LDISKFS MDT

2019-10-15 Thread Mohr Jr, Richard Frank
> On Oct 15, 2019, at 9:52 AM, Tamas Kazinczy wrote: > With defaults (1024 for inode size and 2560 for inode ratio) I get only 4.8T usable space. With those values, an inode is created for every 2560 bytes of MDT space. Since the inode is 1024 bytes, that leaves (2560 - 1024) = 1536
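A hedged back-of-the-envelope check of that arithmetic (the 8 TiB device size comes from the original question; the calculation itself is an illustration, not quoted from the thread): with one 1024-byte inode allocated per 2560 bytes of MDT space, only (2560 - 1024)/2560 = 60% of the device is left for everything else, which on 8 TiB is about 4.8 TiB, matching the reported figure.

  # Fraction of the MDT remaining after inode tables are carved out,
  # and the resulting usable space on an 8 TiB device (illustrative only).
  $ echo "scale=2; (2560 - 1024) / 2560" | bc
  .60
  $ echo "scale=2; 8 * (2560 - 1024) / 2560" | bc
  4.80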

[lustre-discuss] OST Lustre Write Time

2019-10-15 Thread Jack Marquez
Hi, I am really new at this. I want to ask you if you can help me with this: I have configured a Lustre setup of 67 nodes: 1 client, 1 MGS, 1 MDT, and 64 OSTs. I am writing one 64GB file from the client with the dd command, and I have set a stripe size of 1GB. How can I measure the write time that
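A hedged sketch of one way to set up and time such a write (the mount point /mnt/lustre, the file name, and the exact sizes below are assumptions chosen to match the description, not commands from the thread):

  # Assumed mount point and file name -- adjust to your setup.
  # Stripe the file across all 64 OSTs with a 1 GiB stripe size.
  $ lfs setstripe -c -1 -S 1G /mnt/lustre/testfile
  # Time the 64 GiB write from the client; 'time' reports the elapsed
  # wall-clock duration of the whole dd run.
  $ time dd if=/dev/zero of=/mnt/lustre/testfile bs=1M count=65536 oflag=direct
  # Per-OST write statistics can also be inspected on each OSS:
  $ lctl get_param obdfilter.*.stats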

[lustre-discuss] 8TiB LDISKFS MDT

2019-10-15 Thread Tamas Kazinczy
Hi, what is the proper way of creating an MDT with LDISKFS if my device is 8TiB? I've already tried several combinations of inode size and inode ratio. With defaults (1024 for inode size and 2560 for inode ratio) I get only 4.8T usable space. Increasing the inode ratio gives more usable space
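A hedged sketch of how the inode size and bytes-per-inode ratio can be passed through to mke2fs when formatting the MDT (the device name, fsname, index, and MGS NID are placeholders; -I is the inode size and -i is the bytes-per-inode ratio, and raising -i trades inode count for usable space):

  # Placeholder device/fsname/index/MGS -- only the -i/-I mkfs options matter here.
  $ mkfs.lustre --mdt --fsname=testfs --index=0 \
      --mgsnode=mgs@tcp \
      --mkfsoptions="-I 1024 -i 2560" \
      /dev/sdX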

Re: [lustre-discuss] Is it a good practice to use big OST?

2019-10-15 Thread Harr, Cameron
We run one OST per OSS and each OST is ~580TB. Lustre 2.8 or 2.10, ZFS 0.7. On 10/8/19 10:50 AM, Carlson, Timothy S wrote: I’ve been running 100-200TB OSTs making up small petabyte file systems for the last 4 or 5 years with no pain. Lustre 2.5.x through current generation. Plenty of ZFS