Re: [lustre-discuss] ksym errors on kmod-lustre RPM after 2.12.0 build against MOFED 4.5-1

2019-10-15 Thread Stephane Thiell
Hi Americo,

In my experience, you need a proper kmod-mlnx-ofa_kernel RPM installed for the 
Lustre build process to find the correct symbols.

To generate the kmod-mlnx-ofa_kernel RPM for the currently running kernel (in my 
case, a Lustre-patched server kernel), you can try:

$ rpmbuild --rebuild --define 'KMP 1' 
mlnx-ofa_kernel-4.5-OFED.4.5.1.0.1.1.*.src.rpm

Then with this kmod-mlnx-ofa_kernel RPM installed, the Lustre build scripts 
should be able to resolve all ksyms properly.
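
In case it helps, the rest of the sequence on my side looks roughly like this.
The rpmbuild output path assumes the default ~/rpmbuild location, the exact
package file names will differ with your MOFED version, and the configure line
is simply the one from your mail:

$ rpm -ivh ~/rpmbuild/RPMS/$(uname -m)/kmod-mlnx-ofa_kernel-*.rpm
$ rpm -ivh ~/rpmbuild/RPMS/$(uname -m)/mlnx-ofa_kernel-devel-*.rpm
$ cd lustre-release
$ sh ./autogen.sh
$ ./configure --disable-server --disable-ldiskfs --disable-tests \
    --with-o2ib=/usr/src/ofa_kernel/default
$ make rpms

On my systems the -devel package is what provides /usr/src/ofa_kernel/default;
if the MOFED installer already put it there, the second rpm -ivh is not needed.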

Hope that helps,

Stephane

> On Oct 9, 2019, at 10:59 AM, Americo Ojeda  
> wrote:
> 
> Hello,
> 
> I tried to install the Lustre client RPMs, but I get the following error:
> 
> [user@AC922 lustre-release]$ rpm -ivh --test
> lustre-client-2.12.0-1.el7.ppc64le.rpm
> error: Failed dependencies:
> kmod-lustre-client = 2.12.0 is needed by
> lustre-client-2.12.0-1.el7.ppc64le
> [americo@SinergiAC922 lustre-release]$ rpm -ivh
> lustre-client-2.12.0-1.el7.ppc64le.rpm
> kmod-lustre-client-2.12.0-1.el7.ppc64le.rpm
> error: Failed dependencies:
> ksym(__ib_create_cq) = 0x413f2519 is needed by
> kmod-lustre-client-2.12.0-1.el7.ppc64le
> ksym(__rdma_accept) = 0xec53e047 is needed by
> kmod-lustre-client-2.12.0-1.el7.ppc64le
> ksym(__rdma_create_id) = 0x693e9921 is needed by
> kmod-lustre-client-2.12.0-1.el7.ppc64le
> ksym(backport_dependency_symbol) = 0xb43a926b is needed by
> kmod-lustre-client-2.12.0-1.el7.ppc64le
> ksym(ib_get_dma_mr) = 0xc9d102d7 is needed by
> kmod-lustre-client-2.12.0-1.el7.ppc64le
> 
> Server: IBM Power System AC922 (POWER9, ppc64le)
> 
> OS: RHEL 7.5 Alternate (Linux SinergiAC922 4.14.0-49.13.1.el7a.ppc64le
> #1 SMP Mon Aug 27 07:37:11 EDT 2018 ppc64le ppc64le ppc64le GNU/Linux)
> 
> Mellanox OFED: 4.5-1 (MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.5alternate-ppc64le.tgz)
> 
> Lustre: 2.12.0, built with:
> 
> sh ./autogen.sh
> ./configure --disable-server --disable-ldiskfs --disable-tests
> --with-o2ib=/usr/src/ofa_kernel/default
> make rpms
> 
> Any suggestions?
> 
> -- 
> Americo Ojeda 




Re: [lustre-discuss] 8TiB LDISKFS MDT

2019-10-15 Thread Tamas Kazinczy

On 2019. 10. 15. 17:31, Mohr Jr, Richard Frank wrote:

On Oct 15, 2019, at 9:52 AM, Tamas Kazinczy  wrote:

With defaults (1024 for inode size and 2560 for inode ratio) I get only 4.8T 
usable space.

With those values, an inode is created for every 2560 bytes of MDT space.  
Since the inode is 1024 bytes, that leaves (2560 - 1024) = 1536 bytes of usable 
space out of every 2560 bytes (which is 60%).  So for an 8TB MDT, you get 8 * 
0.6 = 4.8 TB usable space.


Thank you for the clarification.



The choice you make will depend on how your MDT is used.  If you want to use 
the Data-on-MDT feature to store file data directly on the MDT, then you might 
want more usable space.  Keep in mind though that this will reduce the number 
of inodes you have, and if you run out of inodes, you cannot easily add more 
inodes to your MDT. (You would probably need to look into adding another MDT 
in that case.)  Running out of inodes means that you can’t create new Lustre 
files even if you still have space left on the OSTs.  At the other end of the 
spectrum, if you think you will have lots of small files, then you could 
decrease the ratio to 2048 to get some more inodes.  If in doubt, I think the 
Lustre default values are pretty reasonable.


I am still not sure whether I want to use Data-on-MDT.

We might have lots of small files, but it is not yet clear how many.

To be safe, I think I would rather go for more inodes (say, with an inode 
ratio of 2304) than more space.
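
Following Rick's formula, a quick sanity check for that ratio (assuming the
full 8 TiB device and the default 1024-byte inode size) would be something
like:

$ awk 'BEGIN { size = 8 * 2^40; isize = 1024; ratio = 2304;
       printf "inodes: %.2e  usable: %.2f TiB\n",
              size / ratio, size * (ratio - isize) / ratio / 2^40 }'

which, if I am calculating correctly, comes out to roughly 3.8 billion inodes
and about 4.4 TiB of usable space.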



At LUG this year, I helped present a tutorial along with Dustin Leverman 
covering some sys admin aspects of Lustre.  One of the things I talked about 
was inode calculations.  It might have some useful info for you (slides are 
here: 
http://cdn.opensfs.org/wp-content/uploads/2019/07/LUG2019-Sysadmin-tutorial.pdf).


Thanks!


Cheers,

--
Tamás Kazinczy



Re: [lustre-discuss] 8TiB LDISKFS MDT

2019-10-15 Thread Mohr Jr, Richard Frank


> On Oct 15, 2019, at 9:52 AM, Tamas Kazinczy  
> wrote:
> 
> With defaults (1024 for inode size and 2560 for inode ratio) I get only 4.8T 
> usable space.

With those values, an inode is created for every 2560 bytes of MDT space.  
Since the inode is 1024 bytes, that leaves (2560 - 1024) = 1536 bytes of usable 
space out of every 2560 bytes (which is 60%).  So for an 8TB MDT, you get 8 * 
0.6 = 4.8 TB usable space.

> Increasing the inode ratio gives more usable space, up to 7.9T at 65536.

Increasing the inode ratio will result in far fewer inodes being created, but 
more usable space.  Using a ratio of 65536 will make about 98% of your space 
usable, so for an 8TB MDT that corresponds to about 7.9 TB.
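
If it helps: the inode size and ratio are just handed to the backing mke2fs, so
setting them explicitly at format time would look roughly like the line below.
The fsname, index, and device path are placeholders (and I am combining the MGS
and MDT on one target purely for brevity), so treat it as a sketch rather than
a tested command:

$ mkfs.lustre --mgs --mdt --fsname=testfs --index=0 \
    --mkfsoptions="-I 1024 -i 2560" /dev/sdX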

The choice you make will depend on how your MDT is used.  If you want to use 
the Data-on-MDT feature to store file data directly on the MDT, then you might 
want more usable space.  Keep in mind though that this will reduce the number 
of inodes you have, and if you run out of inodes, you cannot easily add more 
inodes to your MDT. (You would probably need to look into adding another MDT 
in that case.)  Running out of inodes means that you can’t create new Lustre 
files even if you still have space left on the OSTs.  At the other end of the 
spectrum, if you think you will have lots of small files, then you could 
decrease the ratio to 2048 to get some more inodes.  If in doubt, I think the 
Lustre default values are pretty reasonable.
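
One more note: once the filesystem is in use, "lfs df -i" from a client reports
inode totals and usage per MDT/OST, so you can keep an eye on how quickly the
MDT inodes are being consumed (the mount point here is just an example):

$ lfs df -i /mnt/testfs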

At LUG this year, I helped present a tutorial along with Dustin Leverman 
covering some sys admin aspects of Lustre.  One of the things I talked about 
was inode calculations.  It might have some useful info for you (slides are 
here: 
http://cdn.opensfs.org/wp-content/uploads/2019/07/LUG2019-Sysadmin-tutorial.pdf).

—
Rick Mohr
Senior HPC System Administrator
National Institute for Computational Sciences
University of Tennessee





[lustre-discuss] OST Lustre Write Time

2019-10-15 Thread Jack Marquez
Hi,
I am really new at this.
I would like to ask for your help with the following:
I have configured a Lustre setup with 67 nodes: 1 client, 1 MGS, 1 MDT, and
64 OSTs. I am writing a single 64GB file from the client with dd, and I have
set a stripe size of 1GB.
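
For reference, this is roughly what I am running (the mount point and file name
are just examples, and -c 64 is my assumption that the file should be spread
over all 64 OSTs):

$ lfs setstripe -S 1G -c 64 /mnt/lustre/bigfile
$ time dd if=/dev/zero of=/mnt/lustre/bigfile bs=1M count=65536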

How can I measure the write time each OST takes to store its 1GB?

At the moment, I am measuring the time it takes to write the whole file with
the 'time' command.

Thank you for your help!!



[lustre-discuss] 8TiB LDISKFS MDT

2019-10-15 Thread Tamas Kazinczy

Hi,

What is the proper way of creating an MDT with LDISKFS if my device is 8TiB?

I've already tried several combinations of inode size and inode ratio.

With defaults (1024 for inode size and 2560 for inode ratio) I get only 
4.8T usable space.


Increasing the inode ratio gives more usable space, up to 7.9T at 65536.

Is this OK? I am quite confused right now.


Thanks,

--
Tamás Kazinczy



Re: [lustre-discuss] Is it a good practice to use big OST?

2019-10-15 Thread Harr, Cameron
We run one OST per OSS and each OST is ~580TB. Lustre 2.8 or 2.10, ZFS 0.7.

On 10/8/19 10:50 AM, Carlson, Timothy S wrote:
I’ve been running 100-200TB OSTs making up small petabyte file systems for the 
last 4 or 5 years with no pain.  Lustre 2.5.x through current generation.

Plenty of ZFS rebuilds when I ran across a set of bad disks, and those went fine.

From: lustre-discuss  On Behalf Of w...@umich.edu
Sent: Tuesday, October 8, 2019 10:43 AM
To: lustre-discuss 
Subject: [lustre-discuss] Is it a good practice to use big OST?

Hi All
We recently purchased new storage hardware, which gives us the option of
creating big zpools for OSTs (>100TB per OST). I am wondering if anyone has
experience with using big OSTs, and whether that would impact the performance
of Lustre in a good or bad way?


Any comments or suggestions are appreciated!

Cheers!

-Wenjing
w...@umich.edu


