Re: [lustre-discuss] High MDS load

2020-05-28 Thread Carlson, Timothy S
Since some mailers don't like attachments, I'll just paste in the script we use 
here.  

I call the script with

./parse.sh | sort -k3 -n

You just need to change out the name of your MDT in two places.

#!/bin/bash
#
# Clear the per-client export stats on the MDT and all OSTs, wait a few
# seconds, then print the per-client counters so the busiest clients stand out.
set -e
SLEEP=10

stats_clear()
{
    cd "$1"
    echo clear > clear
}

stats_print()
{
    cd "$1"
    echo "= $1 "
    for i in *; do
        [ -d "$i" ] || continue
        # Skip the snapshot_time header and ping entries; "|| true" keeps
        # set -e from aborting when nothing is left after the filtering.
        out=`cat "${i}/stats" | grep -v "snapshot_time" | grep -v "ping" || true`
        [ -n "$out" ] || continue
        # Unquoted on purpose: flattens each client's stats onto one line.
        echo $i $out
    done
    echo "="
    echo
}

# Change "lzfs-MDT" below (two places) to the name of your MDT.
for i in /proc/fs/lustre/mdt/lzfs-MDT /proc/fs/lustre/obdfilter/*OST*; do
    dir="${i}/exports"
    [ -d "$dir" ] || continue
    stats_clear "$dir"
done
echo "Waiting ${SLEEP}s after clearing stats"
sleep $SLEEP

for i in /proc/fs/lustre/mdt/lzfs-MDT /proc/fs/lustre/obdfilter/*OST*; do
    dir="${i}/exports"
    [ -d "$dir" ] || continue
    stats_print "$dir"
done




On 5/28/20, 9:28 AM, "lustre-discuss on behalf of Bernd Melchers" wrote:

>I have 2 MDSs and periodically on one of them (either at one time or
>another) peak above 300, causing the file system to basically stop.
>This lasts for a few minutes and then goes away.  We can't identify any
>one user running jobs at the times we see this, so it's hard to
>pinpoint this on a user doing something to cause it.   Could anyone
>point me in the direction of how to begin debugging this?  Any help is
>greatly appreciated.

I am not able to solve this problem, but...
We saw this behaviour (Lustre 2.12.3 and 2.12.4) together with BUG messages
from Lustre kernel threads in the kernel log (dmesg output), if I remember
correctly from the ll_ost_io threads on the OSS nodes, with other messages
on the MDS. At that time the Omni-Path interface was no longer pingable. We
were not able to say what crashed first, Omni-Path or the Lustre parts of
the kernel. Perhaps you can check whether your MDS nodes are pingable from
your clients (using the network interface of your Lustre installation).
Otherwise it is to be expected that you get a high load, because your
Lustre I/O threads cannot satisfy requests.

Kind regards
Bernd Melchers

-- 
Archiv- und Backup-Service | fab-serv...@zedat.fu-berlin.de
Freie Universität Berlin   | Tel. +49-30-838-55905

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] High MDS load

2020-05-28 Thread Cameron Harr

  
Are you using any Lustre monitoring tools? We use ltop from the LMT package
(https://github.com/LLNL/lmt), and during that time of high load you could
see if there are bursts of IOPS coming in. Running iotop or iostat might
also provide some insight into the load if it is I/O based.
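
For example, during one of the spikes something like the following could be
run on the MDS (assuming the LMT services for ltop and the sysstat/iotop
packages are in place; the 5-second interval is arbitrary):

ltop             # live MDT/OST operation rates collected by LMT
iostat -xm 5     # extended per-device utilization, 5-second intervals
iotop -o -d 5    # only the processes currently doing I/O (run as root)
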
Cameron

On 5/28/20 8:37 AM, Peeples, Heath wrote:

I have 2 MDSs and periodically on one of them (either at one time or
another) peak above 300, causing the file system to basically stop.  This
lasts for a few minutes and then goes away.  We can’t identify any one user
running jobs at the times we see this, so it’s hard to pinpoint this on a
user doing something to cause it.  Could anyone point me in the direction
of how to begin debugging this?  Any help is greatly appreciated.

Heath
  
  
  

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Lustre client modules

2020-05-28 Thread Phill Harvey-Smith

On 28/05/2020 17:40, Leonardo Saavedra wrote:

[...]
Remove the 2.9.0 lustre packages, then install 
lustre-client-2.12.4-1.el7.x86_64.rpm and 
kmod-lustre-client-2.12.4-1.el7.x86_64.rpm


Cheers to you and to Aurelien Degremont, who replied earlier saying the
same; that seemed to fix it.


Phill.



___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Lustre client modules

2020-05-28 Thread Leonardo Saavedra

On 5/28/20 2:57 AM, Phill Harvey-Smith wrote:

Right that worked and I have the following rpms in
$HOME/rpmbuild/RPMS/x86_64 :

# ls
kmod-lustre-client-2.12.4-1.el7.x86_64.rpm
lustre-client-2.12.4-1.el7.x86_64.rpm
lustre-client-debuginfo-2.12.4-1.el7.x86_64.rpm
lustre-iokit-2.12.4-1.el7.x86_64.rpm

However trying to install them with yum I get :

Loaded plugins: fastestmirror, langpacks
Examining kmod-lustre-client-2.12.4-1.el7.x86_64.rpm: 
kmod-lustre-client-2.12.4-1.el7.x86_64
Marking kmod-lustre-client-2.12.4-1.el7.x86_64.rpm as an update to 
kmod-lustre-client-2.9.0-1.el7.x86_64
Examining lustre-client-2.12.4-1.el7.x86_64.rpm: 
lustre-client-2.12.4-1.el7.x86_64
Marking lustre-client-2.12.4-1.el7.x86_64.rpm as an update to 
lustre-client-2.9.0-1.el7.x86_64
Examining lustre-client-debuginfo-2.12.4-1.el7.x86_64.rpm: 
lustre-client-debuginfo-2.12.4-1.el7.x86_64

Marking lustre-client-debuginfo-2.12.4-1.el7.x86_64.rpm to be installed
Examining lustre-iokit-2.12.4-1.el7.x86_64.rpm: 
lustre-iokit-2.12.4-1.el7.x86_64
Marking lustre-iokit-2.12.4-1.el7.x86_64.rpm as an update to 
lustre-iokit-2.9.0-1.el7.x86_64

Resolving Dependencies
--> Running transaction check
---> Package kmod-lustre-client.x86_64 0:2.9.0-1.el7 will be updated
--> Processing Dependency: kmod-lustre-client = 2.9.0 for package: 
lustre-client-tests-2.9.0-1.el7.x86_64

Loading mirror speeds from cached hostfile
 * base: centos.serverspace.co.uk
 * epel: lon.mirror.rackspace.com
 * extras: centos.serverspace.co.uk
 * updates: centos.mirrors.nublue.co.uk
--> Processing Dependency: ksym(class_find_client_obd) = 0x7fc892aa 
for package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(class_name2obd) = 0x2a2fe6c0 for 
package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(class_register_type) = 0xc4cc2c4f for 
package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64



[...]
Remove the 2.9.0 lustre packages, then install 
lustre-client-2.12.4-1.el7.x86_64.rpm and 
kmod-lustre-client-2.12.4-1.el7.x86_64.rpm
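
In case it's useful, a minimal sketch of those two steps (assuming the
freshly built RPMs are still in $HOME/rpmbuild/RPMS/x86_64):

# remove the installed 2.9.0 client packages (including the -tests ones)
yum remove "lustre-client*" "kmod-lustre-client*"

# install the two rebuilt 2.12.4 packages
cd $HOME/rpmbuild/RPMS/x86_64
yum localinstall lustre-client-2.12.4-1.el7.x86_64.rpm \
    kmod-lustre-client-2.12.4-1.el7.x86_64.rpm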



Leo Saavedra
National Radio Astronomy Observatory
http://www.nrao.edu
+1-575-8357033

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] High MDS load

2020-05-28 Thread Chad DeWitt
Hi Heath,

Hope you're doing well!

Your mileage may vary (and quite frankly, there may be better approaches),
but this is a quick and dirty set of steps to find which client is issuing
a large number of metadata operations:


   - Log into the affected MDS.


   - Change into the exports directory.

cd /proc/fs/lustre/mdt/**/exports/


   - OPTIONAL: Set all your stats to zero and clear out stale clients. (If
   you don't want to do this step, you don't really have to, but it does make
   it easier to see the stats if you are starting with a clean slate. In fact,
   you may want to skip this the first time through and just look for high
   numbers. If a particular client is the source of the issue, the stats
   should clearly be higher for that client when compared to the others.)

echo "C" > clear


   - Wait for a few seconds and dump the stats.

for client in $( ls -d */ ) ; do echo && echo && echo ${client} && cat ${client}/stats && echo ; done


You'll get a listing of stats for each mounted client like so:

open              278676 samples [reqs]
close             278629 samples [reqs]
mknod               2320 samples [reqs]
unlink               495 samples [reqs]
mkdir                575 samples [reqs]
rename              1534 samples [reqs]
getattr           277552 samples [reqs]
setattr              550 samples [reqs]
getxattr            2742 samples [reqs]
statfs            350058 samples [reqs]
samedir_rename      1534 samples [reqs]


(Don't worry if some of the clients give back what appears to be empty
stats. That just means they are mounted, but have not yet performed any
metadata operations.) From this data, you are looking for any "high"
samples.  The client with the high samples is usually the culprit.  For the
example client stats above, I would look to see what process(es) on this
client is listing, opening, and then closing files in Lustre... The
advantage with this method is you are seeing exactly which metadata
operations are occurring. (I know there are also various utilities included
with Lustre that may give this information as well, but I just go to the
source.)
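
If there are many clients, a rough ranking can help. A sketch along these
lines, run from the same exports directory (getattr is just the operation
picked for this example), prints per-client sample counts, busiest first:

# print one operation's sample count per client export, highest first
for client in */ ; do
    n=$(awk '$1 == "getattr" {print $2}' "${client}/stats" 2>/dev/null)
    echo "${n:-0} ${client%/}"
done | sort -rn | head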

Once you find the client, you can use various commands, such as mount and
lsof to get a better understanding of what may be hitting Lustre.
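
For instance, on that client (the mount point below is just a placeholder):

mount -t lustre      # list the Lustre mounts on the client
lsof /mnt/lustre     # processes holding files open under that mount (adjust the path)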

Some of the more common issues I've found that can cause a high MDS load:

   - Listing a directory containing a large number of files. (Instead, unalias
   ls or, better yet, use lfs find; see the example after this list.)
   - Removing many files.
   - Opening and closing many files. (It may be better to move that data over to
   another file system, such as XFS, etc.  We keep some of our deep learning
   off Lustre because of the sheer number of small files.)
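
As a quick illustration of the lfs find alternative mentioned above (the
path is only an example), a plain name listing that avoids the per-file
stat a colorized or aliased ls would trigger:

lfs find /lustre/project/bigdir --maxdepth 1            # names only
lfs find /lustre/project/bigdir --maxdepth 1 --type f   # regular files only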

Of course the actual mitigation of the load depends on what the user is
attempting to do...

I hope this helps...

Cheers,
Chad



Chad DeWitt, CISSP

UNC Charlotte | ITS – University Research Computing

ccdew...@uncc.edu | www.uncc.edu




If you are not the intended recipient of this transmission or a person
responsible for delivering it to the intended recipient, any disclosure,
copying, distribution, or other use of any of the information in this
transmission is strictly prohibited. If you have received this transmission
in error, please notify me immediately by reply email or by telephone at
704-687-7802. Thank you.


On Thu, May 28, 2020 at 11:37 AM Peeples, Heath wrote:

> I have 2 MDSs and periodically on one of them (either at one time or
> another) peak above 300, causing the file system to basically stop.  This
> lasts for a few minutes and then goes away.  We can’t identify any one user
> running jobs at the times we see this, so it’s hard to pinpoint this on a
> user doing something to cause it.   Could anyone point me in the direction
> of how to begin debugging this?  Any help is greatly appreciated.
>
>
>
> Heath
> ___
> lustre-discuss mailing list
> lustre-discuss@lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>


___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] High MDS load

2020-05-28 Thread Bernd Melchers
>I have 2 MDSs and periodically on one of them (either at one time or
>another) peak above 300, causing the file system to basically stop.
>This lasts for a few minutes and then goes away.  We can't identify any
>one user running jobs at the times we see this, so it's hard to
>pinpoint this on a user doing something to cause it.   Could anyone
>point me in the direction of how to begin debugging this?  Any help is
>greatly appreciated.

I am not able to solve this problem, but...
We saw this behaviour (Lustre 2.12.3 and 2.12.4) together with BUG messages
from Lustre kernel threads in the kernel log (dmesg output), if I remember
correctly from the ll_ost_io threads on the OSS nodes, with other messages
on the MDS. At that time the Omni-Path interface was no longer pingable. We
were not able to say what crashed first, Omni-Path or the Lustre parts of
the kernel. Perhaps you can check whether your MDS nodes are pingable from
your clients (using the network interface of your Lustre installation).
Otherwise it is to be expected that you get a high load, because your
Lustre I/O threads cannot satisfy requests.
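
One way to check reachability over the Lustre network itself (LNet), rather
than plain ICMP, is an lctl ping from a client towards the MDS NID; the NID
below is only an example:

lctl list_nids               # the LNet NIDs of the node you run this on
lctl ping 192.168.1.10@o2ib  # LNet-level ping of the MDS NID (substitute your MDS NID)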

Kind regards
Bernd Melchers

-- 
Archiv- und Backup-Service | fab-serv...@zedat.fu-berlin.de
Freie Universität Berlin   | Tel. +49-30-838-55905
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] High MDS load

2020-05-28 Thread Peeples, Heath
I have 2 MDSs, and periodically the load on one of them (either at one time or
another) peaks above 300, causing the file system to basically stop.  This lasts
for a few minutes and then goes away.  We can't identify any one user running
jobs at the times we see this, so it's hard to pinpoint this on a user doing
something to cause it.  Could anyone point me in the direction of how to begin
debugging this?  Any help is greatly appreciated.

Heath
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Lustre client modules

2020-05-28 Thread Degremont, Aurelien
Hi Phil,

There are conflicts with your already installed Lustre 2.9.0 packages.
Based on the output you provided, you should remove 'kmod-lustre-client-tests'
first.

Actually, only kmod-lustre-client and lustre-client are required. You
probably don't need the other ones (lustre-iokit, lustre-client-debuginfo, ...).

Remove all the other Lustre packages except for these two and try again.
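
A rough sketch of that (check what is actually installed first; the package
names below are taken from your yum output):

rpm -qa | grep -i lustre                                  # see what is installed
yum remove lustre-client-tests kmod-lustre-client-tests   # drop the 2.9.0 test packages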
 

Aurélien

On 28/05/2020 10:57, "lustre-discuss on behalf of Phill Harvey-Smith" wrote:




On 27/05/2020 19:26, Leonardo Saavedra wrote:
> On 5/26/20 5:47 PM, Phill Harvey-Smith wrote:
> echo "%_topdir  $HOME/rpmbuild" >> .rpmmacros
> wget -c
> 
https://downloads.whamcloud.com/public/lustre/lustre-2.12.4/el7/client/SRPMS/lustre-2.12.4-1.src.rpm
> rpmbuild --clean  --rebuild --without servers --without lustre_tests
> lustre-2.12.4-1.src.rpm
> cd $HOME/rpmbuild/RPMS/x86_64

Right that worked and I have the following rpms in
$HOME/rpmbuild/RPMS/x86_64 :

# ls
kmod-lustre-client-2.12.4-1.el7.x86_64.rpm
lustre-client-2.12.4-1.el7.x86_64.rpm
lustre-client-debuginfo-2.12.4-1.el7.x86_64.rpm
lustre-iokit-2.12.4-1.el7.x86_64.rpm

However trying to install them with yum I get :

Loaded plugins: fastestmirror, langpacks
Examining kmod-lustre-client-2.12.4-1.el7.x86_64.rpm:
kmod-lustre-client-2.12.4-1.el7.x86_64
Marking kmod-lustre-client-2.12.4-1.el7.x86_64.rpm as an update to
kmod-lustre-client-2.9.0-1.el7.x86_64
Examining lustre-client-2.12.4-1.el7.x86_64.rpm:
lustre-client-2.12.4-1.el7.x86_64
Marking lustre-client-2.12.4-1.el7.x86_64.rpm as an update to
lustre-client-2.9.0-1.el7.x86_64
Examining lustre-client-debuginfo-2.12.4-1.el7.x86_64.rpm:
lustre-client-debuginfo-2.12.4-1.el7.x86_64
Marking lustre-client-debuginfo-2.12.4-1.el7.x86_64.rpm to be installed
Examining lustre-iokit-2.12.4-1.el7.x86_64.rpm:
lustre-iokit-2.12.4-1.el7.x86_64
Marking lustre-iokit-2.12.4-1.el7.x86_64.rpm as an update to
lustre-iokit-2.9.0-1.el7.x86_64
Resolving Dependencies
--> Running transaction check
---> Package kmod-lustre-client.x86_64 0:2.9.0-1.el7 will be updated
--> Processing Dependency: kmod-lustre-client = 2.9.0 for package:
lustre-client-tests-2.9.0-1.el7.x86_64
Loading mirror speeds from cached hostfile
  * base: centos.serverspace.co.uk
  * epel: lon.mirror.rackspace.com
  * extras: centos.serverspace.co.uk
  * updates: centos.mirrors.nublue.co.uk
--> Processing Dependency: ksym(class_find_client_obd) = 0x7fc892aa for
package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(class_name2obd) = 0x2a2fe6c0 for
package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(class_register_type) = 0xc4cc2c4f for
package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
[...]

Re: [lustre-discuss] Lustre client modules

2020-05-28 Thread Phill Harvey-Smith

On 27/05/2020 19:26, Leonardo Saavedra wrote:

On 5/26/20 5:47 PM, Phill Harvey-Smith wrote:
echo "%_topdir  $HOME/rpmbuild" >> .rpmmacros
wget -c 
https://downloads.whamcloud.com/public/lustre/lustre-2.12.4/el7/client/SRPMS/lustre-2.12.4-1.src.rpm
rpmbuild --clean  --rebuild --without servers --without lustre_tests 
lustre-2.12.4-1.src.rpm

cd $HOME/rpmbuild/RPMS/x86_64


Right, that worked, and I have the following rpms in
$HOME/rpmbuild/RPMS/x86_64:

# ls
kmod-lustre-client-2.12.4-1.el7.x86_64.rpm
lustre-client-2.12.4-1.el7.x86_64.rpm
lustre-client-debuginfo-2.12.4-1.el7.x86_64.rpm
lustre-iokit-2.12.4-1.el7.x86_64.rpm

However trying to install them with yum I get :

Loaded plugins: fastestmirror, langpacks
Examining kmod-lustre-client-2.12.4-1.el7.x86_64.rpm: 
kmod-lustre-client-2.12.4-1.el7.x86_64
Marking kmod-lustre-client-2.12.4-1.el7.x86_64.rpm as an update to 
kmod-lustre-client-2.9.0-1.el7.x86_64
Examining lustre-client-2.12.4-1.el7.x86_64.rpm: 
lustre-client-2.12.4-1.el7.x86_64
Marking lustre-client-2.12.4-1.el7.x86_64.rpm as an update to 
lustre-client-2.9.0-1.el7.x86_64
Examining lustre-client-debuginfo-2.12.4-1.el7.x86_64.rpm: 
lustre-client-debuginfo-2.12.4-1.el7.x86_64

Marking lustre-client-debuginfo-2.12.4-1.el7.x86_64.rpm to be installed
Examining lustre-iokit-2.12.4-1.el7.x86_64.rpm: 
lustre-iokit-2.12.4-1.el7.x86_64
Marking lustre-iokit-2.12.4-1.el7.x86_64.rpm as an update to 
lustre-iokit-2.9.0-1.el7.x86_64

Resolving Dependencies
--> Running transaction check
---> Package kmod-lustre-client.x86_64 0:2.9.0-1.el7 will be updated
--> Processing Dependency: kmod-lustre-client = 2.9.0 for package: 
lustre-client-tests-2.9.0-1.el7.x86_64

Loading mirror speeds from cached hostfile
 * base: centos.serverspace.co.uk
 * epel: lon.mirror.rackspace.com
 * extras: centos.serverspace.co.uk
 * updates: centos.mirrors.nublue.co.uk
--> Processing Dependency: ksym(class_find_client_obd) = 0x7fc892aa for 
package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(class_name2obd) = 0x2a2fe6c0 for 
package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(class_register_type) = 0xc4cc2c4f for 
package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(llog_cat_add) = 0xc5e4acf5 for package: 
kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(llog_cat_cancel_records) = 0x72fd39ee 
for package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(llog_cat_close) = 0xf83a61a8 for 
package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(llog_cat_process) = 0x79b2c569 for 
package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(llog_cat_reverse_process) = 0xd7510c21 
for package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(llog_cleanup) = 0x0632eadc for package: 
kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(llog_close) = 0xa6f1cf8b for package: 
kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(__llog_ctxt_put) = 0xe1c19687 for 
package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(llog_destroy) = 0xe12c11de for package: 
kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(llog_exist) = 0xa6594d74 for package: 
kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(llog_init_handle) = 0xe2107196 for 
package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(llog_open) = 0x9ba55f56 for package: 
kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(llog_open_create) = 0xd4bdcea7 for 
package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(llog_osd_ops) = 0x034860f6 for package: 
kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(llog_process) = 0x18a1b423 for package: 
kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(llog_reverse_process) = 0x4b183427 for 
package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(llog_setup) = 0x5029bcff for package: 
kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(llog_write) = 0x94fd16f4 for package: 
kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(lu_context_enter) = 0xffa84ad2 for 
package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(lu_context_exit) = 0x2d678501 for 
package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(lu_context_fini) = 0xf5361e15 for 
package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(lu_context_init) = 0x7f95d027 for 
package: kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing Dependency: ksym(lu_env_fini) = 0xc6a207d4 for package: 
kmod-lustre-client-tests-2.9.0-1.el7.x86_64
--> Processing