Are you sure the fabric is up when LNet starts at boot? Double-check the order
in which your services start, and make sure LNet waits for the fabric/network
before starting.
Thanks,
Keith
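(For anyone hitting the same thing: one way to enforce this ordering, assuming a stock systemd setup, is a drop-in for lnet.service. The sketch below writes the drop-in into a scratch directory so it is safe to run anywhere; on the real host the file would go under /etc/systemd/system/lnet.service.d/.)

```shell
# Sketch, assuming stock systemd: make lnet.service start only after the
# network is fully online. Written to a scratch directory here for safety;
# on the real host use /etc/systemd/system/lnet.service.d/.
root="$(mktemp -d)"
dir="$root/systemd/system/lnet.service.d"
mkdir -p "$dir"
cat > "$dir/wait-online.conf" <<'EOF'
[Unit]
After=network-online.target
Wants=network-online.target
EOF
cat "$dir/wait-online.conf"
# On the real host, follow up with:
#   systemctl daemon-reload
#   systemctl enable NetworkManager-wait-online.service   # or systemd-networkd-wait-online
```

Note that network-online.target only means something if a wait-online service is enabled, hence the last step.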
> -----Original Message-----
> From: lustre-discuss [mailto:lustre-discuss-boun...@lists.lustre.org] On
> Behal
Hello,
I have built and installed the Lustre client 2.10.4-1 on CentOS 7.3
(3.10.0-514.el7.x86_64), and on reboot LNet fails with:
[root@scissd1801 ~]# systemctl status lnet.service
● lnet.service - lnet management
Loaded: loaded (/usr/lib/systemd/system/lnet.service; enabled; vendor
preset: di
> On Aug 13, 2018, at 2:25 PM, David Cohen
> wrote:
>
> the fstab line I use for mounting the Lustre filesystem:
>
> oss03@tcp:oss01@tcp:/fsname /storage lustre flock,user_xattr,defaults 0 0
OK. That looks correct.
> The MDS is also configured for failover (unsuccessfully):
>
The fstab line I use for mounting the Lustre filesystem:
oss03@tcp:oss01@tcp:/fsname /storage lustre flock,user_xattr,defaults 0 0
The MDS is also configured for failover (unsuccessfully):
tunefs.lustre --writeconf --erase-params --fsname=fsname --mgs
--mountfsoptions='user_xattr,error
> On Aug 13, 2018, at 7:14 AM, David Cohen
> wrote:
>
> I installed a new 2.10.4 Lustre file system.
> Running MDS and OSS on the same servers.
> Failover wasn't configured at format time.
> I'm trying to configure failover node with tunefs without success.
> tunefs.lustre --writeconf --erase-
Ah, yes, thank you.
The goodies of systemd!
Regards,
Thomas
On 08/13/2018 02:10 PM, Julio Pedraza wrote:
>
> Hi,
>
> As you are on CentOS 7, maybe try going through:
>
> # journalctl | grep -E "Lustre|LustreErrors|LNet|LDISKFS|ustre"
>
> and see if you get what you need
>
> regards,
> J.
>
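(A slightly tighter variant of the suggestion above, using only standard journalctl/grep flags: Lustre, LNet, and LDISKFS log through the kernel, so restricting the journal to kernel messages with -k cuts the noise.)

```shell
# Kernel-ring messages only (-k), no pager, filtered for Lustre/LNet/LDISKFS.
# The pattern mirrors the one suggested above; all flags are stock
# journalctl/grep options.
journalctl -k --no-pager | grep -E 'Lustre|LNet|LDISKFS' || echo "no matching entries"
```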
On 08/13/2018 02:00 PM, Thomas Roth wrote:
Hi all,
we have this rather rare phenomenon of too few Lustre log entries, it would
seem.
This is a cluster running Lustre 2.10.4 on CentOS 7.4.
I do not think I have done anything to deviate from the defaults, neither in
the Lustre nor in the rsyslogd configuration.
However,
# dmesg | grep LustreError | wc -
Hi,
I installed a new 2.10.4 Lustre file system.
Running MDS and OSS on the same servers.
Failover wasn't configured at format time.
I'm trying to configure failover node with tunefs without success.
tunefs.lustre --writeconf --erase-params --param="ost.quota_type=ug"
--mgsnode=oss03@tcp --mgsnode=o
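(For reference, the usual way to declare failover servers on an existing target is tunefs.lustre's --servicenode option, which lists every node allowed to serve the target. The sketch below uses placeholder NIDs and a placeholder device; substitute your own before running.)

```shell
# Sketch with placeholder values: oss01@tcp / oss03@tcp and /dev/sdX are
# stand-ins for your real NIDs and target device.
# --servicenode names each node that may serve this target;
# --writeconf regenerates the config logs so the new NIDs take effect
# (run on an unmounted target, then remount).
tunefs.lustre --writeconf \
    --servicenode=oss01@tcp \
    --servicenode=oss03@tcp \
    /dev/sdX
```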