Hi,
We are mounting a 2.12.2 server's filesystem on a 2.12.3 client.
Both have @o2ib and @tcp networks; however, there is no link between
them on @o2ib.
We are therefore mounting on @tcp, have network=tcp0 in the fstab, and
have switched discovery off.
This works, metadata can be read and created, and
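For reference, a minimal sketch of that client-side setup. The MGS NID, mount point, and device are assumptions, not values from this thread:

```shell
# Hypothetical /etc/fstab entry restricting the mount to @tcp:
#   10.0.0.1@tcp0:/lustre  /mnt/lustre  lustre  network=tcp0,_netdev  0 0

# Switch dynamic peer discovery off on the client (LNet must be up),
# so only the configured @tcp NIDs are used:
lnetctl set discovery 0
lnetctl global show | grep discovery
```

This is a sketch of the configuration described above, not a tested recipe.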
Hi,
I wish to perform a file-system-level backup of a zfs backend. The Lustre
manual says to do:
lctl set_param osd-zfs.${fsname}-${target}.index_backup=1
Does this have to be done right at the start, before the file system has
been used, or can it be done on a live file system with data already on
it?
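For a concrete expansion of that parameter name — the fsname and target below are hypothetical, not values from this thread; on a real system take them from `lctl dl` or the target's mount point:

```shell
# Hypothetical fsname and target index.
fsname="testfs"
target="OST0000"
param="osd-zfs.${fsname}-${target}.index_backup"
echo "${param}"    # osd-zfs.testfs-OST0000.index_backup
# On the live server this would be:  lctl set_param ${param}=1
```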
> Stephane
>
>> On Nov 20, 2019, at 7:29 AM, BASDEN, ALASTAIR G. wrote:
Hi,
We have a new 2.12.2 system and are seeing fairly frequent lockups on the
primary MDS. We get messages such as:
Nov 20 14:24:12 c6mds1 kernel: LustreError:
38853:0:(ldlm_lockd.c:256:expired_lock_main()) ### lock callback timer
expired after 150s: evicting client at 172.18.122.165@o2ib n
/lustre/test-ost0
...
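When triaging a run of these evictions, it can help to pull the evicted client NID out of each log line; a small sed sketch (the sample line is abbreviated from the message above):

```shell
# Extract the evicted client's NID from an expired-lock-timer message.
line='LustreError: 38853:0:(ldlm_lockd.c:256:expired_lock_main()) ### lock callback timer expired after 150s: evicting client at 172.18.122.165@o2ib'
nid=$(printf '%s\n' "$line" | sed -n 's/.*evicting client at \([^ ]*\).*/\1/p')
echo "$nid"    # 172.18.122.165@o2ib
```

The same pattern can be fed the whole syslog (e.g. via `grep 'evicting client'`) to count evictions per client.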
Thanks for the responses.
Cheers,
Alastair.
On Thu, 25 Jul 2019, BASDEN, ALASTAIR G. wrote:
Hi,
I am trying to bring up a new zfs backend file system.
CentOS 7.4, Lustre 2.10.3, zfs 0.7.12.
I do the following:
zpool create -O canmount=off -o cachefile=none test-ost0 raidz2
mkfs.lustre --fsname=test --ost --backfstype=zfs --index=0 --mgsnode=nid1
test-ost0/ost0
This seems to work
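Spelled out with hypothetical member disks and MGS NID (a raidz2 vdev needs a list of disks, which the command above omits), the sequence would look something like:

```shell
# Hypothetical disks and MGS NID -- substitute your own.
zpool create -O canmount=off -o cachefile=none test-ost0 \
    raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.lustre --fsname=test --ost --backfstype=zfs --index=0 \
    --mgsnode=10.0.0.1@tcp test-ost0/ost0
mount -t lustre test-ost0/ost0 /mnt/lustre/test-ost0
```

This is a sketch under those assumptions, not the exact commands from the original message.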