Hi,
We are running Lustre 2.12.2 and are seeing lots of errors on the servers,
such as:
Oct 5 11:16:48 oss04 kernel: LNetError:
6414:0:(lib-move.c:2955:lnet_resend_pending_msgs_locked()) Error sending PUT to
12345-172.19.171.15@o2ib1: -125
Oct 5 11:16:48 oss04 kernel: LustreError:
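For reference, -125 is -ECANCELED on Linux, which in this resend path seems
to mean the queued message was aborted rather than failing on the wire. The
errno mapping is easy to confirm:

    python3 -c 'import errno, os; print(errno.errorcode[125], os.strerror(125))'
    # prints: ECANCELED Operation canceled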
Hi,
We want to change the service node of an OST. We think this involves:
1. umount the OST
2. tunefs.lustre --erase-param failover.node
--servicenode=172.18.100.1@o2ib,172.17.100.1@tcp pool1/ost1
Is this all? It is unclear from the documentation whether a writeconf is
required (if it is, then
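In case it helps anyone answering, the full sequence we have sketched out,
assuming a writeconf does turn out to be necessary, is below (the mount
point is a placeholder, the NIDs and device are from step 2 above, and the
remount ordering is the manual's general writeconf procedure, not something
we have tested):

    umount /mnt/ost1    # placeholder mount point
    tunefs.lustre --erase-param failover.node \
        --servicenode=172.18.100.1@o2ib,172.17.100.1@tcp \
        --writeconf pool1/ost1
    # after a writeconf: remount the MGS/MDT first, then OSTs, then clients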
Hi Francois,
We had something similar a few months back - I suspect a bug somewhere.
Basically, files weren't getting removed from the OST. Eventually, we
mounted the OST directly as ldiskfs and removed them manually, I think.
Restarting the file system meant that rm operations then proceeded
correctly after
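If memory serves, the manual removal was along these lines (a sketch only:
the device, mount point, and object ID are placeholders, it assumes an
ldiskfs OST, and the target has to be unmounted from Lustre first):

    mount -t ldiskfs /dev/sdX /mnt/ost-ldiskfs
    # OST objects live under O/<sequence>/d<objid mod 32>/<objid>
    rm /mnt/ost-ldiskfs/O/0/d7/123456007    # example object only
    umount /mnt/ost-ldiskfs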
Hi all,
We had a problem with one of our MDSes (ldiskfs) on Lustre 2.12.6, which we
think is a bug, but we haven't been able to identify it. Can anyone shed
any light? We unmounted and remounted the MDT at around 23:00.
Client logs:
May 16 22:15:41 m8011 kernel: LustreError: 11-0:
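In case it is useful context, recovery and client import state after the
remount can be checked with something like this (from memory; parameter
paths may vary between versions):

    # on the MDS
    lctl get_param mdt.*.recovery_status
    # on a client
    lctl get_param mdc.*.import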
Hi,
We have OSDs on ZFS (0.7.9) / Lustre 2.12.6.
Recently, one of our JBODs had a wobble, and the disks (as presented to
the OS) disappeared for a few seconds (and then returned).
This upset a few zpools, which became SUSPENDED.
A zpool clear on these then started the resilvering process, and zpool
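The recovery steps, for completeness, were the stock ones (the pool name is
a placeholder):

    zpool status -v tank0    # state: SUSPENDED, plus any listed data errors
    zpool clear tank0        # resume I/O; resilvering kicked off from here
    zpool status tank0       # watch the resilver progress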
Hi all,
Thanks for the replies. The issue, as I see it, is with sending data from
an OST to the client while avoiding the inter-CPU link.
So, if I have:
cpu1 - IB card 1 (10.0.0.1), nvme1 (OST1)
cpu2 - IB card 2 (10.0.0.2), nvme2 (OST2)
Both IB cards on the same subnet. Therefore, by default,
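What I plan to try is binding each NI to the CPU partition nearest its card,
so LNet prefers the local interface (a sketch; it assumes two CPTs mapped
1:1 to the sockets, and o2ib is just our net name):

    lnetctl net add --net o2ib --if ib0 --cpt "[0]"
    lnetctl net add --net o2ib --if ib1 --cpt "[1]"
    lnetctl net show -v    # verify the per-NI CPT bindings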
Hi,
We are installing some new Lustre servers with two InfiniBand cards, one
attached to each CPU socket. Storage is NVMe, again with some drives
attached to each socket.
We want to ensure that data to/from each drive uses the appropriate IB
card, and doesn't need to travel through the inter-cpu
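Our current thinking is to line Lustre's CPU partitions (CPTs) up with the
two sockets via the libcfs module options, then bind each NI to its local
CPT with lnetctl (per-NI --cpt, as sketched earlier in this digest). A
sketch, where the core ranges are placeholders for whatever each socket
actually holds:

    # /etc/modprobe.d/lustre.conf
    # two CPTs, one per socket; core lists are examples
    options libcfs cpu_pattern="0[0-15] 1[16-31]"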