Hello,
I am trying to build Lustre client 2.12.9 on RHEL 8:
rpmbuild --rebuild --without servers
/root/rpmbuild/SRPMS/lustre-client-dkms-2.12.9-1.el8.src.rpm
configure: error:
You seem to have an OFED installed but have not installed its devel
package.
If you still want to build Lustre for yo
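A plausible first check before rebuilding (a sketch; package names vary by OFED distribution and are assumptions here):

# Check which OFED kernel packages are present (Mellanox OFED typically
# ships mlnx-ofa_kernel plus a matching -devel package)
rpm -qa | grep -i -e ofa_kernel -e ofed
# If the devel/headers package is missing, install it from the same OFED
# release, e.g. (hypothetical package name for MLNX_OFED):
yum install mlnx-ofa_kernel-devel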
If you look in the mentioned thread, I wrote a patch for it. It was a long
time ago; I thought somebody might have fixed it already.
Here is my post about this issue
http://lists.lustre.org/pipermail/lustre-discuss-lustre.org/2021-October/017813.html
> On May 3, 2022, at 5:16 PM, Mannthey, Keith
it is a more general approach.
On 10/21/21 11:38 AM, Franke, Knut wrote:
Hi,
On Wednesday, 2021-10-13 at 16:06 -0700, Riccardo Veraldi wrote:
This is my patch to make things work and build the lustre-dkms RPM
Thank you! I just ran into the exact same problem. Two comments on the
patch
Yes, same problem for me. I addressed this a few weeks ago and I think I
reported it to the mailing list.
This is my patch to make things work and build the lustre-dkms RPM:
diff -ru lustre-2.12.7/lustre-dkms_pre-build.sh
lustre-2.12.7-dkms-pcds/lustre-dkms_pre-build.sh
--- lustre-2.12.7/lustre-d
Hello,
I wanted to ask for some hints on how I might increase single-process
sequential write performance on Lustre.
I am using Lustre 2.12.7 on RHEL 7.9
I have a number of OSSes with SAS SSDs in raidz: 3 OSTs per OSS, and each
OST is made of 8 SSDs in raidz.
On a local test with multiple writes I
Thiell wrote:
Hi Riccardo,
I would check if the OSTs on this OSS have been registered with the correct
NIDs (o2ib1) on the MGS:
$ lctl --device MGS llog_print -client
and look for the NIDs in setup/add_conn for the OSTs in question.
Best,
Stephane
On Sep 28, 2021, at 9:52 AM, Riccardo Veraldi
Hello.
I have a lustre setup where the MDS (172.21.156.112) is on tcp1 while
the OSSes are on o2ib1.
I am using Lustre 2.12.7 on RHEL 7.9
All the clients can see the MDS correctly as a tcp1 peer:
peer:
    - primary nid: 172.21.156.112@tcp1
      Multi-Rail: True
      peer ni:
        - ni
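For the llog check suggested above, a hedged example of what it might look like (the fsname lustre0 is an assumption; substitute your own):

# On the MGS: dump the client configuration llog and look for the NIDs
# recorded in the setup/add_conn records of the suspect OSTs
lctl --device MGS llog_print lustre0-client | grep -E 'add_conn|setup'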
s kernel
(3.10.0-1160.2.1.el7.x86_64).
I symlinked those directories in 3.10.0-1160.25.1.el7.x86_64/extra to
3.10.0-1160.42.2.el7.x86_64/extra and mounted my OSTs.
= Dirty workaround, if you quickly need 2.12.7, but of course not
sustainable - at some point the kernel version will have deviated
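A sketch of that workaround, using the kernel versions from the thread (it bypasses DKMS and will break once the module ABI diverges):

# Make the modules built for the old kernel visible to the new one
ln -s /lib/modules/3.10.0-1160.25.1.el7.x86_64/extra \
      /lib/modules/3.10.0-1160.42.2.el7.x86_64/extra
depmod -a 3.10.0-1160.42.2.el7.x86_64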
Hello,
I have not been successful installing Lustre 2.12.7; I run into a problem
with DKMS on RHEL 7.9,
kernel 3.10.0-1160.42.2.el7.x86_64.
I am using the RPMs from
https://downloads.whamcloud.com/public/lustre/lustre-2.12.7/el7.9.2009/server/RPMS/x86_64/
the Lustre DKMS module fails to build; it seems like
Hello,
I am about to deploy a new Lustre 2.12.7 system.
Which ZoL version should I choose for my Lustre/ZFS system?
0.7.13, 0.8.6, 2.0.5, 2.1.0?
Thanks
Rick
ts way back to the
TCP route.
From: Riccardo Veraldi
Date: Monday, September 13, 2021 at 3:16 PM
To: "Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.]",
"lustre-discuss@lists.lustre.org"
Subject: [EXTERNAL] Re: [lustre-discuss] Disabling multi-rail
dynamic dis
I would use the configuration in /etc/lnet.conf and no longer use the
older-style configuration in
/etc/modprobe.d/lustre.conf.
For example, in my /etc/lnet.conf I have:
ip2nets:
  - net-spec: o2ib
    interfaces:
        0: ib0
  - net-spec: tcp
    interfaces:
        0: enp24s0f0
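A configuration like this can be loaded and verified with lnetctl (a minimal sketch; the lnet systemd service normally imports /etc/lnet.conf at boot):

lnetctl import < /etc/lnet.conf   # load the YAML configuration
lnetctl net show                  # verify the configured networks/interfaces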
Hello,
I wanted to ask if anybody has experience running the MDS as a virtual
machine while the OSSes are physical machines. The benefit would be a kind
of intrinsic high availability if the underlying
hypervisor/storage infrastructure is an HA cluster, but I was wondering
what do you think
Hello,
I wanted to ask if anybody is using Lustre as an FS backend for virtual
machines. I am thinking of environments like OpenStack or oVirt,
where VMs live inside a single qcow2 file, basically using libvirt to
access the underlying filesystem where the VMs are stored.
Is anyone using Lustre for this an
In my experience multipathd+ZFS works well, and it has usually worked well.
I just remove the broken disk when it fails, replace it, the new
multipathd device is added once the disk is replaced, and then I
start resilvering.
Anyway, I found out this does not always work with some versions of JBOD
It was not clear to me that you were using a 16K block size; from your
email it seemed like you were writing 16K files.
A 16K block size is really tiny. The default ZFS record size is 128K, but
if you want to get performance out of Lustre/ZFS for big files you should
use a 1M block size.
Anyway there are a bunch of o
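A minimal example of that recommendation (pool/dataset names are placeholders):

# Set a 1M record size on the OST dataset; recordsize only affects
# newly written files
zfs set recordsize=1M ostpool/ost0
zfs get recordsize ostpool/ost0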
Are you testing this in a virtual machine environment?
If you are aiming for Lustre performance you should not run virtual
machines, especially on the OSS side.
Also, how many clients are you using for reading/writing? To get
performance out of Lustre or any other parallel filesystem, you need to
2.10 LTS. Automatically done by Lustre for 2.11+)
From: lustre-discuss on behalf of
"Carlson, Timothy S"
Date: Wednesday, March 13, 2019 at 23:07
To: Riccardo Veraldi, Kurt Strosahl,
"lustre-discuss@lists.lustre.org"
Subject: Re: [lustre-discuss] ZFS tuning for MDT/MGS
These are the ZFS settings I use on my MDSes:
zfs set mountpoint=none mdt0
zfs set sync=disabled mdt0
zfs set atime=off mdt0
zfs set redundant_metadata=most mdt0
zfs set xattr=sa mdt0
If your MDT partition is on a 4KB-sector disk then you can use
ashift=12 when you create the filesystem b
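A sketch of the ashift advice (device names are placeholders; ashift can only be set at pool creation):

# ashift=12 aligns the pool to 4KiB sectors
zpool create -o ashift=12 mdt0 mirror /dev/sda /dev/sdb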
Let me know if disabling discovery on your 2.12 clients works.
Yes, after disabling discovery on the client side the situation is much
better.
Thank you very much.
thanks
amir
On Tue, 5 Mar 2019 at 18:49, Riccardo Veraldi
mailto:riccardo.vera...@cnaf.infn.it>>
wrote:
Hello Amir i ans
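For reference, disabling LNet dynamic discovery as discussed above is a per-node lnetctl setting (a minimal sketch):

lnetctl set discovery 0   # stop this node from dynamically discovering peers
lnetctl global show       # confirm discovery is now 0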
g on the same 2.12.0 OSS
Do that as part of the initial bring up to make sure 2.12 nodes don't
try to discover peers. Let me know if that resolves your issue?
On Tue, 5 Mar 2019 at 15:09, Riccardo Veraldi
mailto:riccardo.vera...@cnaf.infn.it>>
wrote:
it is not exactly this
erfaces. But then it should realize that it can only reach it on
the tcp network, since that's the only network configured on the MDS.
It might help if you configure just LNet on the MDS and the
peer and run a simple
lctl set_param debug=+"net neterror"
lne
doing, if you were using any new features (like DoM or FLR), and full
dmesg from the clients and servers involved in these evictions.
- Patrick
On 3/5/19, 11:50 AM, "lustre-discuss on behalf of Riccardo Veraldi"
wrote:
Hello,
I have quite a big issue on my L
Hello,
I have quite a big issue on my Lustre 2.12.0 MDS/MDT.
Clients moving data to the OSS run into a locking problem I have never met
before.
The clients are mostly 2.10.5, except for one which is 2.12.0, but
regardless of the client version the problem is still there.
So these are the errors I
Hello,
Yesterday I upgraded one of my filesystems from Lustre 2.10.5 to Lustre
2.12.0;
everything apparently went well. I also upgraded from ZFS 0.7.9 to ZFS
0.7.12.
I also have another cluster with a clean 2.12.0 install, and it works
well and performs well.
Today, after yesterday's upgr
Hello,
I am planning a Lustre upgrade from 2.10.5/ZFS to 2.12.0/ZFS.
Any particular caveat on this procedure?
Can I simply upgrade the Lustre package and mount the filesystem?
thank you
Rick
Hello,
I have a new Lustre 2.12.0 cluster with 8 OSSes and 24 clients on RHEL 7.6.
I have a problem with LNet behavior,
even though I configured lnet.conf this way on every client:
ip2nets:
  - net-spec: o2ib0
    interfaces:
        0: ib0
When I check the LNet status on each client, sometimes
I am using Lustre 2.12.0 and it seems to be working pretty well; anyway, I
built it against ZFS 0.7.12 libraries... was that a mistake?
Which ZFS release is Lustre 2.12.0 built/tested against?
On 2/22/19 12:18 PM, Peter Jones wrote:
Nathan
Yes 2.12 is an LTS branch. We’re planning on putting out
Hello,
I have to build a bunch of new big Lustre filesystems.
I was wondering if I should go for 2.12.0, so that it will be simpler in
the future to keep it up to date within the 2.12.* family, or if it is
better to opt for 2.10.6 now and upgrade to 2.12.* in the future.
Any hints?
thank you
Rick
> did and lctl net down, a lustre_rmmod, and then systemctl restart
> lnet. things seemed to work after that. seems a strange failure
> scenario though
>
> i can't mount the filesystem still, but i think that's a separate issue
>
> On Fri, Feb 1, 2019 at 3:29
Did you install yaml
and yaml-devel ?
>
> On Feb 1, 2019 at 12:20 PM, (mdidomeni...@gmail.com) wrote:
>
>
>
> i'm trying to start an lnet client, but lnet kicks out the config with
> a yaml error
>
> yaml:
>   - builder:
>       errno: -1
>       descr: failed t
. Otherwise from a single client
I cannot reach more than 5 GB/s; it is not clear to me why.
On 11/8/18 4:58 AM, Martin Hecht wrote:
On 11/7/18 9:44 PM, Riccardo Veraldi wrote:
Anyway, I was wondering if something different is needed for mlx5, and
what the suggested values are in that case?
Anyone has
Hello,
I set a bunch of params from the MDS so that they can be picked up by
the Lustre clients:
lctl set_param -P osc.*.checksums=0
lctl set_param -P timeout=600
lctl set_param -P at_min=250
lctl set_param -P at_max=600
lctl set_param -P ldlm.namespaces.*.lru_size=2000
lctl set_param -P osc
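Since set_param -P records the values on the MGS and distributes them to all nodes, one way to verify from a client is (a sketch):

# On a client, confirm the pushed values took effect
lctl get_param osc.*.checksums
lctl get_param timeout at_min at_max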
Hello,
I found that, with FDR InfiniBand, if I set the ko2iblnd parameters as
below I get quite an increase in performance using mlx4:
options ko2iblnd timeout=100 peer_credits=63 credits=2560
concurrent_sends=63 fmr_pool_size=1280 fmr_flush_trigger=1024 ntx=5120
These suggested values come
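To make such settings persistent they would normally go in a modprobe.d file, so they apply whenever ko2iblnd loads (a sketch reusing the values from the post):

cat > /etc/modprobe.d/ko2iblnd.conf <<'EOF'
options ko2iblnd timeout=100 peer_credits=63 credits=2560 \
    concurrent_sends=63 fmr_pool_size=1280 fmr_flush_trigger=1024 ntx=5120
EOF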
ve to seek out the various tuning parameters, and instead get good
performance out of the box.
A few comments inline...
On Oct 19, 2018, at 17:52, Riccardo Veraldi
wrote:
On 10/19/18 12:37 PM, Mohr Jr, Richard Frank (Rick Mohr) wrote:
On Oct 17, 2018, at 7:30 PM, Riccardo Veraldi
wrot
/2018 01:05 PM, Riccardo Veraldi wrote:
Hello,
I have quite a critical problem.
One of my OSSes goes into a kernel panic when trying to mount the OSTs.
After mounting 11 of the 12 OSTs it goes into kernel panic,
no matter the order in which they are mounted.
Any clue on
I could mount the OSTs; the only way, though, was to mount with
abort_recov, thanks to this old ticket:
https://jira.whamcloud.com/browse/LU-5040
On 10/30/18 5:05 AM, Riccardo Veraldi wrote:
Hello,
I have quite a critical problem.
One of my OSSes goes into a kernel panic when trying to
Hello,
I have quite a critical problem.
One of my OSSes goes into a kernel panic when trying to mount the OSTs.
After mounting 11 of the 12 OSTs it goes into kernel panic,
no matter the order in which they are mounted.
Any clues or hints?
I cannot really recover it and
?
thanks
Rick
On 8/23/18 2:40 PM, Mohr Jr, Richard Frank (Rick Mohr) wrote:
On Aug 22, 2018, at 8:10 PM, Riccardo Veraldi
wrote:
On 8/22/18 3:13 PM, Mohr Jr, Richard Frank (Rick Mohr) wrote:
On Aug 22, 2018, at 3:31 PM, Riccardo Veraldi
wrote:
I would like to migrate this virtual machine to
On 10/19/18 12:37 PM, Mohr Jr, Richard Frank (Rick Mohr) wrote:
On Oct 17, 2018, at 7:30 PM, Riccardo Veraldi
wrote:
Anyway, especially regarding the OSSes, you may eventually need some ZFS
module parameter optimizations, raising the vdev_write and vdev_read max
values higher
On 10/17/18 1:20 PM, Kurt Strosahl wrote:
Good Afternoon,
I believe 2.10.* is the long-term support branch.
I am happy with 2.10.5 on my standard-performance cluster, but for a
very high-performance cluster I built 5 months ago, where 6 GB/s per OSS
was required in read and write transfers, I
On 10/16/18 5:54 AM, Peter Jones wrote:
Have you tried running 2.10.5 on a RHEL6 client?
Hello, sorry, I have not done that, but I think it is my next option.
On 2018-10-16, 2:13 AM, "lustre-discuss on behalf of Riccardo Veraldi"
wrote:
On 10/16/18 2:08 AM, George Mel
Melikov,
Tel. 7-915-278-39-36
Skype: georgemelikov
16.10.2018, 12:03, "Riccardo Veraldi" :
On 10/15/18 4:59 PM, Alexander I Kulyavtsev wrote:
You can do a quick check with 2.10.5 client by mounting lustre on MDS if you
do not have free node to install 2.10.5 client.
Do you have lnet
slow management network?
Alex.
Hi, my LNet is configured with both IB and 10GbE. It is using IB, I
verified it, and anyway performance is very slow even if it were just
using tcp on 10GbE,
since I only get 2 MB/s.
thanks
On 10/15/18, 6:41 PM, "lustre-discuss on behalf of Riccardo Ve
Hello,
I have a new Lustre FS, version 2.10.5: 18 OSTs of 18TB each on 3 OSSes.
I noticed very slow performance, a couple of MB/s, when RHEL6 Lustre
2.8.0 clients are writing to the filesystem.
Could it be a Lustre version problem, server vs. client?
I have no errors on either the server or client si
I suppose you mean 2.10.*.
If you use the lustre-client-dkms RPM (or build it from sources), the only
thing you need to do on your clients is to remove the current
lustre-client-dkms package
and reinstall it after you upgrade the kernel. This way the Lustre
modules will be automatical
Hello,
I have always been using a ZFS record size of 1MB with Lustre on top of
ZFS. Anyway, it is possible to set it to more than 1MB. Might this have
any benefit in performance, and is it recommended, to set for example:
echo 4194304 > /sys/module/zfs/parameters/zfs_max_recordsize
zfs set records
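The truncated command presumably continues along these lines (dataset name is a placeholder):

# Raise the module cap first, then the per-dataset record size
echo 4194304 > /sys/module/zfs/parameters/zfs_max_recordsize
zfs set recordsize=4M ostpool/ost0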
As for me, Lustre 2.10.5 does not build against ZFS 0.7.10;
of course it builds fine with ZFS 0.7.9.
CC: gcc
LD: /usr/bin/ld -m elf_x86_64
CPPFLAGS: -include /root/rpmbuild/BUILD/lustre-2.10.5/undef.h
-include /root/rpmbuild/BUILD/lustre-2.10.5/config.h
-I/root/rpmbuild/BUIL
Hello,
I am running Lustre 2.10.5 on RHEL 7.5 using kernel 4.4.157 from elrepo.
Everything seems to be working fine. I ask whether anyone else is running
a 4.x kernel on CentOS with Lustre, and if this configuration is
unsupported or not recommended for some reason.
I had a hard time with the
Hello,
I wanted to ask for some clarification on ldev.conf usage and features.
I am using ldev.conf only on my ZFS Lustre OSSes and MDS.
Anyway, I have a doubt about what should go in that file.
I have seen people with only the metadata configuration in it, for
example:
mds01 - mgs zfs:lustre0
Here is the reason; it's a CentOS 7.5 kernel bug:
https://bugs.centos.org/view.php?id=15193
On 9/10/18 11:05 PM, Riccardo Veraldi wrote:
Hello,
I installed a new Lustre system where the MDS and OSSes are version 2.10.5;
the Lustre clients are running 2.10.1 and 2.9.0.
When I try to moun
Hello,
I installed a new Lustre system where the MDS and OSSes are version 2.10.5;
the Lustre clients are running 2.10.1 and 2.9.0.
When I try to mount the filesystem it fails with these errors:
OSS:
Sep 10 22:39:46 psananehoss01 kernel: LNetError:
10055:0:(o2iblnd_cb.c:2513:kiblnd_passive_connec
Lustre 2.10.5:
it seems that lustre-resource-agents has a dependency problem.
yum localinstall -y lustre-resource-agents-2.10.5-1.el7.x86_64.rpm
Loaded plugins: langpacks
Examining lustre-resource-agents-2.10.5-1.el7.x86_64.rpm:
lustre-resource-agents-2.10.5-1.el7.x86_64
Marking lustre-resourc
Hello,
I have a virtual machine running on oVirt which is an MDS. I have an mgs
and an mdt partition:
ffb11-mgs/mgs    100283136    4096  100276992   1% /lustre/local/mgs
ffb11-mdt0/mdt0  100269952  558976   99708928   1% /lustre/local/mdt0
This got fixed; the problem was a corrupt ldev.conf file.
On 7/31/18 12:41 AM, Riccardo Veraldi wrote:
Hello,
my Lustre server 2.9.0 suddenly won't mount Lustre partitions anymore
after a power outage.
The ZFS pool is active (resilvering one disk). Anyway, systemctl start
lustre is not worki
Hello,
my Lustre server 2.9.0 suddenly won't mount Lustre partitions anymore
after a power outage.
The ZFS pool is active (resilvering one disk). Anyway, systemctl start
lustre is not working.
I do not see any error message; it just does not mount my Lustre OSS
partitions.
NAME USED
>
> Regards.
>
>
> Fernando Pérez
> Institut de Ciències del Mar (CMIMA-CSIC)
> Departament Oceanografía Física i Tecnològica
> Passeig Marítim de la Barceloneta,37-49
> 08003 Barcelona
> Phone: (+34) 93 230 96 35
>
Hello,
after a power outage I had one of my OSTs (out of 60 total) in an unhappy
state. Lustre version 2.4.1.
I then ran an FS check, which follows:
e2fsck 1.42.7.wc1 (12-Apr-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivit
lient). From your example:
>
> lctl set_param -P osc.*.checksums=0
>
> Will execute “set_param osc.*.checksums=0” on all targets.
>
> Best regards,
> Artem Blagodarenko.
>
>> On 23 May 2018, at 00:11, Riccardo Veraldi
>> wrote:
>>
>> Hello,
>>
>
On 5/22/18 11:44 PM, Dilger, Andreas wrote:
> On May 22, 2018, at 15:15, Riccardo Veraldi
> wrote:
>> hello,
>>
>> how to set ptrlpcd parameters at boot time ?
>>
>> instead of
>>
>> echo 32 > /sys/module/ptlrpc/parameters/max
Hello,
how do I set the ptlrpcd parameters at boot time?
instead of
echo 32 > /sys/module/ptlrpc/parameters/max_ptlrpcds
echo 3 > /sys/module/ptlrpc/parameters/ptlrpcd_bind_policy
I tried to load them from /etc/modprobe.d/ptlrpc.conf:
options ptlrpcd max_ptlrpcds=32
options ptlrpcd ptlrpcd_bind_policy=3
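Since the sysfs path above lives under /sys/module/ptlrpc/, the modprobe.d options lines should presumably name the ptlrpc module rather than ptlrpcd — a guess from the path, not confirmed in the thread:

# /etc/modprobe.d/ptlrpc.conf -- module name assumed from /sys/module/ptlrpc
options ptlrpc max_ptlrpcds=32
options ptlrpc ptlrpcd_bind_policy=3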
Hello,
how do I set_param in a persistent way on the Lustre client side, so that
it does not have to be set again after every reboot?
Not all of these parameters can be set on the MDS, for example the osc.* ones:
lctl set_param osc.*.checksums=0
lctl set_param timeout=600
lctl set_param at_min=250
lctl set
Hello,
So far I have not been able to solve this problem on my Lustre setup.
I can reach very good performance with multi-threaded writes or reads,
that is, sequential writes and sequential reads at different times.
I can saturate InfiniBand FDR capabilities, reaching 6 GB/s.
The problem arises when, while wr
Andreas
>
>> On May 8, 2018, at 18:24, Riccardo Veraldi
>> wrote:
>>
>> Hello,
>> on my Lustre 2.11.0 tuning testbed I cannot find /proc/sys/lnet anymore;
>> it was very handy to look at /proc/sys/lnet/peers and /proc/sys/
Hello,
I have problems with my LNet configuration on Lustre 2.11.0.
Everything starts just fine, but after a while LNet auto-discovers peers
and adds the tcp network interface of my OSSes and clients,
so clients start to write to the Lustre partition using tcp and no longer
o2ib.
I use and need tcp
Hello,
on my Lustre 2.11.0 tuning testbed I cannot find /proc/sys/lnet anymore;
it was very handy to look at /proc/sys/lnet/peers and /proc/sys/lnet/nis.
Has this been moved somewhere else?
thank you
Rick
eading at the same time to/from the same file ?
the timeouts are really huge.
thanks
Rick
On 4/25/18 2:49 PM, Riccardo Veraldi wrote:
> Hello,
> I am having quite a serious problem with the lock manager.
> First of all we are using Lustre 2.10.3 both on server and client side
> on
Hello,
I am having quite a serious problem with the lock manager.
First of all, we are using Lustre 2.10.3 on both the server and client
side, on RHEL 7.
The only difference between servers and clients is that the Lustre OSSes
have kernel 4.4.126 while the clients have the stock RHEL 7 kernel.
We have NVMe disks on th
Hello,
just wondering whether those using Lustre for home directories with
several users are happy or not.
I am considering moving home directories from NFS to Lustre/ZFS;
it is quite easy to get the NFS server into trouble with just a few
users copying files around.
What special tuning is needed to op
on x.y.z)
Yes, I was using 2.9.59 because it fixed a data corruption bug.
thanks
>
>
>
> On 2018-04-23, 7:08 PM, "lustre-discuss on behalf of Riccardo Veraldi"
> riccardo.vera...@cnaf.infn.it> wrote:
>
>> I tried older client 2.9.59 and still have the
I tried the older 2.9.59 client and still have the same problem.
I think there may be a problem with RHEL 7.5.
Is anyone using the Lustre 2.10.* client on RHEL 7.5?
thank you
Rick
On 4/23/18 6:39 PM, Riccardo Veraldi wrote:
> Hello,
>
> I upgraded some of my clients to RHEL75 and Lustre 2.10.3
Hello,
I upgraded some of my clients to RHEL 7.5 and Lustre 2.10.3.
Now I can mount all my Lustre filesystems, which are 2.9.0 and older (down
to 2.4), but I cannot see the directories:
ls: reading directory /lfs01: Not a directory
total 0
On another client with Lustre 2.10.1 I can mount and use the files
I figured out the problem: it was because of a messed-up MGS partition on
my MDS.
thanks
On 4/19/18 7:18 PM, Riccardo Veraldi wrote:
> Hello,
> I have on my OSSes and on my clients the lnet configuration loaded at
> boot time from lnet.conf.
> I define local interfaces and peers.
>
Hello,
I have on my OSSes and on my clients the LNet configuration loaded at
boot time from lnet.conf,
where I define local interfaces and peers.
What happens is that when the Lustre filesystems are mounted by the
clients, LNet is modified on both the client and OSS side, and tcp peers
are added at the end
This is what I do:
lnetctl export > /etc/lnet.conf
systemctl enable lnet
On 4/17/18 1:37 PM, Kurt Strosahl wrote:
> I configured an LNet router today with Lustre 2.10.3 as the Lustre software.
> I then configured the LNet router using the following lnetctl commands:
>
>
> lnetctl lnet config
k both ways.
>
> I'm a bit puzzled by the last observation. I expected that both ends
> needed to define peers? The client NID does not show as multi-rail
> (lnetctl peer show) on the server.
>
> Cheers,
> Hans Henrik
>
> On 14-03-2018 03:00, Riccardo Veraldi wr
drops
dramatically, especially when using Lustre on raidz.
So I was wondering if there is any RPC parameter that I need to
set to get better performance out of Lustre?
thank you
On 4/9/18 4:15 PM, Dilger, Andreas wrote:
> On Apr 6, 2018, at 23:04, Riccardo Veraldi
> wrote:
ts.lustre.org> on behalf of
> Jones, Peter A <peter.a.jo...@intel.com>
> Sent: Monday, April 9, 2018 7:24:52 AM
> To: Dilger, Andreas; Riccardo Veraldi
> Cc: lustre-discuss@lists.lustre.org
the
>> configure/build step for Lustre with that kernel, and then check Jira/Gerrit
>> for tickets for each build failure you hit.
>>
>> It may be that there are some unlanded patches that can get you a running
>> client.
>>
>> Cheers, Andreas
>>
Hello,
if I wanted to use a 4.x kernel from elrepo on RHEL 7.4 for the Lustre
OSSes, what is the latest kernel 4 version supported by Lustre?
thank you
Rick
So I have been struggling for months with these low performances on
Lustre/ZFS, and I am looking for hints.
3 OSSes, RHEL 7.4, Lustre 2.10.3 and ZFS 0.7.6;
each OSS has one raidz OST.
pool: drpffb-ost01
state: ONLINE
scan: none requested
trim: completed on Fri Apr 6 21:53:04 2018 (after 0h3m)
config:
It works for me, but you have to set up lnet.conf correctly, either
manually or using lnetctl to add peers. Then you export your
configuration to lnet.conf
and it will be loaded at reboot. I had to add my peers manually; I think
peer auto-discovery is not yet operational in 2.10.3.
I suppose you are
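Adding a peer manually with lnetctl looks roughly like this (NIDs are placeholders borrowed from elsewhere in the thread):

lnetctl peer add --prim_nid 172.21.52.86@o2ib --nid 172.21.52.118@o2ib
lnetctl export > /etc/lnet.conf   # persist the result for the next boot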
> Bruno.
>
>
>> On Feb 21, 2018, at 3:11 AM, Riccardo Veraldi
>> wrote:
>>
>> Hello.
>>
>> I have problems installing the lustre-dkms package for Lustre 2.10.3
>> after building it from SRPMS.
>>
>> the same problem occurs with lus
Hello.
I have problems installing the lustre-dkms package for Lustre 2.10.3
after building it from the SRPMs.
The same problem occurs with lustre-dkms-2.10.3-1.el7.noarch.rpm
downloaded from the official Lustre repo.
There is an error and the Lustre module is not built.
RHEL74 Linux 3.10.0-693.17.1
Hello,
Is project quota supported on Lustre 2.10.3/ZFS?
Apparently LU-7991 adds it as a feature.
Can anyone confirm it works?
thank you
Riccardo
rather
> than combined. 48 OSTs and about 70 clients. Pretty basic config.
> Fingers crossed on more similar success stories.
>
>
> Cheers,
>
> Scott
>
> --------
> From: Riccardo Veraldi
I am currently running 2.10.1 clients with server versions down
to 2.5 without trouble. I know there is no guarantee of full
interoperability, but so far I have not had problems.
Not sure if you can run 2.8 on CentOS 7.4. You can try to git clone the
latest 2.8.* source code and see
Hello,
are you using InfiniBand?
If so, what are the peer credit settings?
cat /proc/sys/lnet/nis
cat /proc/sys/lnet/peers
On 12/3/17 8:38 AM, E.S. Rosenberg wrote:
> Did you find the problem? Were there any useful suggestions off-list?
>
> On Wed, Nov 29, 2017 at 1:34 PM, Charles A Taylor <
I had problems between 2.9.0 servers and 2.10.1 clients.
From the 2.10.1 clients I can list directories and files on my
Lustre FS OSTs, but I am not able to access any file for reading or writing.
I solved the problem by decreasing the number of peer credits in ko2iblnd
to 8 instead of 128, cli
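That peer_credits change is a ko2iblnd module option, e.g. (a sketch; it takes effect when the module is reloaded):

# /etc/modprobe.d/ko2iblnd.conf
options ko2iblnd peer_credits=8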
Just curious to ask whether anyone else is using Lustre in a multi-rail
configuration?
thanks
Rick
Hello.
Here I am again, trying to make multi-rail work.
I configured multi-rail on the OSS and client side.
I have one OSS, one MDS and one client, RHEL 7.4 and Lustre 2.10.1:
* psdrp-tst-mds10 MDS
* drp-tst-oss10 OSS (172.21.52.86@o2ib 172.21.52.118@o2ib)
* drp-tst-lu10 Lustre client (172.21
end up with that. Sorry, but I did not understand
whether 2.10.1 is officially out or if it is a release candidate.
Thanks.
>
> Cheers, Andreas
>
> On Sep 27, 2017, at 21:22, Riccardo Veraldi
> wrote:
>> Hello.
>>
>> I configure Multi-rail on my lustre environment.
Just out of curiosity, how advisable is it to run an MDS on a virtual
machine (oVirt)?
Are there any performance comparisons/tests available?
thanks
On 9/28/17 9:49 AM, Dilger, Andreas wrote:
> On Sep 28, 2017, at 04:54, forrest.wc.l...@dell.com wrote:
>> Hello :
>>
>> Our customer is go
Hello.
I configured multi-rail on my Lustre environment:
MDS: 172.21.42.213@tcp
OSS: 172.21.52.118@o2ib
172.21.52.86@o2ib
Client: 172.21.52.124@o2ib
172.21.52.125@o2ib
[root@drp-tst-oss10:~]# cat /proc/sys/lnet/peers
nid refs state last max rtr min
Ah OK, the answer is already in your message, sorry :)
On 9/26/17 4:02 PM, Riccardo Veraldi wrote:
> On 9/26/17 3:12 PM, Thomas Roth wrote:
>> I don't know about RHEL74, but got it to work on CentOS 7.4 (kernel
>> 3.10.0-693.el7).
>>
>> After ZFS was working on
git from source code ?
thanks
Rick
> The kernel-abi-whitelists is not installed there.
>
> Regards,
> Thomas
>
> On 06.09.2017 04:02, Riccardo Veraldi wrote:
>> I have the kabi whitelist package on the system:
>>
>> kernel-abi-whitelists-3.10.0-693.1.1.el7.noar
I will add that these are the ZFS packages on the system:
libzfs2-0.7.1-1.el7_4.x86_64
zfs-release-1-5.el7_4.noarch
zfs-0.7.1-1.el7_4.x86_64
zfs-dkms-0.7.1-1.el7_4.noarch
libzfs2-devel-0.7.1-1.el7_4.x86_64
On 9/5/17 7:02 PM, Riccardo Veraldi wrote:
> I have the kabi whitelist package on
ware --with baseonly \
> --without kabichk \
> --define "buildid _lustre" \
> --target x86_64 \
> $_TOPDIR/SPECS/kernel.spec
>
> Malcolm.
>
> On 6/9/17, 10:30 am, "lustre-discuss on behalf of Riccardo Veraldi"
> riccardo.vera...@cnaf.infn.it> wrote:
Hello,
is it foreseen that Lustre 2.10.* will be compatible with RHEL 7.4?
I tried Lustre 2.10.52 but it complains about kABI.
thank you
Rick
> > # hop: -1
> > # priority: 0
> > # peer:
> > #     - primary nid: 192.168.1.2@o2ib
> > #       Multi-Rail: True
> > #       peer ni:
> > #         - nid: 192.168.1.2@o2ib
> > #         - nid: 192.168.2.2@