There are several confusing/misleading comments on this thread that need to be
clarified...
On Oct 2, 2019, at 13:45, Hebenstreit, Michael
<michael.hebenstr...@intel.com> wrote:
http://wiki.lustre.org/Lustre_Tuning#Number_of_Inodes_for_MDS
Note that I've updated this page to reflect cur
Dear Lustre Fans,
I’m writing to alert you to the upcoming Lustre Community BoF session at SC19
and to ask for your help in shaping the session, so we can address topics from
the community.
The Lustre Community BoF will take place at 5:15 PM on Tuesday, November 19th
at SC19 in Denver at the
Hi All,
I was trying to build Lustre 2.12.2 on CentOS 7.6 with the recent update of the
kernel available for it (3.10.0-957.27.2.el7_lustre.x86_64; I have patched its
sources according to the instructions at wiki.lustre.org). I am using the vanilla
InfiniBand drivers with the rdma-core 25.1 userland
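For what it's worth, the usual from-source flow for a build against a patched kernel like that is roughly the following (a sketch only; the kernel source path is whichever directory matches the installed *_lustre kernel, and the in-kernel IB stack should be detected by configure):

  # build server RPMs against the patched kernel sources (paths are examples)
  sh autogen.sh
  ./configure --with-linux=/usr/src/kernels/3.10.0-957.27.2.el7_lustre.x86_64
  make rpms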
> On Oct 2, 2019, at 3:45 PM, Hebenstreit, Michael wrote:
>
> and I'd like to use --mkfsoptions='-i 1024' to have more inodes in the MDT.
> We already ran out of inodes on that FS (probably due to a ZFS bug in an early
> IEEL version), so I'd like to increase #inodes if possible.
I don’t th
Hi Michael,
With 1K inodes you won't have space to accommodate new features; IIRC the
current minimum on modern Lustre is 2K. If you're running out of
MDT space you might consider DNE and multiple MDTs to accommodate that
larger namespace.
-cf
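For reference, adding a second MDT and steering part of the namespace onto it typically looks something like this (a sketch; the fsname, device paths, MDT index, MGS NID, and mount points are all placeholders):

  # format and mount an additional MDT with the next free index
  mkfs.lustre --mdt --fsname=archive --index=1 --mgsnode=mgs@o2ib0 /dev/mdt1_dev
  mount -t lustre /dev/mdt1_dev /mnt/lustre-mdt1
  # from a client, create a directory whose entries live on the new MDT (index 1)
  lfs mkdir -i 1 /mnt/lustre/newdir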
On Wed, Oct 2, 2019 at 1:54 PM Hebenstr
http://wiki.lustre.org/Lustre_Tuning#Number_of_Inodes_for_MDS
and I'd like to use --mkfsoptions='-i 1024' to have more inodes in the MDT. We
already ran out of inodes on that FS (probably due to a ZFS bug in an early IEEL
version), so I'd like to increase #inodes if possible.
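For anyone following along: if I'm reading it right, the '-i' here is passed through to mke2fs as the bytes-per-inode ratio rather than the inode size, so the reformat and the resulting counts would look roughly like this (a sketch; fsname, device, and MGS NID are placeholders):

  # one inode per 1024 bytes of MDT space
  mkfs.lustre --mdt --fsname=archive --index=0 --mgsnode=mgs@o2ib0 \
      --mkfsoptions='-i 1024' /dev/mdt_dev
  # back-of-the-envelope: a 1 TiB MDT yields ~2^40/1024 = ~1.07 billion inodes
  # at -i 1024, versus ~268 million at a 4096-byte ratio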
-----Original Message-----
> On Oct 2, 2019, at 1:08 PM, Hebenstreit, Michael wrote:
>
> Could anyone point out to me what the downside of having an inode size of 1k
> on the MDT would be (compared to the 4k default)?
Are you talking about the inode size, or the “-i” option to mkfs.lustre (which
actually controls t
This is for an archiving system (1PB) with no performance or other special
requirements but a lot of small files (I know, not a Lustre speciality)
Thanks
Michael
From: Colin Faber
Sent: Wednesday, October 02, 2019 11:19
To: Hebenstreit, Michael
Cc: lustre-discuss@lists.lustre.org
Subject: Re:
Anything in dmesg? We need to know _why_ the network failed to start.
Chris Horn
From: Kurt Strosahl
Date: Wednesday, October 2, 2019 at 1:55 PM
To: Chris Horn , "lustre-discuss@lists.lustre.org"
Subject: Re: [lustre-discuss] Lustre rpm install creating a file that breaks
lustre
the lnet mod
The lnet modules load, but when I start the lnet service it says that the
network is down. I backed everything out, removed the file, and then started
the lnet service again, and it worked properly.
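To Chris's point, the places that usually show _why_ LNet refused to come up are something like the following (a sketch; adjust the grep pattern to the LND in use):

  # kernel-side errors from LNet / the o2ib LND
  dmesg | grep -iE 'lnet|ko2iblnd'
  # what the lnet service actually configured, and which NIs (if any) are up
  systemctl status lnet
  lnetctl net show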
From: Chris Horn
Sent: Wednesday, October 2, 2019 2:48 PM
To: K
Might be best to open a ticket for this. What was the nature of the failure?
Chris Horn
From: lustre-discuss on behalf of
Kurt Strosahl
Date: Wednesday, October 2, 2019 at 1:30 PM
To: "lustre-discuss@lists.lustre.org"
Subject: [lustre-discuss] Lustre rpm install creating a file that breaks lustre
Good Afternoon,
While getting lustre 2.10.8 running on a RHEL 7.7 system I found that the
RPM install was putting a file in /etc/modprobe.d that was preventing lnet from
starting properly.
The file is ko2iblnd.conf, which contains the following:
alias ko2iblnd-opa ko2iblnd
options ko2ibl
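For anyone who hits the same thing, one way to confirm that this packaged file is what is being picked up, and to back it out the way described above, might be (a sketch; the backup location is arbitrary):

  # see which modprobe.d file is supplying ko2iblnd aliases/options
  grep -r ko2iblnd /etc/modprobe.d/
  # move the packaged file aside, then restart LNet and check the NIs
  mv /etc/modprobe.d/ko2iblnd.conf /root/ko2iblnd.conf.bak
  systemctl restart lnet
  lnetctl net show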
It's too small to accommodate new features
On Wed, Oct 2, 2019 at 11:08 AM Hebenstreit, Michael
<michael.hebenstr...@intel.com> wrote:
> Could anyone point out to me what the downside of having an inode size of
> 1k on the MDT would be (compared to the 4k default)?
>
> Thanks
>
> Michael
Could anyone point out to me what the downside of having an inode size of 1k on
the MDT would be (compared to the 4k default)?
Thanks
Michael
Michael Hebenstreit Senior Cluster Architect
Intel Corporation, M
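One quick way to see how close an MDT already is to its inode limit (relevant to the "ran out of inodes" comment earlier in the thread) is the inode view of lfs df, run from any client (a sketch; the mount point is a placeholder):

  # inode totals and usage per MDT and OST
  lfs df -i /mnt/lustre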
Does anyone on this list have experience running Lustre clients on virtual
guests in a QEMU/KVM environment (using CentOS 7)?
The desired configuration for the base host is a Mellanox ConnectX-5 for 100Gb
Ethernet, but the KVM guests are running over a bridged Ethernet using the
virtio
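Not a direct answer, but for a client inside a KVM guest on a bridged virtio NIC the LNet side is normally just the socklnd over the guest's Ethernet device, e.g. (a sketch; eth0 is a placeholder for the virtio interface name inside the guest):

  # static version: /etc/modprobe.d/lustre.conf in the guest
  options lnet networks=tcp0(eth0)
  # or configure it at runtime and check the result
  lnetctl lnet configure
  lnetctl net add --net tcp0 --if eth0
  lnetctl net show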