Hello,
To begin with, I'd look further into the tunables you've listed:
tunables:
peer_timeout: 180
peer_credits: 8
peer_buffer_credits: 0
credits: 256
peercredits_hiw: 4
map_on_demand: 1
concurrent_sends:
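
For reference, these look like the o2ib LND (ko2iblnd) tunables; a minimal
sketch of pinning them as module parameters, assuming an InfiniBand setup
and using only the values from your listing (the concurrent_sends value was
cut off above, so it is left out), might be:

    # /etc/modprobe.d/lustre.conf -- sketch only, values copied from above
    options ko2iblnd peer_timeout=180 peer_credits=8 peer_buffer_credits=0 \
            credits=256 peer_credits_hiw=4 map_on_demand=1

The same values can also be applied through lnetctl YAML; either way they
only take effect once the LNet modules are reloaded.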
Hi Michael!
On 21.09.22 at 16:54, Michael Weiser via lustre-discuss wrote:
> That leaves the question of where that extended attribute user.lustre.lov is
> coming from. It appears that Lustre itself exposes a number of extended
> attributes for every file which reflect internal housekeeping data:
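
For example, a quick way to inspect those from a client (a sketch;
/mnt/lustre/somefile is a placeholder path) is getfattr:

    # dump all extended attributes, not just the user.* namespace
    getfattr -d -m '.*' /mnt/lustre/somefile

    # read only the striping attribute mentioned above, hex-encoded
    getfattr -n lustre.lov -e hex /mnt/lustre/somefile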
Hello Jeremy,
Hello Christian,
Hi all,
> >Resident wild-guy Christian here. In summary, I'm seeing an odd problem
> >from both Windows Explorer and Mac Finder when copy-pasting a large batch
> >of files to a Lustre-backed SMB share wherein the file manager appears to
> >enumerate over all the file
Thanks, Robert, for the feedback. Actually, I do not know anything about
Lustre at all.
I am also trying to contact the engineer who built the Lustre system for
more details about the drives.
To my knowledge, the LustreMDT pool is a 4-SSD disk group (named
/dev/mapper/SSD) with hardware RAID
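
Until then, one way to check what is actually behind that node (a sketch;
the /dev/mapper/SSD name is taken from above) could be:

    # show how the mapped device is assembled and whether members are SSDs
    lsblk -o NAME,SIZE,TYPE,ROTA /dev/mapper/SSD

    # if it is a multipath map, list the underlying paths
    multipath -ll SSD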
Hello,
I am trying to build the Lustre client 2.12.9 on RHEL 8:
rpmbuild --rebuild --without servers
/root/rpmbuild/SRPMS/lustre-client-dkms-2.12.9-1.el8.src.rpm
configure: error:
You seem to have an OFED installed but have not installed it's devel
package.
If you still want to build Lustre for your OFED I/B stack, you need to
install its devel headers RPM.
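
A sketch of one way past this on RHEL 8, assuming the in-kernel RDMA stack
is wanted rather than an external OFED (the package name is an assumption
about the setup):

    # install the devel headers configure is looking for;
    # a Mellanox OFED install would need its matching devel package instead
    dnf install rdma-core-devel

    # then retry the client rebuild
    rpmbuild --rebuild --without servers \
        /root/rpmbuild/SRPMS/lustre-client-dkms-2.12.9-1.el8.src.rpm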
Hi all,
In some Lustre documentation it says that when a file is closed, the MDS
can update the size, and when there is no open-for-write, the MDS holds
the authoritative file size.
But NFSv3 has no open and close operations, so how does the MDS manage
the size when the filesystem is accessed over NFSv3?