Andrew
You will need to use 2.12.x (and 2.12.1 is in final release testing, so it would
be a good bet if you can wait a short while).
Peter
From: lustre-discuss on behalf of Andrew Elwell
Date: Wednesday, April 24, 2019 at 7:31 PM
To: "lustre-discuss@lists.lustre.org"
Subject: [lustre-discuss]
Hi folks,
I remember seeing a press release by DDN/Whamcloud last November saying that they
were going to support ARM, but can anyone point me to the current state of the
client?
I'd like to deploy it onto a Raspberry Pi cluster (only 4-5 nodes), ideally
on Raspbian, for demo / training purposes. (Yes I know
Hi,
you seem to be able to reproduce this fairly easily.
If so, could you please boot with the "slub_nomerge" kernel parameter
and then reproduce the (apparent) memory leak.
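If it helps, a rough sketch of one way to set that on a GRUB-based system is
below; the file and the grub2-mkconfig path are what a typical CentOS/RHEL
install uses, so adjust for your distro:

    # /etc/default/grub -- keep the options already present and append slub_nomerge
    GRUB_CMDLINE_LINUX="<existing options> slub_nomerge"

    # then regenerate the grub config and reboot, e.g.:
    # grub2-mkconfig -o /boot/grub2/grub.cfg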
I'm hoping that this will show some other slab that is actually using
the memory - a slab with very similar
On Mon, Apr 15, 2019 at 9:18 PM Jacek Tomaka wrote:
>
> >signal_cache should have one entry for each process (or thread-group).
>
> That is what I thought as well; looking at the kernel source, allocations
> from signal_cache happen only during fork.
>
>
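If it is useful, a rough way to sanity-check that (roughly one signal_cache
object per process / thread-group) is to compare the active_objs column for
signal_cache in /proc/slabinfo with the number of numeric entries in /proc.
The snippet below is only an illustrative, standalone check (reading
/proc/slabinfo usually needs root), not anything from the Lustre tree:

    #include <ctype.h>
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f;
        DIR *d;
        struct dirent *de;
        char line[512];
        long active = -1, nproc = 0;

        /* the first numeric column of /proc/slabinfo is active_objs */
        f = fopen("/proc/slabinfo", "r");
        if (f) {
            while (fgets(line, sizeof(line), f)) {
                if (strncmp(line, "signal_cache", 12) == 0) {
                    sscanf(line, "%*s %ld", &active);
                    break;
                }
            }
            fclose(f);
        }

        /* every running process shows up as a numeric directory in /proc */
        d = opendir("/proc");
        if (d) {
            while ((de = readdir(d)) != NULL)
                if (isdigit((unsigned char)de->d_name[0]))
                    nproc++;
            closedir(d);
        }

        printf("signal_cache active_objs=%ld, processes=%ld\n", active, nproc);
        return 0;
    }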
I was recently chasing an issue with
Does Lustre provide an optimized stat("filename", ...) that requires fewer
RPCs than fd = open("filename", ...); fstat(fd, ...);? If so, are there any
descriptions of this optimization?
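For concreteness, the two call patterns being compared are sketched below; this
is plain POSIX code with a made-up path, nothing Lustre-specific:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        struct stat st;
        int fd;

        /* pattern 1: stat by pathname, no open() */
        if (stat("/mnt/lustre/somefile", &st) == 0)
            printf("stat:  size=%lld\n", (long long)st.st_size);

        /* pattern 2: open the file, then fstat the descriptor */
        fd = open("/mnt/lustre/somefile", O_RDONLY);
        if (fd >= 0) {
            if (fstat(fd, &st) == 0)
                printf("fstat: size=%lld\n", (long long)st.st_size);
            close(fd);
        }
        return 0;
    }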
thanks,
kevin
Hi all,
OS=CentOS 7.5
Lustre 2.10.6
One of the OSSes (one OST only) was upgraded to ZFS 0.7.13, and LU-11507 forced
an upgrade of Lustre to 2.12.
It mounts, reconnects, and recovers, but then becomes unusable, and the MDS reports:
Lustre: 13650:0:(mdt_handler.c:5350:mdt_connect_internal()) test-MDT: