You can use "lctl dk" to dump the kernel debug log on the MDS/OSS nodes and
grep for the LFSCK messages, but if there are lots of messages the kernel
debug buffer may not be large enough to hold them all.
Another option is to enable "lctl set_param printk=+lfsck" on the MDS and OSS
and have it print the LFSCK messages to the console log.
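A minimal sketch of the dump-and-filter step (the lctl commands are shown as comments because they need a live Lustre node; the file name and the debug-log lines below are fabricated for illustration):

```shell
# On a live MDS/OSS you would first dump the debug buffer, e.g.:
#   lctl dk /tmp/lustre-debug.log
# and optionally raise the buffer size so messages are not lost:
#   lctl set_param debug_mb=512
# Here we fabricate a tiny dump so the filtering step can be shown:
cat > /tmp/lustre-debug.log <<'EOF'
00000001:00000001:0.0:1562272840.000000:0:1234:0:(mdt_handler.c:100:mdt_reint()) ordinary message
10000000:10000000:0.0:1562272841.000000:0:1235:0:(lfsck_layout.c:200:lfsck_layout_scan()) lfsck_layout: scanning
EOF
# Keep only the LFSCK-related lines (case-insensitive):
grep -i lfsck /tmp/lustre-debug.log
```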
If you only have those two processor models to choose from, I'd do the 5217
for the MDS and the 5218 for the OSS. If you were using ZFS for a backend,
definitely the 5218 for the OSS. With ZFS your processors are also your RAID
controller, so you have the disk I/O, parity calculation, checksums and ZFS
threads
Hello Jeff,
Thanks for your quick answer. We plan to use ldiskfs, but I would be
interested to know what would be a good fit for zfs.
Simon
> De: "Jeff Johnson"
> À: "Simon Legrand"
> Cc: "lustre-discuss"
> Envoyé: Jeudi 4 Juillet 2019 20:40:40
> Objet: Re: [lustre-discuss] Frequency vs Cores for
Simon,
Which backend do you plan on using? ldiskfs or zfs?
—Jeff
On Thu, Jul 4, 2019 at 10:41 Simon Legrand wrote:
> Dear all,
>
> We are currently configuring a Lustre filesystem and facing a dilemma. We
> have the choice between two types of processors for an OSS and a MDS.
> - Intel Xeon
Dear all,
We are currently configuring a Lustre filesystem and facing a dilemma. We have
the choice between two types of processors for an OSS and an MDS.
- Intel Xeon Gold 5217 3GHz, 11M Cache, 10.40GT/s, 2UPI, Turbo, HT, 8C/16T (115W)
- DDR4-2666
- Intel Xeon Gold 5218 2.3GHz, 22M
We encountered this in testing some time ago and already have a bug filed
(don't recall the number right now) and should have a patch soonish if not
already. The gist of the problem is the changelog registration limit (integer
type) plus some padding, resulting in an artificially low limit.
On Wed, Jul 3, 2019 at 2:15 PM Kurt Strosahl wrote:
>
> Hopefully a simple question... If I run lctl lfsck_start is there a place
> where I can get a list of what it did?
>
>
Kurt,
As far as I know, this is still an open feature request...
https://jira.whamcloud.com/browse/LU-5202 (LFSCK 5:
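In the meantime, the per-target LFSCK status counters are the closest thing to a summary. A minimal sketch, assuming the lfsck_layout proc output format (the counter values below are fabricated; on a real MDS you would read them with `lctl get_param -n mdd.<fsname>-MDT0000.lfsck_layout`):

```shell
# Fabricated lfsck_layout status dump, standing in for the output of:
#   lctl get_param -n mdd.<fsname>-MDT0000.lfsck_layout
cat > /tmp/lfsck_layout.status <<'EOF'
name: lfsck_layout
status: completed
repaired_dangling: 3
repaired_unmatched_pair: 0
repaired_orphan: 1
EOF
# Pull out just the repair counters:
awk -F': ' '/^repaired_/ {print $1, $2}' /tmp/lfsck_layout.status
```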
Hi all,
when adding an MDT2 to a system with MGS+MDT0 and MDT1, there was an
interruption, and the MGS at first reported:
LustreError: 140-5: Server hebe-MDT0002 requested index 2, but that index is
already in use. Use --writeconf to force
LustreError: 30446:0:(mgs_handler.c:535:mgs_target_reg())
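For reference, the --writeconf recovery that the error message points at is roughly the following. This is only a sketch, the device paths are examples, and since writeconf erases and regenerates the configuration logs you should check the Lustre manual before running it:

```shell
# 1) Unmount all clients, then all targets (OSTs, MDTs, MGS last).
# 2) On each target device, regenerate its configuration log:
#      tunefs.lustre --writeconf /dev/mapper/mdt0
#      tunefs.lustre --writeconf /dev/mapper/ost0
# 3) Remount in order: MGS/MDT0 first, then the other MDTs, OSTs, clients.
```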
I just tried out this configuration and was able to reproduce what Scott
saw on 2.12.2.
I couldn't see a Jira ticket for this though, so I've opened a new
one: https://jira.whamcloud.com/browse/LU-12506
Cheers,
--
Matt Rásó-Barnett
University of Cambridge
On Wed, May 22, 2019 at