To the person with the power,
I've been trying to search the lustre-discuss
(http://lists.lustre.org/pipermail/lustre-discuss-lustre.org/) archives,
but it seems only old (<= 2013, perhaps) messages are searchable with the
"Search" box. Is it possible to re-index the searchable DB to include
recent messages as well?
On Jul 15, 2020, at 12:29 AM, 肖正刚 wrote:
>
> Hi, all
> Is there a ceiling for Lustre filesystems that can be mounted in a cluster?
> If so, what's the number?
> If not, how many is appropriate?
> Can mounting multiple filesystems affect the stability of each filesystem
> or cause other problems?
This is the trace up to the LBUG:
[5533797.889690] Lustre: Skipped 341 previous similar messages
[5533958.749284] LustreError: 105499:0:(tgt_grant.c:571:tgt_grant_incoming()) lustre19-OST002c: cli 901dcd33-cf45-dad4-a0c7-89b9a1fb91b6/99656aa5a800 dirty 0 pend 0 grant -1310720
[5533958.754365]
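For context, that tgt_grant_incoming() line is Lustre's grant-accounting sanity check firing: the server tracks how much dirty, pending, and granted space each client holds, and a negative value (grant -1310720 above) means the accounting has gone inconsistent, at which point the debug checks typically LBUG() rather than keep running with corrupt state. Below is a minimal user-space sketch of that invariant-check pattern; it is not the actual Lustre kernel source, and the struct and function names are made up for illustration:

    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified per-client grant state. Field and struct names here are
     * illustrative stand-ins, not the real Lustre identifiers. */
    struct client_grant {
        const char *uuid;  /* client UUID as shown in the console message */
        long dirty;        /* dirty bytes the client holds under grant */
        long pending;      /* bytes currently in flight */
        long grant;        /* bytes of space granted to the client */
    };

    /* Invariant check in the spirit of tgt_grant_incoming(): all three
     * counters must stay non-negative. If one goes negative, the server's
     * space accounting is inconsistent; abort() stands in for LBUG(). */
    static void grant_sanity_check(const char *ost, const struct client_grant *cg)
    {
        if (cg->dirty < 0 || cg->pending < 0 || cg->grant < 0) {
            fprintf(stderr, "%s: cli %s dirty %ld pend %ld grant %ld\n",
                    ost, cg->uuid, cg->dirty, cg->pending, cg->grant);
            abort();  /* crash deliberately rather than run on corrupt state */
        }
    }

    int main(void)
    {
        /* Values taken from the trace above: grant is negative. */
        struct client_grant cg = {
            .uuid = "901dcd33-cf45-dad4-a0c7-89b9a1fb91b6",
            .dirty = 0,
            .pending = 0,
            .grant = -1310720,
        };
        grant_sanity_check("lustre19-OST002c", &cg);
        return 0;
    }

Compiled and run, the sketch prints a line in the same format as the console message above and then aborts, mirroring how the server panics via LBUG instead of continuing with broken grant accounting.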
Hello!
On Wed, Jul 15, 2020 at 5:28 PM Kurt Strosahl wrote:
> Good Morning,
>
> Yesterday one of our Lustre file servers rebooted several times. The
> crash dump showed:
>
Can you please provide the failed Lustre assertion message just above the
kernel panic message?
Thanks,
Zam.
> [14333982
I think your question is ambiguous.
What ceiling do you mean? Total storage capacity? Number of disks? Number
of clients? Number of filesystems?
Please be more specific about it.
Regards,
Jongwoo Han
On Wed, Jul 15, 2020 at 3:29 PM, 肖正刚 wrote:
> Hi, all
> Is there a ceiling for a Lustre filesystem that
Good Morning,
Yesterday one of our Lustre file servers rebooted several times. The crash
dump showed:
[14333982.153989] Pid: 381367, comm: ll_ost_io01_076 3.10.0-957.10.1.el7_lustre.x86_64 #1 SMP Tue Apr 30 22:18:15 UTC 2019
[14333982.153989] Kernel panic - not syncing: LBUG
[14333982.15399