[lustre-discuss] Can we re-index the lustre-discuss archive DB?

2020-07-15 Thread Cameron Harr

To the person with the power,

I've been trying to search the lustre-discuss 
(http://lists.lustre.org/pipermail/lustre-discuss-lustre.org/) archives 
but it seems only old (<= 2013 perhaps) messages are searchable with the 
"Search" box. Is it possible to re-index the searchable DB to include 
recent/current messages?


Thanks,

Cameron

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Is there a ceiling on the number of Lustre filesystems a client can mount

2020-07-15 Thread 肖正刚
Hi, Jongwoo & Andreas,

Sorry for the ambiguous description.
What I want to know is the number of Lustre filesystems that a client can
mount at the same time.

Thanks



>
>
> Message: 1
> Date: Wed, 15 Jul 2020 14:29:10 +0800
> From: 肖正刚
> To: lustre-discuss@lists.lustre.org
> Subject: [lustre-discuss] Is there a ceiling on the number of Lustre
> filesystems a client can mount
> Message-ID:  hxr...@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi, all
> Is there a ceiling on the number of Lustre filesystems that can be mounted
> in a cluster?
> If so, what's the number?
> If not, how many is reasonable?
> Does mounting multiple filesystems affect the stability of each file
> system or cause other problems?
>
> Thanks!
>
> --
>
>
> Message: 3
> Date: Wed, 15 Jul 2020 23:45:57 +0900
> From: Jongwoo Han 
> To: 肖正刚
> Cc: lustre-discuss 
> Subject: Re: [lustre-discuss] Is there a ceiling on the number of Lustre
> filesystems a client can mount
> Message-ID:  kgfbw9qrommea3xscmy33l...@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> I think your question is ambiguous.
>
> What ceiling do you mean? Total storage capacity? number of disks? number
> of clients? number of filesystems?
>
> Please be more clear about it.
>
> Regards,
> Jongwoo Han
>
> On Wed, Jul 15, 2020 at 3:29 PM, 肖正刚 wrote:
>
> > Hi, all
> > Is there a ceiling on the number of Lustre filesystems that can be mounted
> > in a cluster?
> > If so, what's the number?
> > If not, how many is reasonable?
> > Does mounting multiple filesystems affect the stability of each file
> > system or cause other problems?
> >
> > Thanks!
> > ___
> > lustre-discuss mailing list
> > lustre-discuss@lists.lustre.org
> > http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
> >
>
>
> --
> Jongwoo Han
> +82-505-227-6108
>
> --
>
>
>
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Is there a ceiling on the number of Lustre filesystems a client can mount

2020-07-15 Thread Andreas Dilger
On Jul 15, 2020, at 12:29 AM, 肖正刚  wrote:
> 
> Hi, all
> Is there a ceiling on the number of Lustre filesystems that can be mounted in a cluster?
> If so, what's the number?
> If not, how many is reasonable?
> Does mounting multiple filesystems affect the stability of each file system
> or cause other problems?

Depending on what limits you are looking for, you may find this link useful:

https://build.whamcloud.com/job/lustre-manual//lastSuccessfulBuild/artifact/lustre_manual.xhtml#settinguplustresystem.tab2

For capacity and performance, the upper limits are probably "higher than what 
you have money for". :-)
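As far as the mechanics go, mounting several Lustre filesystems on one client is just one client mount per filesystem; a minimal sketch (the MGS NIDs, fsnames, and mount points below are made up, not from this thread):

  # mount two independent Lustre filesystems on the same client
  mount -t lustre 10.0.0.1@tcp:/fs1 /mnt/fs1
  mount -t lustre 10.0.0.2@tcp:/fs2 /mnt/fs2

  # or the equivalent /etc/fstab entries
  # 10.0.0.1@tcp:/fs1  /mnt/fs1  lustre  defaults,_netdev  0 0
  # 10.0.0.2@tcp:/fs2  /mnt/fs2  lustre  defaults,_netdev  0 0

  # list the Lustre devices the client has configured afterwards
  lctl dl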

Cheers, Andreas







___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Can we re-index the lustre-discuss archive DB?

2020-07-15 Thread Andreas Dilger
On Jul 15, 2020, at 6:07 PM, Cameron Harr  wrote:
> 
> To the person with the power,
> 
> I've been trying to search the lustre-discuss 
> (http://lists.lustre.org/pipermail/lustre-discuss-lustre.org/) archives but 
> it seems only old (<= 2013 perhaps) messages are searchable with the "Search" 
> box. Is it possible to re-index the searchable DB to include recent/current 
> messages?

Cameron, it looks like there is a full archive at:

https://marc.info/?l=lustre-discuss

Cheers, Andreas







___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] [EXTERNAL] Re: oss servers crashing

2020-07-15 Thread Kurt Strosahl
This is the trace up to the LBUG

[5533797.889690] Lustre: Skipped 341 previous similar messages
[5533958.749284] LustreError: 105499:0:(tgt_grant.c:571:tgt_grant_incoming()) lustre19-OST002c: cli 901dcd33-cf45-dad4-a0c7-89b9a1fb91b6/99656aa5a800 dirty 0 pend 0 grant -1310720
[5533958.754365] LustreError: 105499:0:(tgt_grant.c:573:tgt_grant_incoming()) LBUG
[5533958.756929] Pid: 105499, comm: ll_ost_io01_071 3.10.0-957.10.1.el7_lustre.x86_64 #1 SMP Tue Apr 30 22:18:15 UTC 2019
[5533958.756931] Call Trace:
[5533958.756948]  [] libcfs_call_trace+0x8c/0xc0 [libcfs]
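
For anyone chasing the same assert, one way to watch the grant accounting on the affected target is via lctl; a rough sketch, using the OST name from the log above and parameter names as I remember them from 2.12 (verify with lctl list_param):

  # on the OSS, dump the grant counters for the OST named in the LBUG
  lctl get_param obdfilter.lustre19-OST002c.tot_granted \
                 obdfilter.lustre19-OST002c.tot_pending \
                 obdfilter.lustre19-OST002c.tot_dirty

  # on a client, the grant it currently holds from that OST
  lctl get_param osc.*OST002c*.cur_grant_bytes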


From: Alex Zarochentsev 
Sent: Wednesday, July 15, 2020 11:20 AM
To: Kurt Strosahl 
Cc: lustre-discuss@lists.lustre.org 
Subject: [EXTERNAL] Re: [lustre-discuss] oss servers crashing

Hello!

On Wed, Jul 15, 2020 at 5:28 PM Kurt Strosahl <stros...@jlab.org> wrote:
Good Morning,

   Yesterday one of our Lustre file servers rebooted several times. The crash dump showed:

Can you please provide the failed Lustre assert message just above the kernel panic message?

Thanks,
Zam.


[14333982.153989] Pid: 381367, comm: ll_ost_io01_076 3.10.0-957.10.1.el7_lustre.x86_64 #1 SMP Tue Apr 30 22:18:15 UTC 2019
[14333982.153989] Kernel panic - not syncing: LBUG
[14333982.153990] Call Trace:
[14333982.153993] CPU: 4 PID: 380760 Comm: ll_ost_io01_072 Kdump: loaded Tainted: P   OE     3.10.0-957.10.1.el7_lustre.x86_64 #1
[14333982.153994] Hardware name: Supermicro Super Server/X11DPL-i, BIOS 3.1 05/21/2019
[14333982.153995] Call Trace:
[14333982.154002]  [] dump_stack+0x19/0x1b
[14333982.154006]  [] panic+0xe8/0x21f
[14333982.154018]  [] libcfs_call_trace+0x8c/0xc0 [libcfs]
[14333982.154026]  [] lbug_with_loc+0x9b/0xa0 [libcfs]
[14333982.154036]  [] lbug_with_loc+0x4c/0xa0 [libcfs]
[14333982.154096]  [] tgt_grant_incoming.isra.6+0x570/0x570 [ptlrpc]
[14333982.154174]  [] tgt_grant_prepare_read+0x0/0x3b0 [ptlrpc]
[14333982.154232]  [] tgt_grant_prepare_read+0x10b/0x3b0 [ptlrpc]
[14333982.154297]  [] tgt_grant_prepare_read+0x10b/0x3b0 [ptlrpc]
[14333982.154306]  [] ofd_preprw+0x450/0x1160 [ofd]

lustre versions:
lustre-resource-agents-2.12.1-1.el7.x86_64
lustre-2.12.1-1.el7.x86_64
kernel-devel-3.10.0-957.10.1.el7_lustre.x86_64
lustre-osd-zfs-mount-2.12.1-1.el7.x86_64
kernel-headers-3.10.0-957.10.1.el7_lustre.x86_64
kernel-3.10.0-957.10.1.el7_lustre.x86_64
lustre-zfs-dkms-2.12.1-1.el7.noarch

Could this be: https://jira.whamcloud.com/browse/LU-12120

w/r,

Kurt J. Strosahl
System Administrator: Lustre, HPC
Scientific Computing Group, Thomas Jefferson National Accelerator Facility

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] oss servers crashing

2020-07-15 Thread Alex Zarochentsev
Hello!

On Wed, Jul 15, 2020 at 5:28 PM Kurt Strosahl  wrote:

> Good Morning,
>
>    Yesterday one of our Lustre file servers rebooted several times. The
> crash dump showed:
>

Can you please provide the failed Lustre assert message just above the kernel
panic message?
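
If the console scrollback is gone, the kdump capture usually still has those lines; a rough sketch, assuming a stock RHEL 7 kexec-tools setup and a made-up crash directory name:

  # the LBUG/assert lines are normally in the dmesg snapshot saved by kdump
  grep -B 20 'LBUG' /var/crash/127.0.0.1-2020-07-14-12:00:00/vmcore-dmesg.txt

  # or, on a node that is still up, from the running kernel log
  dmesg | grep -B 20 'LBUG'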

Thanks,
Zam.


> [14333982.153989] Pid: 381367, comm: ll_ost_io01_076
> 3.10.0-957.10.1.el7_lustre.x86_64 #1 SMP Tue Apr 30 22:18:15 UTC 2019
> [14333982.153989] Kernel panic - not syncing: LBUG
> [14333982.153990] Call Trace:
> [14333982.153993] CPU: 4 PID: 380760 Comm: ll_ost_io01_072 Kdump: loaded
> Tainted: P   OE     3.10.0-957.10.1.el7_lustre.x86_64 #1
> [14333982.153994] Hardware name: Supermicro Super Server/X11DPL-i, BIOS
> 3.1 05/21/2019
> [14333982.153995] Call Trace:
> [14333982.154002]  [] dump_stack+0x19/0x1b
> [14333982.154006]  [] panic+0xe8/0x21f
> [14333982.154018]  [] libcfs_call_trace+0x8c/0xc0
> [libcfs]
> [14333982.154026]  [] lbug_with_loc+0x9b/0xa0 [libcfs]
> [14333982.154036]  [] lbug_with_loc+0x4c/0xa0 [libcfs]
> [14333982.154096]  []
> tgt_grant_incoming.isra.6+0x570/0x570 [ptlrpc]
> [14333982.154174]  [] tgt_grant_prepare_read+0x0/0x3b0
> [ptlrpc]
> [14333982.154232]  [] tgt_grant_prepare_read+0x10b/0x3b0
> [ptlrpc]
> [14333982.154297]  [] tgt_grant_prepare_read+0x10b/0x3b0
> [ptlrpc]
> [14333982.154306]  [] ofd_preprw+0x450/0x1160 [ofd]
>
> lustre versions:
> lustre-resource-agents-2.12.1-1.el7.x86_64
> lustre-2.12.1-1.el7.x86_64
> kernel-devel-3.10.0-957.10.1.el7_lustre.x86_64
> lustre-osd-zfs-mount-2.12.1-1.el7.x86_64
> kernel-headers-3.10.0-957.10.1.el7_lustre.x86_64
> kernel-3.10.0-957.10.1.el7_lustre.x86_64
> lustre-zfs-dkms-2.12.1-1.el7.noarch
>
> Could this be: https://jira.whamcloud.com/browse/LU-12120
>
> w/r,
>
> Kurt J. Strosahl
> System Administrator: Lustre, HPC
> Scientific Computing Group, Thomas Jefferson National Accelerator Facility
> ___
> lustre-discuss mailing list
> lustre-discuss@lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Is there a ceiling on the number of Lustre filesystems a client can mount

2020-07-15 Thread Jongwoo Han
I think your question is ambiguous.

What ceiling do you mean? Total storage capacity? number of disks? number
of clients? number of filesystems?

Please be more clear about it.

Regards,
Jongwoo Han

On Wed, Jul 15, 2020 at 3:29 PM, 肖正刚 wrote:

> Hi, all
> Is there a ceiling on the number of Lustre filesystems that can be mounted
> in a cluster?
> If so, what's the number?
> If not, how many is reasonable?
> Does mounting multiple filesystems affect the stability of each file
> system or cause other problems?
>
> Thanks!
> ___
> lustre-discuss mailing list
> lustre-discuss@lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>


-- 
Jongwoo Han
+82-505-227-6108
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] oss servers crashing

2020-07-15 Thread Kurt Strosahl
Good Morning,

   Yesterday one of our Lustre file servers rebooted several times. The crash dump showed:

[14333982.153989] Pid: 381367, comm: ll_ost_io01_076 3.10.0-957.10.1.el7_lustre.x86_64 #1 SMP Tue Apr 30 22:18:15 UTC 2019
[14333982.153989] Kernel panic - not syncing: LBUG
[14333982.153990] Call Trace:
[14333982.153993] CPU: 4 PID: 380760 Comm: ll_ost_io01_072 Kdump: loaded Tainted: P   OE     3.10.0-957.10.1.el7_lustre.x86_64 #1
[14333982.153994] Hardware name: Supermicro Super Server/X11DPL-i, BIOS 3.1 05/21/2019
[14333982.153995] Call Trace:
[14333982.154002]  [] dump_stack+0x19/0x1b
[14333982.154006]  [] panic+0xe8/0x21f
[14333982.154018]  [] libcfs_call_trace+0x8c/0xc0 [libcfs]
[14333982.154026]  [] lbug_with_loc+0x9b/0xa0 [libcfs]
[14333982.154036]  [] lbug_with_loc+0x4c/0xa0 [libcfs]
[14333982.154096]  [] tgt_grant_incoming.isra.6+0x570/0x570 [ptlrpc]
[14333982.154174]  [] tgt_grant_prepare_read+0x0/0x3b0 [ptlrpc]
[14333982.154232]  [] tgt_grant_prepare_read+0x10b/0x3b0 [ptlrpc]
[14333982.154297]  [] tgt_grant_prepare_read+0x10b/0x3b0 [ptlrpc]
[14333982.154306]  [] ofd_preprw+0x450/0x1160 [ofd]

lustre versions:
lustre-resource-agents-2.12.1-1.el7.x86_64
lustre-2.12.1-1.el7.x86_64
kernel-devel-3.10.0-957.10.1.el7_lustre.x86_64
lustre-osd-zfs-mount-2.12.1-1.el7.x86_64
kernel-headers-3.10.0-957.10.1.el7_lustre.x86_64
kernel-3.10.0-957.10.1.el7_lustre.x86_64
lustre-zfs-dkms-2.12.1-1.el7.noarch

Could this be: https://jira.whamcloud.com/browse/LU-12120
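
One way to check whether a given ticket's patches are in the tree we run would be to grep the commit log of lustre-release; a rough sketch (the repository URL is from memory, and the tag to compare against depends on the release, so confirm with git tag first):

  # clone the release tree and look for commits referencing the ticket
  git clone git://git.whamcloud.com/fs/lustre-release.git
  cd lustre-release
  git log --oneline --grep='LU-12120'

  # then see which release tags already contain a candidate commit
  git tag --contains <commit-hash>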

w/r,

Kurt J. Strosahl
System Administrator: Lustre, HPC
Scientific Computing Group, Thomas Jefferson National Accelerator Facility
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org