And one more, perhaps minor, observation:

Compared with previous restarts in the log files, I see the line
Lustre: MGS: Connection restored to 2519f316-4f30-9698-3487-70eb31a73320 (at 0@lo)

Before, it was
Lustre: MGS: Connection restored to c70c1b4e-3517-5631-28b1-7163f13e7bed (at 0@lo)

What is this number? A unique identifier for the MGS? And why does it change between restarts?
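
For comparison, the local obd devices and their UUIDs can be listed with lctl. A minimal sketch; that the MGC's per-mount UUID is the one quoted in the message above is my guess:

  # list the local obd devices together with their UUIDs; the MGC entry
  # shows a UUID generated at mount time, presumably what the MGS logs
  # in "Connection restored to ..."
  lctl dl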


Regards,
Thomas


On 11/03/2021 17.47, Thomas Roth via lustre-discuss wrote:
Hi all,

after not getting out of the ldlm_lockd situation, we are trying a shutdown plus restart.
This does not work at all. The very first mount of the restart is MGS + MDT0, of course.
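
For reference, this first mount is the usual command, roughly like the following (device path and mount point are placeholders, not our real ones):

  # mount the combined MGS/MDT0 target; paths are examples only
  mount -t lustre /dev/mapper/mdt0 /lustre/mdt0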

The server is quite busy writing traces to the log:


Mar 11 17:21:17 lxmds19.gsi.de kernel: INFO: task mount.lustre:2948 blocked for more than 120 seconds.
Mar 11 17:21:17 lxmds19.gsi.de kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 11 17:21:17 lxmds19.gsi.de kernel: mount.lustre    D ffff9616ffc5acc0     0  2948   2947 0x00000082
Mar 11 17:21:17 lxmds19.gsi.de kernel: Call Trace:
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a785da9>] schedule+0x29/0x70
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a7838b1>] schedule_timeout+0x221/0x2d0
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a0e17f6>] ? select_task_rq_fair+0x5a6/0x760
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a78615d>] wait_for_completion+0xfd/0x140
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a0db990>] ? wake_up_state+0x20/0x20
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0b7c9a4>] llog_process_or_fork+0x244/0x450 [obdclass]
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0b7cbc4>] llog_process+0x14/0x20 [obdclass]
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bafd05>] class_config_parse_llog+0x125/0x350 [obdclass]
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc077efc0>] mgc_process_cfg_log+0x790/0xc40 [mgc]
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc07824cc>] mgc_process_log+0x3dc/0x8f0 [mgc]
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc078315f>] ? config_recover_log_add+0x13f/0x280 [mgc]
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bb7f40>] ? class_config_dump_handler+0x7e0/0x7e0 [obdclass]
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0783b2b>] mgc_process_config+0x88b/0x13f0 [mgc]
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bbbb58>] lustre_process_log+0x2d8/0xad0 [obdclass]
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0a84177>] ? libcfs_debug_msg+0x57/0x80 [libcfs]
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0ba68b9>] ? lprocfs_counter_add+0xf9/0x160 [obdclass]
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bea8f4>] server_start_targets+0x13a4/0x2a20 [obdclass]
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bbebb0>] ? lustre_start_mgc+0x260/0x2510 [obdclass]
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bb7f40>] ? class_config_dump_handler+0x7e0/0x7e0 [obdclass]
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bed03c>] server_fill_super+0x10cc/0x1890 [obdclass]
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bc1a08>] lustre_fill_super+0x468/0x960 [obdclass]
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bc15a0>] ? lustre_common_put_super+0x270/0x270 [obdclass]
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a2510ff>] mount_nodev+0x4f/0xb0
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffffc0bb99a8>] lustre_mount+0x38/0x60 [obdclass]
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a251c7e>] mount_fs+0x3e/0x1b0
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a2707d7>] vfs_kern_mount+0x67/0x110
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a272f0f>] do_mount+0x1ef/0xd00
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a249daa>] ? __check_object_size+0x1ca/0x250
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a2288ec>] ? kmem_cache_alloc_trace+0x3c/0x200
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a273d63>] SyS_mount+0x83/0xd0
Mar 11 17:21:17 lxmds19.gsi.de kernel:  [<ffffffff8a792ed2>] system_call_fastpath+0x25/0x2a




Other than that, nothing is happening.

The Lustre processes have started, but, for example, recovery_status = Inactive.
OK, perhaps that is because there is nothing out there to recover besides this MDS; all other Lustre servers and clients are still stopped.
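
The status was read with the usual parameter, something like:

  # query the recovery state of all local MDT targets
  lctl get_param mdt.*.recovery_status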


Still, on previous occasions the mount would not block in this way. The device would be mounted; now it does not even make it into /proc/mounts.

Btw, the disk device can be mounted as type ldiskfs. So it exists, and on the inside it definitely looks like a Lustre MDT.
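
That check was along these lines, read-only and with placeholder paths:

  # mount the MDT block device directly as ldiskfs, read-only
  mount -t ldiskfs -o ro /dev/mapper/mdt0 /mnt/mdt_test
  ls /mnt/mdt_test     # an MDT typically shows CONFIGS/, O/, ROOT/, ...
  umount /mnt/mdt_test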


Best,
Thomas


--
--------------------------------------------------------------------
Thomas Roth
Department: Informationstechnologie
Location: SB3 2.291
Phone: +49-6159-71 1453  Fax: +49-6159-71 2986


GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstraße 1, 64291 Darmstadt, Germany, www.gsi.de

Commercial Register / Handelsregister: Amtsgericht Darmstadt, HRB 1528
Managing Directors / Geschäftsführung:
Professor Dr. Paolo Giubellino, Dr. Ulrich Breuer, Jörg Blaurock
Chairman of the Supervisory Board / Vorsitzender des GSI-Aufsichtsrats:
State Secretary / Staatssekretär Dr. Volkmar Dietz

