And we found the following when the active MDS starts booting.
conf: 
[mds]
debug_mds = 0/20
debug_mds_balancer = 1
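
The same debug levels can presumably also be raised at runtime instead of via ceph.conf; a minimal sketch, assuming the daemon id gml-okd-cephfs-a seen in the log below:

# set for all MDS daemons via the config database
ceph config set mds debug_mds 0/20
ceph config set mds debug_mds_balancer 1
# or only for this one daemon, without touching the config database
ceph tell mds.gml-okd-cephfs-a config set debug_mds 0/20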

debug 2023-02-16T10:25:15.393+0000 7fd58cbc6780  0 set uid:gid to 167:167 (ceph:ceph)
debug 2023-02-16T10:25:15.393+0000 7fd58cbc6780  0 ceph version 16.2.4 (3cbe25cde3cfa028984618ad32de9edc4c1eaed0) pacific (stable), process ceph-mds, pid 1
debug 2023-02-16T10:25:15.395+0000 7fd58cbc6780  0 pidfile_write: ignore empty --pid-file
starting mds.gml-okd-cephfs-a at
debug 2023-02-16T10:28:02.642+0000 7fd575aef700  0 mds.0.journaler.pq(ro) _finish_read got error -2
debug 2023-02-16T10:28:02.642+0000 7fd575aef700 -1 mds.0.purge_queue _recover: Error -2 recovering write_pos
debug 2023-02-16T10:28:02.671+0000 7fd575aef700 -1 mds.0.350650 unhandled write error (2) No such file or directory, force readonly...
debug 2023-02-16T10:28:02.671+0000 7fd575aef700  0 log_channel(cluster) log [WRN] : force file system read-only
debug 2023-02-16T10:28:02.671+0000 7fd575aef700  0 mds.0.journaler.pq(ro) _finish_read got error -2
debug 2023-02-16T10:28:02.671+0000 7fd5742ec700  0 mds.0.cache creating system inode with ino:0x100
debug 2023-02-16T10:28:02.672+0000 7fd5742ec700  0 mds.0.cache creating system inode with ino:0x1
debug 2023-02-16T10:28:02.780+0000 7fd5732ea700  0 mds.0.350650 boot error forcing transition to read-only; MDS will try to continue
debug 2023-02-16T10:28:08.265+0000 7fd5782f4700 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.