[ceph-users] restoring ceph cluster from osds

2023-03-07 Thread Ben
Hi,

I ended up with the whole set of OSDs from the original Ceph cluster and am trying to bring it back. I managed to get the cluster running, but its status is as below:

bash-4.4$ ceph -s

  cluster:

id: 3f271841-6188-47c1-b3fd-90fd4f978c76

health: HEALTH_WARN

7 daemons have recently crashed

4 slow ops, oldest one blocked for 35077 sec, daemons
[mon.a,mon.b] have slow ops.



  services:

mon: 3 daemons, quorum a,b,d (age 9h)

mgr: b(active, since 14h), standbys: a

osd: 4 osds: 0 up, 4 in (since 9h)



  data:

pools:   0 pools, 0 pgs

objects: 0 objects, 0 B

usage:   0 B used, 0 B / 0 B avail

pgs:


All OSDs are down.

I checked the OSD logs and attached them to this message.

Please help. I wonder whether it is possible to get the cluster back at all. I have a backup of the monitors' data, but I have not restored it so far.


Thanks,

Ben
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] restoring ceph cluster from osds

2023-03-04 Thread Ben
Hi,

I ended up with the whole set of OSDs from the original Ceph cluster and am trying to bring it back. I managed to get the cluster running, but its status is as below:

bash-4.4$ ceph -s

  cluster:

id: 3f271841-6188-47c1-b3fd-90fd4f978c76

health: HEALTH_WARN

7 daemons have recently crashed

4 slow ops, oldest one blocked for 35077 sec, daemons
[mon.a,mon.b] have slow ops.



  services:

mon: 3 daemons, quorum a,b,d (age 9h)

mgr: b(active, since 14h), standbys: a

osd: 4 osds: 0 up, 4 in (since 9h)



  data:

pools:   0 pools, 0 pgs

objects: 0 objects, 0 B

usage:   0 B used, 0 B / 0 B avail

pgs:


All OSDs are down.
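They are still marked "in" but never report up. I assume the next step is to look at the recent crash reports and at how the OSDs appear to the monitors, along these lines (standard Ceph CLI; <crash-id> is a placeholder, output omitted here):

bash-4.4$ ceph crash ls                  # list the 7 recently crashed daemons from HEALTH_WARN
bash-4.4$ ceph crash info <crash-id>     # backtrace and metadata for one crash
bash-4.4$ ceph osd tree                  # how the 4 OSDs appear in the CRUSH map / osdmap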


I checked the OSD logs; osd.0's log follows below.


Please help. I wonder whether it is possible to get the cluster back at all. I have a backup of the monitors' data, but I have not restored it so far.
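In case restoring the monitor backup is not the right move, my reading of the documented "rebuild the monitor store from OSDs" procedure is roughly the sketch below. The mon-store directory, keyring path and mon IDs are placeholders for my setup (the mons here are a, b and d), and I have not run any of this yet:

# with all OSD daemons stopped, collect the cluster maps from every OSD's store
ms=/tmp/mon-store
mkdir -p $ms
for osd in /var/lib/ceph/osd/ceph-*; do
    ceph-objectstore-tool --data-path $osd --no-mon-config \
        --op update-mon-db --mon-store-path $ms
done

# rebuild a monitor store from the collected maps, using a keyring that
# contains the mon. key and the client.admin key
ceph-monstore-tool $ms rebuild -- --keyring /etc/ceph/keyring --mon-ids a b d

As far as I understand, the rebuilt store would then replace store.db on each monitor before restarting them. Is that the right direction for a cluster in this state, or should I try the monitor backup first?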


Thanks,

Ben



osd0 log:

debug 2023-03-04T04:41:50.620+ 7f824d7043c0  0 set uid:gid to 167:167
(ceph:ceph)

debug 2023-03-04T04:41:50.620+ 7f824d7043c0  0 ceph version 17.2.5
(98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable), process
ceph-osd, pid 1

debug 2023-03-04T04:41:50.620+ 7f824d7043c0  0 pidfile_write: ignore
empty --pid-file

debug 2023-03-04T04:41:50.622+ 7f824d7043c0  1 bdev(0x5570b19cd400
/var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block

debug 2023-03-04T04:41:50.622+ 7f824d7043c0  1 bdev(0x5570b19cd400
/var/lib/ceph/osd/ceph-0/block) open size 107374182400 (0x1900000000, 100
GiB) block_size 4096 (4 KiB) non-rotational discard supported

debug 2023-03-04T04:41:50.622+ 7f824d7043c0  1
bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 3221225472
meta 0.45 kv 0.45 data 0.06

debug 2023-03-04T04:41:50.622+ 7f824d7043c0  1 bdev(0x5570b19ccc00
/var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block

debug 2023-03-04T04:41:50.622+ 7f824d7043c0  1 bdev(0x5570b19ccc00
/var/lib/ceph/osd/ceph-0/block) open size 107374182400 (0x1900000000, 100
GiB) block_size 4096 (4 KiB) non-rotational discard supported

debug 2023-03-04T04:41:50.622+ 7f824d7043c0  1 bluefs add_block_device
bdev 1 path /var/lib/ceph/osd/ceph-0/block size 100 GiB

debug 2023-03-04T04:41:50.622+ 7f824d7043c0  1 bdev(0x5570b19ccc00
/var/lib/ceph/osd/ceph-0/block) close

debug 2023-03-04T04:41:50.934+ 7f824d7043c0  1 bdev(0x5570b19cd400
/var/lib/ceph/osd/ceph-0/block) close

debug 2023-03-04T04:41:51.481+ 7f824d7043c0  0 starting osd.0 osd_data
/var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal

debug 2023-03-04T04:41:51.482+ 7f824d7043c0 -1 Falling back to public
interface

debug 2023-03-04T04:41:51.492+ 7f824d7043c0  0 load: jerasure load: lrc

debug 2023-03-04T04:41:51.493+ 7f824d7043c0  1 bdev(0x5570b27f2000
/var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block

debug 2023-03-04T04:41:51.493+ 7f824d7043c0  1 bdev(0x5570b27f2000
/var/lib/ceph/osd/ceph-0/block) open size 107374182400 (0x1900000000, 100
GiB) block_size 4096 (4 KiB) non-rotational discard supported

debug 2023-03-04T04:41:51.493+ 7f824d7043c0  1
bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 3221225472
meta 0.45 kv 0.45 data 0.06

debug 2023-03-04T04:41:51.493+ 7f824d7043c0  1 bdev(0x5570b27f2000
/var/lib/ceph/osd/ceph-0/block) close

debug 2023-03-04T04:41:52.024+ 7f824d7043c0  1 bdev(0x5570b27f2000
/var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block

debug 2023-03-04T04:41:52.025+ 7f824d7043c0  1 bdev(0x5570b27f2000
/var/lib/ceph/osd/ceph-0/block) open size 107374182400 (0x1900000000, 100
GiB) block_size 4096 (4 KiB) non-rotational discard supported

debug 2023-03-04T04:41:52.025+ 7f824d7043c0  1
bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 3221225472
meta 0.45 kv 0.45 data 0.06

debug 2023-03-04T04:41:52.025+ 7f824d7043c0  1 bdev(0x5570b27f2000
/var/lib/ceph/osd/ceph-0/block) close

debug 2023-03-04T04:41:52.051+ 7f824d7043c0  1 mClockScheduler:
set_max_osd_capacity #op shards: 8 max osd capacity(iops) per shard: 464.18

debug 2023-03-04T04:41:52.051+ 7f824d7043c0  1 mClockScheduler:
set_osd_mclock_cost_per_io osd_mclock_cost_per_io: 0.050

debug 2023-03-04T04:41:52.051+ 7f824d7043c0  1 mClockScheduler:
set_osd_mclock_cost_per_byte osd_mclock_cost_per_byte: 0.110

debug 2023-03-04T04:41:52.051+ 7f824d7043c0  1 mClockScheduler:
set_mclock_profile mclock profile: high_client_ops

debug 2023-03-04T04:41:52.052+ 7f824d7043c0  0 osd.0:0.OSDShard using
op scheduler mClockScheduler

debug 2023-03-04T04:41:52.052+ 7f824d7043c0  1 bdev(0x5570b27f2000
/var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block

debug 2023-03-04T04:41:52.052+ 7f824d7043c0  1 bdev(0x5570b27f2000
/var/lib/ceph/osd/ceph-0/block) open size 107374182400 (0x1900000000, 100
GiB) block_size 4096 (4 KiB) non-rotational discard supported

debug
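Since the block device opens cleanly every time in the log above, I assume the data on the OSDs themselves is intact. If it helps, each OSD can presumably be verified offline (with the daemon stopped) with something like the following, using the same osd_data path shown in the log; the paths are of course specific to my deployment:

ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --no-mon-config --op list-pgs

show-label should print the BlueStore label, including whoami and the ceph_fsid (which should match cluster id 3f271841-6188-47c1-b3fd-90fd4f978c76), and list-pgs should list the placement groups still present on the OSD.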