[ceph-users] Re: Is it normal for a orch osd rm drain to take so long?

2021-12-02 Thread Zach Heise (SSCC)
Can do ceph -s:

  cluster:
    id:
    health: HEALTH_OK

  services:
    mon: 4 daemons, quorum ceph05,ceph04,ceph01,ceph03 (age 4d)
    mgr: ceph01.fblojp(active, since 25h), standbys: ceph03.futetp
    mds: 1/1 daemons up, 1 standby
    osd: 32 osds: 32 up (since 9d), 31 in
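The 32 up / 31 in count is consistent with osd.14 still being marked out while it drains. Assuming a cephadm-managed cluster (which the orch osd rm command in the subject implies), drain progress can be checked with something like:

  ceph orch osd rm status             # DRAINING/DONE state and remaining PG count per queued removal
  ceph osd safe-to-destroy osd.14     # reports whether the OSD can now be removed without data loss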

[ceph-users] Re: Is it normal for a orch osd rm drain to take so long?

2021-12-02 Thread David Orman
Hi,

It would be good to have the full output. Does iostat show the backing device performing I/O? Additionally, what does ceph -s show for cluster state? Also, can you check the logs on that OSD and see if anything looks abnormal?

David

On Thu, Dec 2, 2021 at 1:20 PM Zach Heise (SSCC) wrote:
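For reference, the checks being asked for might look roughly like the following (the device path is a placeholder, not taken from the thread):

  iostat -xm 2 /dev/sdX          # is the backing device actually doing I/O?
  ceph -s                        # overall cluster health and any recovery/backfill activity
  cephadm logs --name osd.14     # daemon log for the OSD being drained (cephadm deployments)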

[ceph-users] Re: Is it normal for a orch osd rm drain to take so long?

2021-12-02 Thread Zach Heise (SSCC)
Good morning David,

Assuming you need/want to see the data about the other 31 OSDs, osd.14 is showing:

  ID  CLASS  WEIGHT  REWEIGHT  SIZE  RAW USE
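If only osd.14's row is wanted from the full listing, plain text filtering over ceph osd df works (nothing ceph-specific):

  ceph osd df | head -n 1        # print the column headers
  ceph osd df | awk '$1 == 14'   # print only the row whose ID column is 14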

[ceph-users] Re: Is it normal for a orch osd rm drain to take so long?

2021-12-01 Thread David Orman
What's "ceph osd df" show? On Wed, Dec 1, 2021 at 2:20 PM Zach Heise (SSCC) wrote: > I wanted to swap out on existing OSD, preserve the number, and then remove > the HDD that had it (osd.14 in this case) and give the ID of 14 to a new > SSD that would be taking its place in the same node. First