[ceph-users] Re: Ceph OSD Node Maintenance Question

2020-08-17 Thread Matt Dunavant
Hello all, Thanks for the help. I believe we traced this down to an issue with the CRUSH rules. It seems osd_crush_chooseleaf_type = 0 somehow got placed into our configuration. This caused ceph osd crush rule dump to include the line '"op": "choose_firstn",' instead of 'chooseleaf_firstn'
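For comparison, a host-level replicated rule shows a chooseleaf step in the dump. A minimal sketch of what the relevant part typically looks like (the rule name here is only illustrative; check the rule your pool actually references):

    # inspect the rule the pool is using
    ceph osd crush rule dump replicated_rule
    # a host failure-domain rule carries a step of the form:
    #   { "op": "chooseleaf_firstn", "num": 0, "type": "host" }
    # whereas "choose_firstn" with "type": "osd" selects individual OSDs with
    # no host separation, so all replicas of a PG can land on one node and
    # client I/O stalls when that node goes down.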

[ceph-users] Re: Ceph OSD Node Maintenance Question

2020-08-15 Thread Lindsay Mathieson
Did you check the ceph status? ("ceph -s") On 16/08/2020 1:47 am, Matt Dunavant wrote: Hi all, We just completed maintenance on an OSD node and we ran into an issue where all data seemed to stop flowing while the node was down. We couldn't connect to any of our VMs during that time. I was und
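In case it is useful, the usual routine before planned OSD node maintenance is roughly the following, assuming replicas are spread across hosts and the pool can tolerate one host being down:

    ceph -s              # confirm HEALTH_OK and all PGs active+clean before starting
    ceph osd set noout   # keep CRUSH from marking OSDs out and rebalancing during the window
    # ... perform the maintenance and reboot the node ...
    ceph osd unset noout # let the cluster recover normally afterwards
    ceph -s              # watch until PGs return to active+clean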

[ceph-users] Re: Ceph OSD Node Maintenance Question

2020-08-15 Thread William Edwards
Do you mean I/O stopped on your VMs? > On 15 Aug 2020 at 17:48, Matt Dunavant wrote: > > Hi all, > > We just completed maintenance on an OSD node and we ran into an issue where > all data seemed to stop flowing while the node was down. We couldn't

[ceph-users] Re: Ceph OSD Node Maintenance Question

2020-08-15 Thread Eugen Block
What are size and min_size for that pool? Quoting Matt Dunavant: Yeah, the VMs didn't die completely, but they were all inaccessible during the maintenance period. Once the node that was under maintenance came back up, data started flowing again.
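For reference, both values can be read per pool; the pool name below is a placeholder:

    ceph osd pool get <pool> size       # number of replicas kept
    ceph osd pool get <pool> min_size   # replicas required before client I/O is allowed
    ceph osd pool ls detail             # shows size/min_size and crush_rule for every pool
    # with size=3, min_size=2 and a host failure domain, one host going down
    # should leave PGs degraded but still active.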

[ceph-users] Re: Ceph OSD Node Maintenance Question

2020-08-15 Thread Matt Dunavant
Yeah, the VMs didn't die completely, but they were all inaccessible during the maintenance period. Once the node that was under maintenance came back up, data started flowing again.
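If it happens again, the cluster status captured during the outage usually explains the stall; these standard commands would show whether PGs went inactive:

    ceph -s                      # summary: look for inactive/down/undersized PGs
    ceph health detail           # names the PGs and OSDs involved
    ceph pg dump_stuck inactive  # PGs with no active copy; these block client I/O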