Re: [ceph-users] issue adding OSDs

2018-01-12 Thread Luis Periquito
"ceph versions" returned all daemons as running 12.2.1. On Fri, Jan 12, 2018 at 8:00 AM, Janne Johansson wrote: > Running "ceph mon versions" and "ceph osd versions" and so on as you do the > upgrades would have helped I guess. > > > 2018-01-11 17:28 GMT+01:00 Luis Periquito

Re: [ceph-users] issue adding OSDs

2018-01-12 Thread Janne Johansson
Running "ceph mon versions" and "ceph osd versions" and so on as you do the upgrades would have helped I guess. 2018-01-11 17:28 GMT+01:00 Luis Periquito : > this was a bit weird, but is now working... Writing for future > reference if someone faces the same issue. > > this

Re: [ceph-users] issue adding OSDs

2018-01-11 Thread Luis Periquito
This was a bit weird, but it is now working... Writing it up for future reference in case someone faces the same issue.

This cluster was upgraded from jewel to luminous following the recommended process. When that was finished I just set require_osd_release to luminous; however, I hadn't restarted the daemons since.
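
A minimal sketch of the sequence being described, assuming a systemd-managed luminous deployment (unit names and the <id> placeholder will vary per host):

    ceph osd require-osd-release luminous   # sets the require_osd_release flag mentioned above
    sudo systemctl restart ceph-osd@<id>    # restart OSDs one at a time
    ceph -s                                 # wait for the cluster to settle before the next restart
    ceph osd versions                       # confirm the restarted daemons now report 12.2.1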

[ceph-users] issue adding OSDs

2018-01-10 Thread Luis Periquito
Hi,

I'm running a cluster with 12.2.1 and adding more OSDs to it. Everything is running version 12.2.1 and require_osd_release is set to luminous.

One of the pools is replicated with size 2, min_size 1, and it is seemingly blocking IO while recovering. I have no slow requests; looking at the output of "ceph
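
A quick way to inspect the pool settings and PG states involved here (the pool name is a placeholder):

    ceph osd pool get <pool> size       # confirm the replica count (2 here)
    ceph osd pool get <pool> min_size   # confirm min_size (1 here)
    ceph health detail                  # shows degraded/undersized/peering PGs
    ceph pg dump_stuck                  # lists PGs stuck inactive or unclean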