Re: [ceph-users] Blocked requests activating+remapped after extending pg(p)_num

2018-05-17 Thread Kevin Olbrich
Hi! @Paul Thanks! I know, I read the whole thread about size 2 some months ago. But this was not my decision; I had to set it up like that. In the meantime, I rebooted node1001 and node1002 with the "noout" flag set; peering has now finished and only 0.0x% of objects are being rebalanced. IO is flowing
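
For reference, the noout reboot procedure mentioned above would look roughly like this (a minimal sketch using standard Ceph commands; the exact ordering is assumed rather than quoted from the mail):

  ceph osd set noout        # stop CRUSH from marking the rebooted OSDs out
  # reboot node1001, wait until its OSDs are back up and peering is done
  # reboot node1002, wait again
  ceph -s                   # confirm PGs are active (or rebalancing) again
  ceph osd unset noout      # restore normal out behaviour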

Re: [ceph-users] Blocked requests activating+remapped after extending pg(p)_num

2018-05-17 Thread Paul Emmerich
Check 'ceph pg query'; it will (usually) tell you why something is stuck inactive. Also: never use min_size 1. Paul
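
A minimal sketch of the diagnosis Paul suggests (the PG id and pool name below are placeholders, not taken from the thread):

  ceph health detail                    # lists the PG ids that are stuck
  ceph pg 7.3f query                    # '7.3f' is a placeholder; check recovery_state
  ceph osd pool set <pool> min_size 2   # undo min_size 1 on the affected pool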

Re: [ceph-users] Blocked requests activating+remapped after extending pg(p)_num

2018-05-17 Thread Kevin Olbrich
I was able to obtain another NVMe to get the HDDs in node1004 into the cluster. The number of disks (all 1TB) is now balanced between racks, still some inactive PGs:

  data:
    pools:   2 pools, 1536 pgs
    objects: 639k objects, 2554 GB
    usage:   5167 GB used, 14133 GB / 19300 GB avail
    p
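
A quick way to double-check that the distribution is really balanced and to see which PGs are still inactive (a sketch using standard Luminous commands; nothing here is quoted from the mail):

  ceph osd df tree              # per-OSD usage and PG counts, grouped by rack/host
  ceph pg dump_stuck inactive   # PGs that are not yet active, with their acting sets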

Re: [ceph-users] Blocked requests activating+remapped after extending pg(p)_num

2018-05-17 Thread Kevin Olbrich
Ok, I just waited some time but I still got some "activating" issues:

  data:
    pools:   2 pools, 1536 pgs
    objects: 639k objects, 2554 GB
    usage:   5194 GB used, 11312 GB / 16506 GB avail
    pgs:     7.943% pgs not active
             5567/1309948 objects degraded (0.425%)
             1
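
PGs stuck in "activating" after a pg_num increase are often held back by the per-OSD PG limit, so it may be worth comparing the PG count per OSD against the limits (a sketch; the defaults in the comments are what I recall for Luminous and should be treated as assumptions):

  ceph osd df                   # the PGS column shows how many PGs each OSD carries
  # run on the host of osd.0; activation is blocked above
  # mon_max_pg_per_osd * osd_max_pg_per_osd_hard_ratio (defaults around 200 and 2)
  ceph daemon osd.0 config get mon_max_pg_per_osd
  ceph daemon osd.0 config get osd_max_pg_per_osd_hard_ratio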

Re: [ceph-users] Blocked requests activating+remapped after extending pg(p)_num

2018-05-17 Thread Kevin Olbrich
PS: The cluster is currently size 2. I used PGCalc on the Ceph website, which by default places 200 PGs on each OSD. I read about the protection in the docs and later noticed that I should rather have placed only 100 PGs per OSD.
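
For context, the usual rule of thumb behind PGCalc (a sketch of the standard formula from the Ceph docs; the OSD count below is illustrative, not a figure from the thread):

  # total PGs for a pool ~= (OSDs * target PGs per OSD) / replica size,
  # rounded up to the next power of two
  osds=20; size=2; target=100
  echo $(( osds * target / size ))   # 1000, round up to 1024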

Re: [ceph-users] Blocked requests activating+remapped after extending pg(p)_num

2018-05-17 Thread Kevin Olbrich
Hi! Thanks for your quick reply. Before I read your mail, I applied the following conf to my OSDs:

  ceph tell 'osd.*' injectargs '--osd_max_pg_per_osd_hard_ratio 32'

Status is now:

  data:
    pools:   2 pools, 1536 pgs
    objects: 639k objects, 2554 GB
    usage:   5211 GB used, 11295 GB / 16506
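
If that value is meant to survive OSD restarts it would also have to be set in ceph.conf, since injectargs only changes the running daemons (a sketch, assuming the usual section layout):

  # /etc/ceph/ceph.conf
  [osd]
  osd_max_pg_per_osd_hard_ratio = 32   # same value as the injectargs call above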

Re: [ceph-users] Blocked requests activating+remapped after extending pg(p)_num

2018-05-17 Thread Burkhard Linke
Hi, On 05/17/2018 01:09 PM, Kevin Olbrich wrote: Hi! Today I added some new OSDs (nearly doubled) to my luminous cluster. I then changed pg(p)_num from 256 to 1024 for that pool because it was complaining about to few PGs. (I noticed that should better have been small changes). This is the cu