Re: [ceph-users] Usage of devices in SSD pool vary very much

2019-01-26 Thread Konstantin Shalygin
On 1/26/19 10:24 PM, Kevin Olbrich wrote: I just had the time to check again: even after removing the broken OSD, the mgr still crashes. All OSDs are up and in. If I run "ceph balancer on" on a HEALTH_OK cluster, an optimization plan is generated and started. After a few minutes all MGRs die. This is
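
A sketch (not from the thread itself) for narrowing this down: leave the automatic balancer off and drive a single optimization plan by hand, so the step that kills the mgr becomes visible. The plan name "myplan" is arbitrary.

  ceph balancer off                 # keep automatic execution disabled
  ceph balancer mode upmap          # or crush-compat, depending on the cluster
  ceph balancer eval                # score the current distribution
  ceph balancer optimize myplan     # build a plan without executing it
  ceph balancer show myplan         # inspect the proposed changes
  ceph balancer eval myplan         # score the plan before applying it
  ceph balancer execute myplan      # apply only if the dry run looks sane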

Re: [ceph-users] Usage of devices in SSD pool vary very much

2019-01-26 Thread Kevin Olbrich
Hi! I just had the time to check again: even after removing the broken OSD, the mgr still crashes. All OSDs are up and in. If I run "ceph balancer on" on a HEALTH_OK cluster, an optimization plan is generated and started. After a few minutes all MGRs die. This is a major problem for me, as I still got
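
A hedged aside: when every mgr dies shortly after "ceph balancer on", the traceback usually lands in the log of whichever mgr was active, and it is worth capturing before restarting anything. The daemon name mon01 below is a placeholder.

  journalctl -u ceph-mgr@mon01 --since "30 min ago"   # systemd journal of the mgr daemon
  less /var/log/ceph/ceph-mgr.mon01.log               # or the plain log file
  ceph mgr module ls                                  # which mgr modules are enabled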

Re: [ceph-users] Usage of devices in SSD pool vary very much

2019-01-05 Thread Konstantin Shalygin
On 1/5/19 4:17 PM, Kevin Olbrich wrote:
root@adminnode:~# ceph osd tree
ID  CLASS WEIGHT   TYPE NAME                STATUS REWEIGHT PRI-AFF
 -1       30.82903 root default
-16       30.82903     datacenter dc01
-19       30.82903         pod dc01-agg01
-10       17.43365             rack dc

Re: [ceph-users] Usage of devices in SSD pool vary very much

2019-01-05 Thread Kevin Olbrich
root@adminnode:~# ceph osd tree
ID  CLASS WEIGHT   TYPE NAME                STATUS REWEIGHT PRI-AFF
 -1       30.82903 root default
-16       30.82903     datacenter dc01
-19       30.82903         pod dc01-agg01
-10       17.43365             rack dc01-rack02
 -4        7.20665

Re: [ceph-users] Usage of devices in SSD pool vary very much

2019-01-04 Thread Konstantin Shalygin
On 1/5/19 1:51 AM, Kevin Olbrich wrote: PS: Could be http://tracker.ceph.com/issues/36361 There is one HDD OSD that is out (which will not be replaced, because the SSD pool will get the images and the HDD pool will be deleted). Please paste your `ceph osd tree` and `ceph osd df tree`. k
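
For reference, the requested output can be collected in one pass; the file names are arbitrary and only keep the mailing-list paste readable.

  ceph osd tree          > osd-tree.txt
  ceph osd df tree       > osd-df-tree.txt
  ceph osd df | grep ssd > osd-df-ssd.txt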

Re: [ceph-users] Usage of devices in SSD pool vary very much

2019-01-04 Thread Kevin Olbrich
PS: Could be http://tracker.ceph.com/issues/36361 There is one HDD OSD that is out (which will not be replaced, because the SSD pool will get the images and the HDD pool will be deleted). Kevin On Fri, 4 Jan 2019 at 19:46, Kevin Olbrich wrote: > > Hi! > > I did what you wrote but my MGRs s
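
If that out HDD OSD is never coming back, removing it completely is one way to take http://tracker.ceph.com/issues/36361 out of the picture. A sketch, assuming the OSD id is N (substitute the real id):

  systemctl stop ceph-osd@N                    # on the host that carries the OSD
  ceph osd purge osd.N --yes-i-really-mean-it  # removes the CRUSH entry, auth key and OSD id in one step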

Re: [ceph-users] Usage of devices in SSD pool vary very much

2019-01-04 Thread Kevin Olbrich
Hi! I did what you wrote but my MGRs started to crash again:
root@adminnode:~# ceph -s
  cluster:
    id: 086d9f80-6249-4594-92d0-e31b6a9c
    health: HEALTH_WARN
            no active mgr
            105498/6277782 objects misplaced (1.680%)
  services:
    mon: 3 daemons, quorum mon01,m
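
A hedged recovery sequence (daemon names are placeholders): bring a mgr back first, then stop the balancer module before it builds another plan.

  systemctl restart ceph-mgr@mon01   # on each mgr host
  ceph -s                            # wait until "mgr:" shows an active daemon again
  ceph balancer off                  # stop the module before it starts a new plan
  ceph balancer status               # confirm the module stays idle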

Re: [ceph-users] Usage of devices in SSD pool vary very much

2019-01-02 Thread Konstantin Shalygin
On a medium-sized cluster with device classes, I am experiencing a problem with the SSD pool:
root@adminnode:~# ceph osd df | grep ssd
ID CLASS WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS
 2 ssd   0.43700  1.0 447GiB
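
A hedged check that usually goes with uneven usage inside one device class: confirm the pool really maps only to class ssd and look at the per-OSD PG counts, since a wide spread of PGs per OSD is enough to explain a VAR around 1.3. The rule and pool names below are assumptions.

  ceph osd crush class ls                  # device classes known to the cluster
  ceph osd crush rule dump replicated_ssd  # does the rule restrict itself to class ssd?
  ceph osd pool get ssd-pool crush_rule    # which rule the pool actually uses
  ceph osd df tree | grep ssd              # PGS column: placement groups per OSD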

[ceph-users] Usage of devices in SSD pool vary very much

2019-01-02 Thread Kevin Olbrich
Hi! On a medium-sized cluster with device classes, I am experiencing a problem with the SSD pool:
root@adminnode:~# ceph osd df | grep ssd
ID CLASS WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS
 2 ssd   0.43700  1.0 447GiB 254GiB 193GiB 56.77 1.28  50
 3 ssd   0.43700  1.0 4
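
One generic way to flatten this kind of imbalance without the balancer module, sketched here with conservative limits: dry-run a bounded reweight-by-utilization pass first, then apply it.

  ceph osd test-reweight-by-utilization 110 0.05 10   # dry run: 110% overload threshold, max change 0.05, at most 10 OSDs
  ceph osd reweight-by-utilization 110 0.05 10        # apply the same bounded pass
  ceph osd df | grep ssd                              # re-check VAR afterwards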