Ok, I'll try these params. thx!
From: Maged Mokhtar
Sent: December 12, 2018 10:51
To: Klimenko, Roman; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph pg backfill_toofull
There are two relevant params:
mon_osd_full_ratio
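
For anyone hitting the same backfill_toofull state, a rough sketch of raising both thresholds temporarily on a pre-Luminous cluster (the values are illustrative, and osd_backfill_full_ratio is an assumption on my part for the second param):

  ceph pg set_full_ratio 0.96                                  # cluster-wide full cutoff, default 0.95
  ceph tell osd.* injectargs '--osd_backfill_full_ratio 0.92'  # backfill refusal threshold, default 0.85

Lower both back to their defaults once backfill completes.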
Hi everyone. Yesterday I found that on our overcrowded Hammer ceph cluster (83%
used in the HDD pool) several osds were in the danger zone - near 95% full.
I reweighted them, and after a few moments I got pgs stuck in
backfill_toofull.
After that, I reapplied the reweight to the osds - no luck.
Currently, all re
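
A quick sketch for spotting the fullest osds before reweighting (the osd id and weight below are only examples):

  ceph osd df                         # per-osd utilisation (%USE), available since Hammer
  ceph health detail | grep toofull   # which pgs are affected
  ceph osd reweight 12 0.85           # temporary 0-1 override for a single osd

Note that reweighting alone may not clear backfill_toofull while the destination osds stay above osd_backfill_full_ratio.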
Hi everyone!
On the old prod cluster
- baremetal, 5 nodes (24 cpu, 256G RAM)
- ceph 0.80.9 filestore
- 105 osd, size 114TB (each osd 1.1T, SAS Seagate ST1200MM0018) , raw used 60%
- 15 journals (each journal 0.4TB, Toshiba PX04SMB040)
- net 20Gbps
- 5 pools, size 2, min_size 1
we have dis
Ok, thx, I'll try ceph-disk.
From: Alfredo Deza
Sent: November 15, 2018 20:16
To: Klimenko, Roman
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Migration osds to Bluestore on Ubuntu 14.04 Trusty
On Thu, Nov 15, 2018 at 8:57 AM Klimenko, Roman wrote:
Hi everyone!
As I noticed, ceph-volume lacks Ubuntu Trusty compatibility:
https://tracker.ceph.com/issues/23496
So I can't follow these instructions:
http://docs.ceph.com/docs/mimic/rados/operations/bluestore-migration/
Do I have any other option to migrate my Filestore osds (Luminous 12.2.9) to Bluestore?
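
For the archive, the ceph-disk route suggested above as a rough per-osd sketch (osd id 23 and /dev/sdX are placeholders; double-check the flags against the Luminous ceph-disk man page):

  ceph osd out 23
  # wait until the cluster is back to active+clean, then:
  stop ceph-osd id=23                        # upstart syntax on Trusty
  ceph osd purge 23 --yes-i-really-mean-it   # removes the osd from crush, auth and the osd map
  ceph-disk zap /dev/sdX
  ceph-disk prepare --bluestore /dev/sdX     # creates a fresh bluestore osd; udev activates it

Repeat one osd at a time to keep redundancy during the migration.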
Hi all.
I'm trying to deploy OpenStack with ceph kraken bluestore osds.
The deploy went well, but when I execute ceph osd tree I can see wrong weights
on the bluestore disks.
ceph osd tree | tail
-3 0.91849 host krk-str02
23 0.00980 osd.23 up 1.0 1.0
24 0.90869
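
If an osd registered with a near-zero crush weight like 0.00980 (i.e. the weight was not derived from the actual disk size), it can be corrected by hand; the osd id and target weight here are examples only:

  ceph osd crush reweight osd.23 0.90869   # crush weight is roughly the disk size in TiB

Data will then rebalance onto the osd according to its real capacity.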