Hello!
Luminous v12.1.2
RGW SSD tiering over an EC pool works fine.
But I want to be able to change the erasure code type (now and in the future).
The erasure code type cannot be changed on the fly;
only a new pool with the new coding can be created.
My first idea was to add a second tiering level,
SSD - EC - ISA, and to evict everything down,
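For the "new pool with the new coding" part, I mean roughly this (profile name, pool name and k/m values below are only placeholders, not what I actually run):

# new EC profile using the isa plugin, then a new data pool on top of it
ceph osd erasure-code-profile set ecprofile-isa plugin=isa k=4 m=2 crush-failure-domain=host
ceph osd pool create rgw.buckets.data.new 256 256 erasure ecprofile-isa
ceph osd pool application enable rgw.buckets.data.new rgw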
Hello, Roger!
ceph pg dump
ceph pg deep-scrub <pgid>
or just kick them all:
ceph pg dump | grep -i active+clean | awk '{print $1}' | while read i; do ceph pg deep-scrub ${i}; done
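And to see that the scrubs are actually running, just grep the pg states:

ceph pg dump | grep -c scrubbing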
--
Petr Malkov
-
Message: 57
Date: Wed, 19 Jul 2017 16:38:20 +
From: Roger Brown
v12.1.0 Luminous RC released
BlueStore:
The new BlueStore backend for ceph-osd is now stable and the new
default for newly created OSDs.
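(For reference, a newly created BlueStore OSD would be prepared roughly like this; the device path is just a placeholder:)

# prepare and activate a BlueStore OSD on an empty disk
ceph-disk prepare --bluestore /dev/sdb
ceph-disk activate /dev/sdb1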
[global]
fsid = a737f8ad-b959-4d44-ada7-2ed6a2b8802b
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 192.168.148.189,192.168.148.5,192.168.148.43
RGW hammer -> jewel
The following method helped me.
After the upgrade, re-create the RGW instance on jewel:
ceph auth del client.rgw.ceph403
rm -rf /var/lib/ceph/radosgw/ceph-rgw.ceph403/
ceph-deploy --overwrite-conf rgw create ceph403
systemctl stop ceph-radosgw.target
systemctl start ceph-radosgw.target
systemctl status ceph-radosgw.target
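After that I check that the gateway answers (assuming the default civetweb port 7480):

curl http://localhost:7480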
Hi all!
2 clusters: jewel vs kraken
What is the best (not necessarily the best, but a working) way to migrate jewel rgw.pool.data -> kraken rgw.pool.data,
without touching the jewel cluster that is to be upgraded?
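One possibility I am considering (only a sketch, pool names are placeholders, and RGW also has index/meta pools that would need the same treatment) is rados export/import:

# on a node of the jewel cluster
rados -p default.rgw.buckets.data export rgw-data.bin
# copy rgw-data.bin to a node of the kraken cluster, then
rados -p default.rgw.buckets.data import rgw-data.bin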
--
Petr Malkov
Hello!
I'm looking for a method to make an RGW SSD cache tier in front of HDD.
https://blog-fromsomedude.rhcloud.com/2015/11/06/Ceph-RadosGW-Placement-Targets/
I successfully created the RGW pools on SSD as described above,
and the placement targets are written to the users' profiles,
so data can be written to any HDD pool.
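If a real cache tier on top of the HDD data pool is the way to go, as far as I understand it that would look roughly like this (pool names are placeholders; ssd-cache would be a replicated pool placed on the SSD OSDs):

# attach the SSD pool as a writeback cache in front of the HDD data pool
ceph osd tier add default.rgw.buckets.data ssd-cache
ceph osd tier cache-mode ssd-cache writeback
ceph osd tier set-overlay default.rgw.buckets.data ssd-cache
# minimal cache tier tuning; the size limit is just an example value
ceph osd pool set ssd-cache hit_set_type bloom
ceph osd pool set ssd-cache target_max_bytes 100000000000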