[ceph-users] Re: 100.00 Usage for ssd-pool (maybe after: ceph osd crush move .. root=default)

2023-11-08 Thread Denny Fuchs
Hi, I also overlooked this: == root@fc-r02-ceph-osd-01:[~]: ceph -s cluster: id: cfca8c93-f3be-4b86-b9cb-8da095ca2c26 health: HEALTH_OK services: mon: 5 daemons, quorum

[ceph-users] Re: 100.00 Usage for ssd-pool (maybe after: ceph osd crush move .. root=default)

2023-11-08 Thread Denny Fuchs
Hi, I forgot to include the command I used: = ceph osd crush move fc-r02-ceph-osd-01 root=default ceph osd crush move fc-r02-ceph-osd-01 root=default ... = and I also found this parameter: === root@fc-r02-ceph-osd-01:[~]: ceph osd crush tree --show-shadow ID CLASS
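A minimal sketch (not from the thread itself) of how one might check whether the ssd-pool's reported 100.00 usage comes from a crush rule that still targets the old, now-empty root after the move; the rule name fc-r02-ssd is taken from the rule list quoted later in this thread, everything else is illustrative:

=
# check which crush rule each pool uses and which root that rule draws from;
# a rule still pointing at an emptied root would show up as 0 MAX AVAIL / 100% used
ceph osd pool ls detail                # shows the crush_rule id assigned to each pool
ceph osd crush rule dump fc-r02-ssd    # the "item_name" in the rule's steps shows the root it takes
ceph df                                # per-pool MAX AVAIL reflects that rule's root
=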

[ceph-users] 100.00 Usage for ssd-pool (maybe after: ceph osd crush move .. root=default)

2023-11-08 Thread Denny Fuchs
Hello, we upgraded to Quincy and tried to remove an obsolete part: in the early days of Ceph there were no device classes, so we created rules to split HDDs and SSDs in one of our datacenters. https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
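As a hedged sketch of the modern replacement for that hand-built split (the rule names below are illustrative; only the ssd-pool name appears in the thread): since Luminous, device classes let the crush rule itself select hdd or ssd OSDs, so separate roots per media type are no longer needed:

=
# create replicated rules that select OSDs by device class
# syntax: ceph osd crush rule create-replicated <name> <root> <failure-domain> <class>
ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd crush rule create-replicated replicated_hdd default host hdd
# then point each pool at the matching rule (pool name illustrative)
ceph osd pool set ssd-pool crush_rule replicated_ssd
=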

[ceph-users] Drop old SDD / HDD Host crushmap rules

2021-06-16 Thread Denny Fuchs
Hello, I have had very old crush map rules on one DC from the beginning, splitting HDD and SSD disks. They have been obsolete since Luminous and I want to drop them: # ceph osd crush rule ls replicated_rule fc-r02-ssdpool fc-r02-satapool fc-r02-ssd = [ { "rule_id": 0,
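A hedged sketch of the drop itself, using the rule names listed above and assuming every pool has already been switched to a device-class rule (Ceph should refuse to remove a rule that a pool still references):

=
# check which rule id each pool currently uses
ceph osd pool ls detail | grep crush_rule
# once no pool references the old rules any more, remove them by name
ceph osd crush rule rm fc-r02-ssdpool
ceph osd crush rule rm fc-r02-satapool
ceph osd crush rule rm fc-r02-ssd
=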