[ceph-users] Re: 100.00 Usage for ssd-pool (maybe after: ceph osd crush move .. root=default)

2023-11-08 Thread David C.
Hi, it seems to me that before removing buckets from the crushmap, the migration needs to be done first. I think you should restore the initial crushmap, adding the default root next to it, and only then do the migration. There will likely be some backfill (probably a lot).
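A minimal sketch of how the crushmap could be exported, edited to re-add the original root next to default, and re-injected (the file names here are only placeholders):

===
# export and decompile the current crushmap
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# edit crushmap.txt to re-add the original root/bucket hierarchy
# next to the default root, then recompile and inject it
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
===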

[ceph-users] Re: 100.00 Usage for ssd-pool (maybe after: ceph osd crush move .. root=default)

2023-11-08 Thread David C.
I probably answered too quickly, if the migration is already complete and there were no incidents. Are the PGs active+clean? Regards, *David CASIER* On Wed, Nov 8, 2023 at 11:50, Dav
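For reference, a quick way to check the PG states could look like this (standard commands, not taken from the thread):

===
# summary of PG states
ceph pg stat
# list any PGs that are not active+clean
ceph pg dump pgs_brief | grep -v 'active+clean'
===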

[ceph-users] Re: 100.00 Usage for ssd-pool (maybe after: ceph osd crush move .. root=default)

2023-11-08 Thread Denny Fuchs
hi,

I forgot to write the command; I used:

=
ceph osd crush move fc-r02-ceph-osd-01 root=default
ceph osd crush move fc-r02-ceph-osd-01 root=default
...
=

and I also found this parameter:

===
root@fc-r02-ceph-osd-01:[~]: ceph osd crush tree --show-shadow
ID CLASS WEIGHT
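To sanity-check the result of the move, something along these lines might help (standard commands; the shadow root name default~ssd is just an example of what --show-shadow prints):

===
# device classes known to the cluster
ceph osd crush class ls
# full tree including per-class shadow roots (e.g. default~ssd)
ceph osd crush tree --show-shadow
# usage and weight per bucket, to spot an empty or full root
ceph osd df tree
===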

[ceph-users] Re: 100.00 Usage for ssd-pool (maybe after: ceph osd crush move .. root=default)

2023-11-08 Thread Denny Fuchs
Hi,

I also overlooked this:

==
root@fc-r02-ceph-osd-01:[~]: ceph -s
  cluster:
    id: cfca8c93-f3be-4b86-b9cb-8da095ca2c26
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum fc-r02-ceph-osd-01,fc-r02-ceph-osd-02,fc-r02-ceph-osd-03,fc-r02-ceph-osd-05
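Given the 100.00% usage reported for the ssd-pool, it may also be worth comparing the per-pool numbers with the crush rules the pools map to; if a pool's rule points at a root that no longer contains any OSDs, its MAX AVAIL can drop to 0 and the pool can show up as 100% used. A quick way to check (standard commands, not from the thread):

===
# per-pool usage and MAX AVAIL, as computed against each pool's crush rule
ceph df detail
# which crush rule each pool currently uses
ceph osd pool ls detail
===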

[ceph-users] Re: 100.00 Usage for ssd-pool (maybe after: ceph osd crush move .. root=default)

2023-11-08 Thread David C.
So the next step is to place the pools on the right rule:

ceph osd pool set db-pool crush_rule fc-r02-ssd

On Wed, Nov 8, 2023 at 12:04, Denny Fuchs wrote:
> hi,
>
> I forgot to write the command; I used:
>
> =
> ceph osd crush move fc-r02-ceph-osd-01 root=default
> ceph osd crush
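A possible sequence around that step, assuming the fc-r02-ssd rule already exists (only the pool-set command is from the thread; the rest is a sketch for verification):

===
# verify the rule exists and targets the intended root/device class
ceph osd crush rule ls
ceph osd crush rule dump fc-r02-ssd

# point the pool at it and confirm
ceph osd pool set db-pool crush_rule fc-r02-ssd
ceph osd pool get db-pool crush_rule

# watch the resulting backfill
ceph -s
===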