[ceph-users] Re: objects misplaced jumps up at 5%

2020-09-29 Thread 胡 玮文
Hi, I’ve just read a post that describes the exact behavior you describe. https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/ There is a config option named target_max_misplaced_ratio, which defaults to 5%. You can change this to accelerate the remap process. Hope that’s helpful.
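
For example, a minimal sketch of adjusting it (assuming a Nautilus-or-later cluster using the centralized config store; the 0.07 value is purely illustrative):

  # show the current value (default 0.05, i.e. 5%)
  ceph config get mgr target_max_misplaced_ratio
  # allow up to 7% of objects to be misplaced at any one time
  ceph config set mgr target_max_misplaced_ratio 0.07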

[ceph-users] Re: hdd pg's migrating when converting ssd class osd's

2020-09-29 Thread Frank Schilder
Somebody on this list posted a script that can convert pre-mimic crush trees with buckets for different types of devices to crush trees with device classes with minimal data movement (trying to maintain IDs as much as possible). I don't have the thread name right now, but I could try to find it
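
This sounds like the 'crushtool --reclassify' workflow added in Nautilus for exactly this migration; a rough sketch only (the root name and the '%-ssd' bucket-suffix pattern below are assumptions, adjust them to your map):

  ceph osd getcrushmap -o original.map
  # move the plain 'default' root to class hdd, and fold '*-ssd' buckets into class ssd under default
  crushtool -i original.map --reclassify \
      --reclassify-root default hdd \
      --reclassify-bucket %-ssd ssd default \
      -o adjusted.map
  # check how much data would move before injecting the new map
  crushtool -i original.map --compare adjusted.map
  ceph osd setcrushmap -i adjusted.map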

[ceph-users] Re: hdd pg's migrating when converting ssd class osd's

2020-09-29 Thread Frank Schilder
Are these crush maps inherited from pre-mimic versions? I have re-balanced SSD and HDD pools in mimic (mimic deployed) where one device class never influenced the placement of the other. I have mixed hosts and went as far as introducing rbd_meta, rbd_data and such classes to sub-divide even
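
For reference, a hedged sketch of class-scoped rules sharing one root, where changes to one class should not move data in the other (rule, pool and class names here are just examples, not Frank's actual setup):

  # one replicated rule per device class, both rooted at 'default'
  ceph osd crush rule create-replicated rule-ssd default host ssd
  ceph osd crush rule create-replicated rule-hdd default host hdd
  # point each pool at the class it should live on
  ceph osd pool set pool-meta crush_rule rule-ssd
  ceph osd pool set pool-data crush_rule rule-hdd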

[ceph-users] Re: hdd pg's migrating when converting ssd class osd's

2020-09-29 Thread Marc Roos
Yes, correct, the hosts indeed have both SSDs and HDDs combined. Is this not more of a bug then? I would assume the goal of using device classes is that you separate these and one does not affect the other; the host weights of the ssd and hdd classes are even already available. The algorithm

[ceph-users] Re: objects misplaced jumps up at 5%

2020-09-29 Thread Matt Larson
Continuing on this topic, is it only possible to increase the placement group (PG) count quickly, while the associated placement groups for placement (PGP) value can only be increased in small increments of 1-3? Does each increase of the PGP value require rebalancing and backfilling lots of PGs again? I am
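
A hedged sketch of how this looks on Nautilus or later (pool name is hypothetical): you set the target once and the mgr walks pgp_num towards it in small steps, throttled by target_max_misplaced_ratio:

  ceph osd pool set mypool pg_num 4096   # records the new target
  # current vs target values are visible here while the mgr catches up
  ceph osd pool ls detail | grep mypool
  ceph osd pool get mypool pgp_num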

[ceph-users] Re: Orchestrator cephadm not setting CRUSH weight on OSD

2020-09-29 Thread Eugen Block
Thank you for the update and the links, I appreciate it. Quoting Robert Sander: On 29.09.20 13:29, Eugen Block wrote: did you find an explanation for this? There is a bug report open here: https://tracker.ceph.com/issues/45594 and a related one here:

[ceph-users] Re: Orchestrator cephadm not setting CRUSH weight on OSD

2020-09-29 Thread Robert Sander
On 29.09.20 13:29, Eugen Block wrote: > did you find an explanation for this? There is a bug report open here: https://tracker.ceph.com/issues/45594 and a related one here: https://tracker.ceph.com/issues/47366 It looks like the orchestrator somehow remembers the weight of 0 after removing the
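
If the weight really is stuck at 0 after re-adding the OSD, a hedged workaround sketch (the OSD id and weight below are illustrative; the weight should normally match the device size in TiB):

  # spot OSDs with a CRUSH weight of 0
  ceph osd df tree
  # restore the expected weight for the re-created OSD
  ceph osd crush reweight osd.12 1.81940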

[ceph-users] Re: Orchestrator cephadm not setting CRUSH weight on OSD

2020-09-29 Thread Eugen Block
Hi, did you find an explanation for this? I saw something similar on a customer's cluster. They reprovisioned OSDs (I don't know if any OSD-ID was reused) on one host with smaller disk sizes (size was changed through the raid controller to match the other hosts in that cluster) and they

[ceph-users] Re: objects misplaced jumps up at 5%

2020-09-29 Thread Jake Grimmett
Hi Paul, I think you found the answer! When adding 100 new OSDs to the cluster, I increased both pg and pgp from 4096 to 16,384:

  [root@ceph1 ~]# ceph osd pool set ec82pool pg_num 16384
  set pool 5 pg_num to 16384
  [root@ceph1 ~]# ceph osd pool set ec82pool
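
A hedged way to watch the mgr step pgp_num towards the new target while misplaced objects hover around the 5% ceiling:

  # pgp_num will lag behind 16384 until the split work finishes
  ceph osd pool get ec82pool pgp_num
  ceph osd pool ls detail | grep ec82pool
  ceph -s | grep misplaced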

[ceph-users] Re: hdd pg's migrating when converting ssd class osd's

2020-09-29 Thread Eugen Block
They're still in the same root (default) and each host is a member of both device classes. I guess you have a mixed setup (hosts c01/c02 have both HDDs and SSDs)? I don't think this separation is enough to avoid remapping even if a different device-class is affected (your report confirms

[ceph-users] Re: hdd pg's migrating when converting ssd class osd's

2020-09-29 Thread Marc Roos
I have practically a default setup. If I do a 'ceph osd crush tree --show-shadow' I have a listing like this [1]. I would assume from the hosts being listed within the default~ssd and default~hdd, they are separate (enough)?

  [1] root default~ssd
          host c01~ssd
          ..
          ..
          host c02~ssd
          ..
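
A hedged way to check whether the pools' rules actually take those class-specific shadow roots (i.e. default~hdd / default~ssd), which is what keeps one class from influencing the other's placement:

  # each rule's 'take' step should name a shadow root such as default~hdd
  ceph osd crush rule dump | grep -E '"rule_name"|"item_name"'
  # per-class host weights are visible in the shadow tree
  ceph osd crush tree --show-shadow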
