[ceph-users] Re: Ceph Bluestore tweaks for Bcache

2022-04-07 Thread Richard Bade
Hi Frank, Yes, I think you have got to the crux of the issue.
> - some_config_value_hdd is used for "rotational=0" devices and
> - osd/class:hdd values are used for "device_class=hdd" OSDs
The class is something that is user defined and you can actually define your own class names. By default the
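A quick way to see both attributes on a given OSD (generic example commands, not part of the original message; OSD id 0 is a placeholder):
  # rotational flag as detected by the OSD, which is what the *_hdd/*_ssd defaults key off
  ceph osd metadata 0 | grep rotational
  # CRUSH device class, which is what an osd/class:hdd config mask keys off
  ceph osd tree | grep 'osd\.0 '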

[ceph-users] Low performance on format volume

2022-04-07 Thread Iban Cabrillo
Dear, Some users are noticing low performance, especially when formatting large volumes (around 100 GB). Apparently the system is healthy and no errors are detected in the logs:
[root@cephmon01 ~]# ceph health detail
HEALTH_OK
except this one that I see repeatedly in one of the OSD servers
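Two generic checks that are often used to narrow down a slow OSD (not taken from this thread; osd.12 is a placeholder id):
  # per-OSD commit/apply latency overview
  ceph osd perf
  # on the suspect OSD's host: recent slow operations via the admin socket
  ceph daemon osd.12 dump_historic_ops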

[ceph-users] Re: Ceph status HEALTH_WARN - pgs problems

2022-04-07 Thread Dominique Ramaekers
Hi Eugen, I recreated the OSDs on hosts hvs001 and hvs003 on LVM volumes with the exact same size. Now everything seems great. The OSDs on hvs002 are still larger, but I see no complaints from Ceph, so I'll leave it as it is for now. Thanks for the help (especially for stopping me from trying complex
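One way to compare OSD sizes and utilization per host after recreating them (a generic example, not quoted from the message):
  # shows size, raw use, and PG count per OSD, grouped by host
  ceph osd df tree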

[ceph-users] Re: Ceph PGs stuck inactive after rebuild node

2022-04-07 Thread Dominique Ramaekers
Hi, I used the procedure below to remove my OSDs (to be able to recreate them). I just want to add that I had to call 'cephadm rm-daemon --name osd.$id --fsid $fsid --force' to remove the OSD service. If I didn't do that, I got a warning on the recreation of the OSD (already exists?) and the re
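For context, a sketch of the full removal sequence this refers to, assuming a cephadm-managed cluster ($id and $fsid are placeholders as in the message):
  # mark the OSD out, then remove it from the CRUSH map, auth, and the OSD map
  ceph osd out $id
  ceph osd purge $id --yes-i-really-mean-it
  # as noted above, also remove the daemon itself so the recreation does not complain
  cephadm rm-daemon --name osd.$id --fsid $fsid --force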

[ceph-users] Re: Ceph Bluestore tweaks for Bcache

2022-04-07 Thread Richard Bade
Hi Frank, I can't speak for the bluestore debug enforce settings as I don't have this setting but I would guess it's the same. What I do for my settings is to set them for the hdd class (ceph config set osd/class:hdd bluestore_setting_blah=blahblah. I think that's the correct syntax, but I'm not cu
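For reference, the device-class mask takes the option name and value as separate arguments rather than option=value; a generic example (the option and value are only illustrative):
  ceph config set osd/class:hdd bluestore_prefer_deferred_size_hdd 65536
  # verify what a particular OSD actually ends up with
  ceph config get osd.0 bluestore_prefer_deferred_size_hdd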

[ceph-users] Re: Ceph status HEALTH_WARN - pgs problems

2022-04-07 Thread Eugen Block
The PGs are not activating because of the uneven OSD weights:
root@hvs001:/# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME         STATUS  REWEIGHT  PRI-AFF
-1         1.70193  root default
-3         0.00137      host hvs001
 0    hdd  0.00069          osd.0         up       1.0      1.0
 1    hdd
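If a CRUSH weight does need adjusting, the usual command looks like this (osd.0 and the weight are placeholders, not part of the quoted output; by default the weight reflects the device size in TiB):
  ceph osd crush reweight osd.0 0.00069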

[ceph-users] Re: Ceph status HEALTH_WARN - pgs problems

2022-04-07 Thread Dominique Ramaekers
Hi Eugen, You say I don't have to worry about changing pg_num manually. Makes sense. Does this also apply to pg_num_max? Will the pg_autoscaler also change this parameter if necessary? Below is the output you requested:
root@hvs001:/# ceph -s
  cluster:
    id: dd4b0610-b4d2-11ec-bb58-d1b32ae
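For completeness, pg_num_max is a per-pool cap on what the autoscaler may choose rather than something the autoscaler changes itself; assuming a release that supports it, it can be inspected and set like this (pool name and value are placeholders):
  # show the autoscaler's view of every pool
  ceph osd pool autoscale-status
  # cap the number of PGs the autoscaler may grow a pool to
  ceph osd pool set <pool> pg_num_max 128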

[ceph-users] Re: Ceph status HEALTH_WARN - pgs problems

2022-04-07 Thread Eugen Block
Hi, please add some more output, e.g.:
ceph -s
ceph osd tree
ceph osd pool ls detail
ceph osd crush rule dump (of the used rulesets)
You have the pg_autoscaler enabled, so you don't need to deal with pg_num manually. Quoting Dominique Ramaekers: Hi, My cluster is up and running. I saw a