> On Feb 28, 2024, at 17:55, Joel Davidow <jdavi...@nso.edu> wrote:
>
> Current situation
> -----------------
> We have three Ceph clusters that were originally built via cephadm on Octopus
> and later upgraded to Pacific. All OSDs are HDD (we will be moving WAL+DB to
> SSD) and were resharded after the upgrade to enable RocksDB sharding.
>
> The value for bluefs_shared_alloc_size has remained unchanged at 65536.
>
> The value for bluestore_min_alloc_size_hdd was 65536 in Octopus but is
> reported as 4096 by ceph daemon osd.<id> config show in Pacific.
min_alloc_size is baked into a given OSD when it is created. The central
config / runtime value does not affect behavior for existing OSDs. The only
way to change it is to destroy / redeploy the OSD.
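On a cephadm-managed cluster the rebuild cycle looks roughly like this (a rough
sketch; osd.12 is an illustrative id, and you should verify placement and
cluster health before zapping anything):

```shell
# Set the desired min_alloc_size for future HDD OSD builds. This only
# affects OSDs created after this point, never existing ones.
ceph config set osd bluestore_min_alloc_size_hdd 4096

# Remove the OSD and zap its device; --replace keeps the OSD id
# reserved so the orchestrator redeploys onto the same slot.
ceph orch osd rm 12 --replace --zap

# The recreated OSD is built with the new min_alloc_size. Let
# recovery/backfill finish before moving on to the next OSD.
```

Doing this one OSD (or one failure domain) at a time keeps the data fully
redundant throughout.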
There was a succession of PRs in the Octopus / Pacific timeframe around default
min_alloc_size for HDD and SSD device classes, including IIRC one temporary
reversion.
> However, the OSD label after upgrading to Pacific retains the value of 65536
> for bfm_bytes_per_block.
OSD label?
I'm not sure whether your Pacific release has the backport, but not long ago
ceph osd metadata was amended to report the min_alloc_size that a given OSD
was built with. If you don't have that, the OSD's startup log should report it.
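For example, something like the following (assuming osd.12 and a standard OSD
data path; both are illustrative and the systemd unit name differs under
cephadm):

```shell
# With the backport, the metadata dump includes the built-with value:
ceph osd metadata 12 | grep -i alloc_size

# Without it, read the BlueStore label directly: bfm_bytes_per_block is
# the bitmap freelist's block size, i.e. the baked-in min_alloc_size.
ceph-bluestore-tool show-label \
    --dev /var/lib/ceph/osd/ceph-12/block | grep bfm_bytes_per_block

# Or check the OSD's startup log (package-based install shown here):
journalctl -u ceph-osd@12 | grep -i min_alloc_size
```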
-- aad
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io