Hi,

I've read CERN's recommendation about reducing the number of cached OSD maps (
https://cds.cern.ch/record/2015206/files/CephScaleTestMarch2015.pdf, page
3) and I would like to know whether there is any negative impact from
these changes:

[global]
osd map message max = 10

[osd]
osd map cache size = 20
osd map max advance = 10
osd map share max epochs = 10
osd pg epoch persisted max stale = 10
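
For what it's worth, these can also be tried on a running cluster before committing them to ceph.conf. A rough sketch, assuming an admin keyring and that the injectargs/daemon interfaces behave as they did on Hammer (the OSD id `0` below is just an example):

```shell
# Inject the tighter map-cache settings into all OSDs at runtime
# (takes effect without a restart; lost on OSD restart unless also
# written to ceph.conf).
ceph tell osd.* injectargs \
    '--osd-map-cache-size 20 --osd-map-max-advance 10'

# Verify what a given OSD is actually using via its admin socket.
ceph daemon osd.0 config get osd_map_cache_size
```

That way the memory/behaviour impact can be observed and rolled back easily before making the change permanent.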


We are running Hammer with nowhere close to 7000 OSDs, but I don't want
to waste memory on OSD maps that are not needed.

Are there any large production deployments running with these or similar
settings?

Thank you,
George
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com