[ceph-users] Re: RAM recommendation with large OSDs?

2019-10-03 Thread Anthony D'Atri
It’s not that the limit is *ignored*; sometimes the failure of the subtree isn’t *detected*. E.g., I’ve seen this happen when a node experienced kernel weirdness or OOM conditions such that the OSDs didn’t all get marked down at the same time, so the PGs all started recovering. Admittedly it’s b…
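For context, the limit under discussion is mon_osd_down_out_subtree_limit. A minimal sketch of inspecting and changing it on a Nautilus-era cluster with the centralized config store (the "host" value is illustrative):

    # Show the current limit (default "rack"): if an entire CRUSH subtree
    # of this type or larger goes down at once, its OSDs are NOT
    # automatically marked out.
    ceph config get mon mon_osd_down_out_subtree_limit

    # Illustrative: also suppress auto-out for whole-host failures
    ceph config set mon mon_osd_down_out_subtree_limit host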

[ceph-users] Re: RAM recommendation with large OSDs?

2019-10-03 Thread Darrell Enns
…d? Do these exceptions also apply to mon_osd_min_in_ratio? Is this in the docs somewhere?
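For reference, a sketch of how one might check and adjust mon_osd_min_in_ratio, the floor on the fraction of OSDs that must remain "in" before the monitors stop marking more out (the value shown is illustrative):

    # Monitors will not mark further OSDs "out" once doing so would drop
    # the "in" fraction below this ratio.
    ceph config get mon mon_osd_min_in_ratio

    # Illustrative: keep at least 75% of OSDs "in"
    ceph config set mon mon_osd_min_in_ratio 0.75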

[ceph-users] Re: RAM recommendation with large OSDs?

2019-10-02 Thread Anthony D'Atri
This is in part a question of *how many* of those dense OSD nodes you have. If you have a hundred of them, then most likely they’re spread across a decent number of racks and the loss of one or two is a tolerable *fraction* of the whole cluster. If you have a cluster of just, say, 3-4 of these…
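Concretely (numbers illustrative, not from the thread): with 100 nodes of 60 OSDs each, losing one node takes out 1% of the cluster and the recovery load spreads across the other 99; with only 4 such nodes, the same single failure removes 25% of capacity in one event.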

[ceph-users] Re: RAM recommendation with large OSDs?

2019-10-02 Thread Darrell Enns
Thanks Paul. I was speaking more about total OSDs and RAM, rather than a single node. However, I am considering building a cluster with a large OSD/node count. This would be for archival use, with reduced performance and availability requirements. What issues would you anticipate with a large OS…

[ceph-users] Re: RAM recommendation with large OSDs?

2019-10-01 Thread Paul Emmerich
The problem with lots of OSDs per node is that this usually means you have too few nodes. It's perfectly fine to run 60 OSDs per node if you've got a total of 1000 OSDs or so. But I've seen too many setups with 3-5 nodes where each node runs 60 OSDs, which makes no sense (and usually isn't even cheaper)…
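To illustrate why (the replication factor and failure domain here are my assumptions, not stated in the thread): with 3x replication and the usual host-level failure domain, a 3-node cluster that loses one node has only 2 hosts left, so degraded PGs cannot re-replicate until the node returns; a 4-node cluster is left with exactly 3 hosts and no headroom to rebalance onto.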

[ceph-users] Re: RAM recommendation with large OSDs?

2019-10-01 Thread Paul Emmerich
On Tue, Oct 1, 2019 at 6:12 PM Darrell Enns wrote:
> The standard advice is “1GB RAM per 1TB of OSD”. Does this actually still hold with large OSDs on bluestore?

No

> Can it be reasonably reduced with tuning?

Yes

> From the docs, it looks like bluestore should target the “osd_memory_target…
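A minimal sketch of the tuning in question, assuming a Nautilus-era cluster with the centralized config store (sizes illustrative):

    # BlueStore autotunes its caches so each OSD process stays near this
    # memory target regardless of OSD capacity; the default is 4 GiB.
    ceph config get osd osd_memory_target

    # Illustrative: cap each OSD at ~2 GiB on a RAM-constrained node
    ceph config set osd osd_memory_target 2147483648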